
Scraping Hidden Data

I'm genuinely excited to write this particular entry, for two reasons.  One is that it's been some time since I've jumped back into NodeJS; the other is that this is my first attempt at writing something using Puppeteer.  I needed to scrape information about products on a particular website, and while exploring in the Chrome Developer Tools I noticed that, on each iteration of the page parameter in the query string, the page made an API request for exactly the granular product information I needed.

It was extremely difficult to work out how this URL was being constructed, so I couldn't simply know what to request.  But I quickly came to the conclusion that if the information appeared in the developer tools, then it could somehow be extracted, and I remembered this little tool existed, so I hopped to work!

So the premise was rather simple: intercept the requests the browser makes, the ones you find in the network tab.  I knew how the path was formatted at the start of the string; I just didn't know the product IDs at the end, which were exactly what I was scraping for.  So, after much research, I first came up with the following initial function.
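The original function isn't reproduced here, so what follows is a minimal sketch of the idea under my own assumptions: the path prefix `/api/v1/products`, the helper names, and the launch options are all hypothetical, not the real site's values.

```javascript
// Hypothetical API path prefix; the real prefix is not shown in this post.
const API_PATH_PREFIX = '/api/v1/products';

// Pure helper: decide whether a request URL is one we want to capture.
function shouldCapture(url) {
  return url.includes(API_PATH_PREFIX);
}

// Launch a headless browser, intercept every request the page makes,
// and collect the URLs that match our known API path prefix.
async function captureApiUrls(pageUrl) {
  // Required lazily so the pure helper above works without Puppeteer installed.
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  const captured = [];

  await page.setRequestInterception(true);
  page.on('request', (request) => {
    if (shouldCapture(request.url())) {
      captured.push(request.url());
    }
    request.continue(); // always continue the request, or the page hangs
  });

  await page.goto(pageUrl, { waitUntil: 'networkidle2' });
  await browser.close();
  return captured;
}

module.exports = { shouldCapture, captureApiUrls };
```

Usage would be something like `captureApiUrls('https://example.com/products?page=1')`, which resolves to the list of matching API URLs that page triggered.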
Apologies, this is a stripped-down version of the finished result, but let me walk you through it...

We launch Puppeteer and enable setRequestInterception (I appreciate this needs more documentation).  When we goto a page, every resource request the page makes fires our request event, and if the URL contains the path string we've specified, we capture it.  In this example, I iterate through all the pages until I hit a page that no longer makes this particular API request.
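That iterate-until-no-request loop can be sketched generically.  Here `capturePage` stands in for the Puppeteer capture step described above; the `page` query parameter name matches the post, but the rest is my own framing:

```javascript
// Keep incrementing the `page` query parameter and stop as soon as a page
// triggers no matching API request. `capturePage(url)` is assumed to resolve
// to the list of captured API URLs for that page.
async function collectAllPages(baseUrl, capturePage) {
  const all = [];
  for (let pageNum = 1; ; pageNum++) {
    const urls = await capturePage(`${baseUrl}?page=${pageNum}`);
    if (urls.length === 0) break; // no API call fired: we've run out of pages
    all.push(...urls);
  }
  return all;
}

module.exports = { collectAllPages };
```

Separating the loop from the browser work like this also makes it trivial to dry-run the pagination logic with a stubbed-out `capturePage`.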

Of course, the API endpoints I'm capturing here are then processed in a separate task, and the responses contain the JSON I'm after harvesting.
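As a rough sketch of that separate task, under two assumptions of mine (that each endpoint ends in a product ID as its last path segment, and that a global `fetch` is available, as in Node 18+):

```javascript
// Pull the trailing product ID out of a captured endpoint URL.
// The URL shape here (ID as last path segment) is an assumption.
function productIdFromUrl(apiUrl) {
  const { pathname } = new URL(apiUrl);
  return pathname.split('/').filter(Boolean).pop();
}

// Fetch each captured endpoint and keep its JSON body alongside the ID.
// Assumes the built-in fetch from Node 18+.
async function harvest(apiUrls) {
  const results = [];
  for (const url of apiUrls) {
    const res = await fetch(url);
    results.push({ id: productIdFromUrl(url), data: await res.json() });
  }
  return results;
}

module.exports = { productIdFromUrl, harvest };
```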

As always, hope this inspires.
