
Liberating Light Data

On to a new topic today: we've all heard the buzz about "Big Data" and querying it at lightning speed, but what are some of the best ways of working with highly mutable but tiny data sets? I'll lead you in by example on this one. I recently had a custom-developed PIM (Product Information Management) system, and as part of that system I also kept a database of our stores and their respective catalogue versions.

For a while, it was good - until it wasn't. It didn't scale effectively in the long term, and the PIM should have only been just that - not a hybrid of extraneous bits and pieces of data. So I'm going to fast forward to the solution it's now been moved to: I've created a workbook in Google Sheets and tied the integrations to this. Not only does this suit a data set of this size incredibly well, but you also get the luxury of cell-by-cell history out of the box.

So, how to proceed? First you need to add the component using Composer:
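The original command isn't shown here, but for the Google API client for PHP it would be along these lines (package name assumed to be the official google/apiclient):

```shell
composer require google/apiclient
```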
Keep in mind that this package pulls in a number of dependencies of its own, which Composer will download and add to your project. A little annoying, as even these components by themselves seem to be too big for a PHP Lambda function...

Next, you will need to create a service account in the Google Cloud Platform. There is a well-detailed guide which can be found here. Keep the JSON key file it creates and the service account's email address safe.

From there, you can start to prepare the PHP wrapper class to interact with your sheet.
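The original snippet isn't shown, so here's a minimal sketch of what that wrapper's constructor might look like, assuming the google/apiclient package and a service-account key file saved as service-account.json (the class and file names are mine, not from the original post):

```php
<?php
require __DIR__ . '/vendor/autoload.php';

class SheetStore
{
    private $_sheets;

    public function __construct()
    {
        $client = new Google_Client();
        // Authenticate with the JSON key file from the service account
        $client->setAuthConfig(__DIR__ . '/service-account.json');
        // Read-only scope is enough for a simple get endpoint
        $client->setScopes([Google_Service_Sheets::SPREADSHEETS_READONLY]);

        $this->_sheets = new Google_Service_Sheets($client);
    }
}
```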
This is our constructor, and it prepares you to start using the APIs off the _sheets object. To then access a particular Google Sheet document, you need its spreadsheet ID. Open one in your browser and inspect the URL:

https://docs.google.com/spreadsheets/d/<LONG TOKEN HERE>/edit#gid=0

The token is what you will be using.  I find it useful to create a private array variable of my sheet "collections" for easy naming reference:
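Something along these lines - the ID is the placeholder token from the URL above, and the property name is my own choice:

```php
// Map friendly sheet names to their spreadsheet IDs
private $_collections = [
    'sheet1' => '<LONG TOKEN HERE>',
];
```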

Now you can try and create a simple get method for easy access to your data:
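A sketch of such a get method, assuming the _sheets service and _collections map described above - spreadsheets_values->get() is the standard call in the google/apiclient Sheets service:

```php
public function get($sheet, $range)
{
    // Unknown collection name - nothing to fetch
    if (!isset($this->_collections[$sheet])) {
        return null;
    }

    // Fetch the values for the given A1-notation range
    $response = $this->_sheets->spreadsheets_values->get(
        $this->_collections[$sheet],
        $range
    );

    return $response->getValues();
}
```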
That almost covers it - now you'll just need to share the spreadsheet with the service account's email address, and you're all set. I've used query string parameters here, so you'll be able to call it like this:

http://your-server.com/endpoint.php?sheet=sheet1&range=A1:G5
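The glue for that endpoint could look something like this minimal sketch, assuming the wrapper class lives in SheetStore.php (file and class names are my assumptions):

```php
<?php
// endpoint.php - thin JSON endpoint over the Sheets wrapper
require __DIR__ . '/SheetStore.php';

header('Content-Type: application/json');

$store = new SheetStore();
$rows  = $store->get($_GET['sheet'] ?? '', $_GET['range'] ?? '');

echo json_encode($rows);
```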

You should then get a JSON array of the data contained in those rows. I hope everything goes well for you and that I've saved you a few hours of research!

Cheers,
Jess. 
