Revitalising your Data

It's a brave new world, where data is king. Sometimes, though, that data lives in a platform that doesn't let you use it as effectively as you could. Today, I'm going to show you a way to programmatically extract it and transfer it to a much richer set of analysis tools.

The Task

I was handed a project to move all of the data from Campaign Monitor to Google Data Studio. No surprise, this reporting platform has a massive list of data connectors, and after a little research I decided to go with Google Cloud Storage - it just seemed to be the most flexible option.

The Extraction

I won't be able to go into great detail on how I've built my Campaign Monitor class, since it's intellectual property, but I can point you to the relevant documentation. I was asked to focus on just three streams of data: Opens, Clicks and Unsubscribes.

Be sure to follow the Getting Started guide, using the most appropriate level of authentication. Since this isn't a web application, I've gone with the API key approach rather than OAuth. For each stream, you'll want to work through the list of individual campaigns; it's there that you can drill down into these three streams:
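Something like the following rough sketch. The endpoints come from the Campaign Monitor REST documentation; the API version, credentials and dates are placeholders, so treat this as an illustration rather than my production class:

    <?php
    // Rough sketch: pulling the three streams from the Campaign Monitor REST API.
    // CM_API_KEY and CM_CLIENT_ID are placeholders for your own credentials.
    define('CM_API_KEY', 'your-api-key');
    define('CM_CLIENT_ID', 'your-client-id');

    function cmGet(string $path, array $params = []): array
    {
        $url = 'https://api.createsend.com/api/v3.3/' . $path
             . ($params ? '?' . http_build_query($params) : '');

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        // The API key doubles as the basic-auth username; the password is ignored.
        curl_setopt($ch, CURLOPT_USERPWD, CM_API_KEY . ':x');
        $body = curl_exec($ch);
        curl_close($ch);

        return json_decode($body, true) ?: [];
    }

    // Every campaign the client has sent...
    $response  = cmGet('clients/' . CM_CLIENT_ID . '/campaigns.json');
    $campaigns = $response['Results'] ?? $response;  // newer API versions page this list

    // ...and then the three streams for each of them.
    foreach ($campaigns as $campaign) {
        foreach (['opens', 'clicks', 'unsubscribes'] as $stream) {
            $page = cmGet("campaigns/{$campaign['CampaignID']}/{$stream}.json", [
                'date'     => '2019-01-01',  // only events from this date onwards
                'page'     => 1,
                'pagesize' => 1000,
            ]);
            // $page['Results'] holds one row per subscriber event.
        }
    }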
You'll notice that they share common parameters, in particular the date, which controls how far back the delivered campaigns are included from. From those results, you can then get a further breakdown of the subscriber data for each recipient, flattened out as in the sketch below.
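Here's one way to flatten a stream into a file ready for upload, walking the pages until they're exhausted. The CSV layout is my own choice for illustration, and the field names follow the documented opens response; adjust both to your own needs:

    // Sketch: one CSV row per subscriber open event for a single campaign.
    // Reuses cmGet() from above; $campaignId is assumed to be set.
    $fh = fopen("opens-{$campaignId}.csv", 'w');
    fputcsv($fh, ['campaign_id', 'email', 'date', 'ip_address']);

    $pageNumber = 1;
    do {
        $page = cmGet("campaigns/{$campaignId}/opens.json", [
            'date'     => '2019-01-01',
            'page'     => $pageNumber,
            'pagesize' => 1000,
        ]);
        foreach ($page['Results'] as $row) {
            fputcsv($fh, [$campaignId, $row['EmailAddress'], $row['Date'], $row['IPAddress']]);
        }
        $pageNumber++;
    } while ($pageNumber <= $page['NumberOfPages']);

    fclose($fh);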

The Upload

Now things get interesting. I'll always highly recommend using Composer in your projects. You'll need to include the google/cloud-storage and google/apiclient packages to continue.
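Assuming a standard Composer setup, pulling both in is one command each (package names as published on Packagist):

    composer require google/cloud-storage
    composer require google/apiclient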

Jump over to the GCP Console and create your project. Navigate to Storage and create a bucket, which is effectively a named root location. For authentication and ease of use, get yourself a service-account key file, and then we can get the magic started. Initialize the client, and then you can upload files just as easily, as shown below.
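Here's a minimal sketch of both steps, using the google/cloud-storage client; the project ID, bucket and file names are placeholders for your own:

    <?php
    require 'vendor/autoload.php';

    use Google\Cloud\Storage\StorageClient;

    // Authenticate with the service-account key file downloaded from the console.
    $storage = new StorageClient([
        'projectId'   => 'my-reporting-project',
        'keyFilePath' => __DIR__ . '/keyfile.json',
    ]);

    $bucket = $storage->bucket('my-campaign-data');

    // Stream a local CSV into the bucket, under a per-stream "folder" prefix.
    $bucket->upload(
        fopen('opens-12345.csv', 'r'),
        ['name' => 'opens/opens-12345.csv']
    );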

Time to Report!

Now that you've got all of your files hosted, you can connect to the bucket as a data source. Depending on how you have organised your files into folders, you can easily flag a data source to use all files as one single model. Keep in mind that they must all be structured in the same way. I hope this inspires you to get your data in shape and explore new and interesting metrics you never knew you had!
