Autorefreshing and paginating remote Data Studio dashboards

I’d like to share a very nice and handy Chrome extension: Data Studio Auto Refresh. With it, you can have your Google Data Studio dashboards auto-refreshing and auto-paginating on a (remote) screen.

At my workplace we have a screen at the news desk. It previously showed a Data Studio dashboard with only one page, displaying the number of registered users on our site, the NPS for our sites and the most recent comments from a feedback form on the site.

Following Mark Zuckerberg’s announcement last week about prioritising posts from family and friends over posts from pages, we realised we needed to follow up on our Facebook posts more closely as well. So I added a second page to the dashboard showing Facebook reach and the number of engaged users:

[Image: dashboard page showing Facebook reach and engaged users]

Now the original page alternates with this page every couple of minutes, so the news desk can monitor the reach and user engagement of our Facebook posts and hopefully learn what makes Facebook’s algorithms tick. Over time we will of course need to conduct some proper analysis of the posts’ performance, but for now this gives us some insight.

(This screen actually runs on a Raspberry Pi which I manage remotely, as I sit on another floor. It feels like playing with toys but is actually a very good and cheap solution to this simple need.)

Google Analytics and R for a news website

For a news site, understanding the analytics is essential. The basic reporting provided by Google Analytics (GA) gives us good tools for monitoring performance on a daily basis. Even the standard version of GA (which we use) offers a wide variety of reporting options which carry you a long way. However, when you have exhausted all these options and need more, you can either use a tool like Supermetrics or query the GA API directly. For the latter purpose, I’m using R.

Querying GA with R is a very powerful way to access the analytics data. Where the GA interface only allows you to use two dimensions at a time, with R you can query several dimensions and easily join different datasets into one large dataset for further analysis. Provided you know R, of course – otherwise I suggest you use a tool like the above-mentioned Supermetrics.

For querying GA with R I have used the package RGoogleAnalytics. There are other packages out there, but as with many other packages in R, this is the one I first stumbled upon and then continued using… And so far I’m quite happy with it, so why change?!

Setting up R to work with GA is quite straightforward; you can find a nice post on it here.
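In rough outline, it boils down to authenticating against the API – something like this sketch, where the client id and secret are placeholders from your own Google API console project:

# Load the package (install.packages("RGoogleAnalytics") if needed)
library(RGoogleAnalytics)

# Create an OAuth token with the credentials of your API project
# (the values below are placeholders, not real credentials)
token <- Auth(client.id = "xxxxxx.apps.googleusercontent.com",
              client.secret = "xxxxxx")

# Validate (and refresh if necessary) the token before querying
ValidateToken(token)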

Recently I needed to query GA for our main site (hbl.fi, a news site about Finland in Swedish) for metrics such as sessions, users and pageviews, but also for some custom dimensions including author, publish date etc. The aim was to collate this data for last year and then run some analysis on it.

I started out by querying the API for the basic information: date (when the article was read), publish date (a custom dimension), page path, page title and pageviews. After this I queried several different custom dimensions one by one and joined each of them in R with the first dataset. This is necessary as GA only returns rows where none of the requested dimensions are missing – querying all dimensions at once would drop every article with even one incomplete dimension. And as I know that our metadata is sometimes incomplete, this approach allows me to stitch together a dataset that is as complete as possible.

This is my basic query:

# Init() combines all the query parameters into a list that is passed as an argument to QueryBuilder()
query.list <- Init(start.date = "2017-01-01",
                   end.date = "2017-12-31",
                   dimensions = "ga:date,ga:pagePath,ga:pageTitle,ga:dimension13",
                   metrics = "ga:sessions,ga:users,ga:pageviews,ga:avgTimeOnPage",
                   max.results = 99999,
                   sort = "ga:date",
                   table.id = "ga:110501343")

# Create the Query Builder object so that the query parameters are validated
ga.query <- QueryBuilder(query.list)

# Extract the data and store it in a data frame
ga.data <- GetReportData(ga.query, token, split_daywise = T)

Note the following about the Init() function:

  • You can have a maximum of 7 dimensions and 10 metrics.
  • In my experience, max.results can be at most 99,999 (at 100,000 you get an error).
  • table.id is the View ID found in your GA admin panel under View Settings, prefixed with "ga:".
  • If you want to use a GA segment* in your query, add the attribute segments = "xxxx" (see the sketch after the footnote below).

Note the following about the GetReportData() function:

  • Use split_daywise = TRUE to minimise sampling by GA.
  • If your data is sampled, the output reports the percentage of sessions that were used for the query. Hence, if you get no such message, the data is unsampled.

* Finding the segment id isn’t as easy as finding the table id. It isn’t visible from within Google Analytics (or at least I haven’t found it). The easiest way is to use the Query Explorer tool provided by Google. This tool is actually meant to aid you in creating API query URIs, but it comes in handy for finding the segment id. Just authorise the tool to access your GA account and select the proper view. Go to the segment drop-down box and select the segment you want. This will show the segment id, which is in the format gaid::-2. Use this inside the quotes for the segments attribute.
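For example, a segmented query might look like this – just a sketch, reusing the basic query above with the segment id you looked up in the Query Explorer:

# The same basic query, restricted to a GA segment via its gaid
query.list <- Init(start.date = "2017-01-01",
                   end.date = "2017-12-31",
                   dimensions = "ga:date,ga:pagePath",
                   metrics = "ga:sessions,ga:pageviews",
                   segments = "gaid::-2",  # the id from the Query Explorer
                   max.results = 99999,
                   table.id = "ga:110501343")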

The basic query returned 770,000 rows of data. The others returned between 250,000 and 490,000 rows each. After some cleaning and joining them together (using dplyr’s join functions) I ended up with a dataset of 450,000 rows, each containing the number of readers per article per day, information on category, author and publish date, as well as the number of sessions and pageviews for the given day. All ready for the actual analysis!
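As a rough sketch, the stitching went something like this (the second custom dimension index and the column names here are made up for illustration):

library(dplyr)

# Query one extra custom dimension (here: author, hypothetically ga:dimension14)
query.list2 <- Init(start.date = "2017-01-01",
                    end.date = "2017-12-31",
                    dimensions = "ga:date,ga:pagePath,ga:dimension14",
                    metrics = "ga:pageviews",
                    max.results = 99999,
                    table.id = "ga:110501343")
ga.authors <- GetReportData(QueryBuilder(query.list2), token, split_daywise = T)

# Stitch it onto the basic dataset, keeping all rows of the basic query
# even where the author dimension is missing
ga.data <- left_join(ga.data,
                     select(ga.authors, date, pagePath, author = dimension14),
                     by = c("date", "pagePath"))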

Supermetrics – Easy access to lots of data!

One nice and very handy tool for extracting data from various sources is Supermetrics, an add-on for Google Sheets. With it you can access several different data sources, e.g. Google Analytics, Facebook Insights, Google AdWords, Twitter Ads, Instagram and many more. Once installed (and that’s super easy) it opens up as a sidebar in your Sheet, like this:

[Image: the Supermetrics sidebar in Google Sheets]

Then it’s more or less a matter of clicking the right options in the drop-down menus and you have a nice and handy report. Here are some tips for using Google Analytics with Supermetrics:

1) Make sure that the account you are logged in to Google Sheets (and thus Supermetrics) with also has access to the data you want to query.

2) Remember to have cell A1 selected before opening Supermetrics, or your data will appear in some random corner of your spreadsheet.

3) Pay attention when selecting the dates. If you plan to make an auto-refreshing report, you need to choose the dates using the predefined intervals like today, yesterday, last week, last month, year to date etc. If you choose a custom interval, let’s say January 1st to January 7th, the report will always show the results for these dates even though you ask it to refresh weekly.

4) Split by… rows and/or columns. This is the main benefit compared to the standard GA interface: here you can specify several dimensions for your data, whereas in GA you only get two.

5) You don’t have to define any segments or filters. If you do, make sure that the account you’re logged in as also has access to these in Google Analytics (and that they are available for the view you are querying).

6) Under options, make sure to tick both "Add note to query results showing whether Google has used sampling" and "Try to avoid Google’s data sampling". You’ll see that Supermetrics is often capable of supplying you with unsampled data where Google itself would give you sampled data.

Here’s a simple example, querying one of our sites for 2017 sessions, splitting the data by operating system and system version:

[Image: 2017 sessions split by operating system and system version]

Nothing spectacular, but very easy to use and easy to share. Absolutely one of my favourite tools!

Reading multiple csv files into R while skipping the first lines

Today I needed to read multiple CSVs into R and merge them all into one table. I’ve done this before successfully using a nice and short piece of code:

# List all files in the current directory and its subdirectories
files <- list.files(".", full.names = TRUE, recursive = TRUE)
# Read each file into a list of data frames...
listcsv <- lapply(files, read.csv)
# ...and bind them together into one data frame
data <- do.call("rbind", listcsv)

This code works nicely when your CSVs have an identical structure and don’t include any extra rows at the top of the data.
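(And if the directory contains other files besides the CSVs, the pattern argument of list.files keeps them out of the list – a small addition of my own:)

# Only pick up files whose names end in .csv
files <- list.files(".", pattern = "\\.csv$", full.names = TRUE, recursive = TRUE)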

Today, however, I’m stuck with CSVs that have an extra seven lines at the top of each file that I’d like to strip. Normally, skipping lines while reading in a CSV is easy: just specify the skip argument. Like so:

# Skip the first seven lines of the file
file <- read.csv("filename", skip = 7)

This gives me the file just nicely, as a data frame. However, the above code for reading the files one by one into a list and then binding them together into one data frame doesn’t work as such, since I need to get rid of the seven extra lines at the beginning of each file.

I could, of course, strip the seven lines from each file manually, as I currently only have 13 files to read in. But I’m sure the day will come when I have many more files, so why not do this properly right from the start?

After several trial-and-error approaches I resorted to Google, and found a nice Stack Overflow thread on the subject.

I tried a couple of the options with no luck, always failing on some detail, like how to pass the arguments to read.csv. Eventually I tried this one:

data <- do.call(rbind,
                lapply(files, read.csv, as.is = TRUE, skip = 7, header = FALSE))

And it works! In this form, passing the extra arguments (as.is, skip and header*) to read.csv works fine. And with my amount of data (only 55,000+ rows in the final data frame) it is also fast enough:

   user  system elapsed 
   0.38    0.00    0.37 

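One last touch: since header = FALSE leaves the columns with R’s default names (V1, V2, …), you’ll want to name them yourself. Something like this, with the names of course depending on your data:

# Give the columns proper names (these are just example names)
names(data) <- c("date", "page", "pageviews", "sessions")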

So now I’m all ready to start joining this data with some other data and get on with my data analysis!

* The as.is argument makes sure that strings are read in as characters and not factors. The skip argument lets you skip the specified number of rows. The header argument lets you specify whether the first row should be used as a header row or not.

Headache while trying to filter on a map in Tableau :/

This week’s MakeoverMonday delivered a data set on the accessibility of buildings in Singapore. For each building there is an index of its accessibility level and, of course, information on where the building is situated, along with some information on that area (“subzone”). So I figured, why not plot each area on a map, so that by clicking an area you’d get a list of all the buildings in that area and their accessibility indices? Seems straightforward enough.

So I plotted the map, and let Tableau colour the areas according to the average accessibility:

[Image: map of Singapore subzones coloured by average accessibility]

The darker the colour, the better the accessibility. Now I’d like the user to be able to click an area, for instance Alexandra Hill, and get the information about the buildings in this particular area. Like this:

[Image: table of the buildings in Alexandra Hill with their accessibility indices]

But alas, this table is NOT shown when you click on the map; that action only shows one line per area, for some (to me) still unknown reason:

[Image: the truncated table, showing only one line for Alexandra Hill]

The entire list of buildings is shown only when you choose the area from a list on the side of the dashboard, not when you click on the map. You can try it out on Tableau Public yourself.

I’ve tried different ways of filtering and different actions on the filters, but nada. I will, however, fix this! I want to understand why Tableau acts this way. I just need to dig into it some more. So instead of serving you a nice #mmonday blog post, I shared some headache, but hey – this is not that uncommon when working with data after all 😉 Hang in there for the sequel!

Makeover Monday – Prices of curries

This week’s Makeover Monday was about visualising a data set gathered by the Financial Times. The data covers the pricing of curries at Wetherspoon pubs in the UK and Ireland. The original story covers several different aspects of the pricing – my simple makeover is by no means an attempt to do it better. Rather, it is an exercise for myself in using Tableau dashboards.

My makeover is posted on Tableau Public. It shows a map of the pubs, and when you click on a pub, a stacked bar showing the pricing for that pub appears on the right.

[Image: map of Wetherspoon pubs with a stacked price bar for the selected pub]

A simple viz, but a nice exercise in combining maps and charts into an interactive dashboard.

A new acquaintance – Google Data Studio

For the past few months we’ve been building dashboards with Google’s Data Studio, a visualisation tool that can easily be connected to a multitude of data sources. We have uploaded most of our data to BigQuery to be able to query it easily (and with much better speed!) into a multitude of dashboards.

BigQuery in combination with Google’s Data Studio is an easy-to-use combination for implementing the basic dashboards needed in a media house. Here are some examples of dashboards that we’ve built over the past months:

  • A live report on the NPS for our site, including open-ended comments, shown on a screen at the news desk
  • A dashboard showing which articles generate the most registrations
  • The number of sold subscriptions per type, date and area
  • A visualisation of the demographics of the registered users (shown here with demo data):

[Image: demographics dashboard for registered users (demo data)]

Data Studio is very easy to use and to set up to work with different data sources. You don’t even need to do any coding to access the data in BigQuery; but then again, the options for how to plot your data are limited. What you gain on the swings you lose on the roundabouts…

The plot types are quite basic: simple time series, bar charts, pie charts, tables etc. One nice feature though is the geo map, which allows you to visualise your data on a map:

[Image: subscriptions visualised on a geo map]

But we non-US users will still have to wait for the zoom level to offer options other than just the whole country for areas outside the US :/

Formatting your visualisation can, however, by no means be compared to e.g. Tableau or even PowerPoint. Limited options for formatting margins etc. mean that using the space on your dashboard effectively is difficult. And you can forget about formatting any of the nitty-gritty details of your charts.

Nevertheless, Data Studio makes it really easy to visualise your data and is a handy tool with a low learning curve. And it’s free. So why not try it out? And I’d love to hear your comments on it, so please pitch in in the comment section!