AMR outliers or not?

I’m working on a data set with AMR for audio. AMR = Average Minute Rating, in essence how many listeners your content has had on average, each minute. You can think of it as a measure of your audience being spread out evenly over the content, from start to finish.

To be able to calculate your AMR you need to know the total number of minutes people have listened to your content and, of course, the length of the content. So if your audio content is ten minutes long and your analytics tell you that you have a total of 43 minutes of listening, that would give you an AMR of 4.3 (= on average, 4.3 people listened to the content for its entire duration).
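
In R the whole thing is a single division. A minimal sketch with the numbers from the example above (the variable names are just for illustration):

    # AMR = total minutes listened / length of the content in minutes
    total_listening_minutes <- 43
    content_length_minutes  <- 10

    amr <- total_listening_minutes / content_length_minutes
    amr
    #> [1] 4.3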

My assumption is, at least when it comes to well-established audio content, like pods running for tens of episodes, that the AMR is more or less the same for each episode. Or at least within the same ballpark.

However, at times your data might contain odd numbers. Way too small or way too big numbers. So are these outliers, or should you believe that there actually were that few/many listeners at that particular time? Well, there’s no easy answer to that. You need to do some exploratory analysis and have a thorough look at your data.

First, especially if you run into this kind of data often, I would establish some kind of rule-of-thumb as to what is a normal variation in AMR in your case. For some content variation might be small, and thus even smaller deviations from the “normal” should be singled out for further analysis. In other cases the AMR varies a lot, and then you should be more tolerant.
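
One way to turn such a rule of thumb into something concrete is to flag episodes whose AMR strays too far from the series median, for example by a multiple of the median absolute deviation. The sketch below uses made-up numbers and an arbitrary threshold, so treat it as a starting point rather than a rule:

    # Flag episodes whose AMR deviates a lot from the series median.
    # Data and threshold (3 MADs) are made up; tune them to your own content.
    amr <- c(4.1, 4.3, 3.9, 4.2, 4.0, 7.8, 4.1, 1.2, 4.4)

    deviation <- abs(amr - median(amr))
    potential_outliers <- which(deviation > 3 * mad(amr))
    potential_outliers
    #> [1] 6 8   (episodes 6 and 8 get singled out for a closer look)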

Then, after identifying the potential outliers, you need to start playing detective. Can you find any explanation as to why the AMR is exceptionally high or low? What date did you publish the content? Was it a holiday, when your audience had more time than usual to listen to the content, or did some special event occur that day that drew people away from it? Again, there is no one rule to apply, you need to judge for yourself.

Another thing to consider is the content: Was the topic especially interesting/boring? Did you have a celebrity as a guest on your pod/did you not have one (if you usually do)? Was the episode much longer/shorter than normal? Was it published within the same cycle, like day of week/month, as you usually do? Did you have technical difficulties recording that affected the quality? And so on, and so on…

It all boils down to knowing your content, examining it from as many different perspectives as possible, and then making a qualified judgement as to whether or not the AMR is to be considered an outlier. Only then can you decide which values to filter out and which to keep.

When you are done with this, you can finally start the analysis of the data. As always, cleaning the data takes 80% of your time and the actual analysis 20% – or was it 90%-10…?


P.S. Sometimes it helps to visualise – but not always:

[Image: failed line graph of AMRs]
Epic fail: Trying to plot a line graph of my AMRs using ggplot2. Well, it didn’t turn out quite as expected 😀
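
For the record, the plot I was after is only a few lines of ggplot2. A minimal sketch with made-up data and column names:

    # Intended line graph: AMR per episode (illustrative data only)
    library(ggplot2)

    amr_data <- data.frame(
      episode = 1:9,
      amr     = c(4.1, 4.3, 3.9, 4.2, 4.0, 7.8, 4.1, 1.2, 4.4)
    )

    ggplot(amr_data, aes(x = episode, y = amr)) +
      geom_line() +
      geom_point() +
      labs(x = "Episode", y = "AMR")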


Funny vizzes

Every now and then your visualisation tool might be a little too clever. It suggests some nice viz based on your data, but the viz makes absolutely no sense. Like the one below. The credits go to Google Sheets this time. I had a simple dataset, just two columns of simple integers that I wanted to plot in a line chart. Actually, I’ve plotted seven of them already today. But come number eight, Google Sheets decides it is not an appropriate viz anymore. So it drew this for me:

Not much information in that one 😀 Perhaps this was Google’s way of telling me to take a break?

I just thought I’d share it with you since we all need a good laugh every now and then! And I just might share some other funny vizzes as they come along. Please comment and share your similar vizzes, I’m sure you have a bunch of them as well!

Switching your Tableau accounts

As much as I love Tableau, their website(s) can be a bit confusing at times. Surfing around on them, it feels like you’re required to log in multiple times during one session. This is of course due to the site actually being many sites, and you can have multiple identities on them, which might make things a little confusing…

As I’m about to change employer I wanted to make sure that my Tableau identity follows me along. Not that I have that much content on the Tableau site(s), but still. So I set about changing the emails.

The ones I’m interested in “keeping” are the account on Tableau Community and the one on Tableau Public.

First, the Tableau Public account: Log in to Tableau Public (note that you might have a separate password for this one, as they are NOT the same accounts!) and make the changes in the settings section. You’ll need to verify the new email via a confirmation email.

Then, the Tableau Community account: Log in – no, SIGN in, on the page http://www.tableau.com and make the necessary changes in the “Edit account” menu. Make sure to verify the email via the confirmation email sent to the updated email address. You can find the instructions here.

So far so good. Except for the fact that changing your email on the community account also affects the account you have on your customer portal :/ So currently I can access my company account by logging in with my private email… And apparently, if your customer portal account is deleted, so is your community account! This behaviour/dilemma doesn’t really seem to be recognised by Tableau. I’ve been in contact with both their Tech Support and their Customer Service, but neither has yet been able to help me. Let’s hope this can be resolved, as I am sure I am not the only one who wants to keep their community identity when changing employer.

The coolest thing about data

Perhaps the really really coolest thing about data is when it starts talking to you. Well, not literally, but as a figure of speech. When you’ve been working on a set of raw data, spent hours cleaning it, twisting it around and getting to know it. Tried some things, not found anything, tried something else. And then suddenly it’s there. The story the data wants to tell. It’s fascinating and I know that I, at least, can get very excited about unraveling the secrets of the data at hand.

And there really doesn’t need to be that much analysis behind it either; sometimes it’s just plain simple data that you haven’t looked at like that before. Like this past week, when we’ve had both the ice hockey world championships and the Eurovision Song Contest going on. Both of them are events covered by our newspaper, and both have the potential to attract lots of readers. Which they have done. But the thing that has surprised me this week is how differently the two audiences behave. Where the ESC fans find our articles on social media and end up on our site mainly via Facebook, the hockey fans come directly to our site. This is very interesting and definitely needs to be looked into more in depth. It raises a million questions, first and foremost: How have I not seen this before? Is this the normal behaviour of these two groups of readers? Why do they behave like this? And how can we leverage this information?

Most of the time, however, the exciting feeling of a discovery, and of data really talking to you, happens when you have a more complex analysis at hand. When you really start seeing patterns emerge from the data and feel the connection between the data and your daily business activities. I’m currently working on a bigger analysis of our online readers that I’m sure will reveal its inner self given some more time. Already I’ve found some interesting things, like a large group of people never visiting the front page. And by never, I really do mean never, not “a few times” or “seldom”, I truly mean never. But more on that later, after I finish the analysis. (I know, I too hate these teasers – I’m sorry.)

I hope your data is speaking to you too, because that really is the coolest thing! 🤓

Be careful when copying Supermetrics files!

Even though Supermetrics is a very easy-to-use tool, I run into trouble using it every now and then. Admittedly, this should probably be attributed to my way of working rather than to the software itself 😉

Just last week I noticed that a couple of my reports weren’t emailing as scheduled. I couldn’t figure out what was wrong, as everything looked all right except for the emailing. So I filed a ticket, got help in just a few hours (thank you Supermetrics for the fast response!), and got the emailing working again.

The thing was that I had the same QueryID for two different queries on different Google Sheets. As one had refreshed and emailed, the other could not do that any more, as we use Supermetrics Pro and not Super Pro. Or, actually, it did refresh but it didn’t email. And with the same trigger time for both reports, according to Supermetrics’ support, “… it may be random everyday which one actually sends, depending on who gets in the processing queue first.”

Luckily the fix is easy: just delete the QueryID on the sheet called SupermetricsQueries and refresh the query manually. A new QueryID is assigned to your query and you’re good to go.


So, how did I end up with the same QueryID on two reports? Easy. I had copied the entire report using the Make a copy option in the File menu. Which, in hindsight, obviously also copies the QueryID. But I didn’t think about this at the time. Actually, I’m quite surprised this hasn’t happened to me before.

So my advice to you is twofold: Mind your QueryIDs when copying queries and/or files. And if you have many reports to juggle (I have approx. 200 automated reports, some of them with multiple queries), it might be worth keeping track of the QueryIDs.

I decided to add the QueryIDs to my masterlog of all the reports I maintain, and then added a conditional formatting rule to the area where I store them. This way I’ll automatically be alerted about duplicate QueryIDs across my reports.
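
If you export the masterlog, the same duplicate check is easy to do in R as well. A small sketch, where the file name and the query_id column are hypothetical:

    # Flag duplicate QueryIDs in a masterlog exported as CSV.
    # "masterlog.csv" and the column name "query_id" are made up for this example.
    masterlog <- read.csv("masterlog.csv", stringsAsFactors = FALSE)

    dupes <- unique(masterlog$query_id[duplicated(masterlog$query_id)])
    if (length(dupes) > 0) {
      message("Duplicate QueryIDs: ", paste(dupes, collapse = ", "))
    } else {
      message("No duplicate QueryIDs.")
    }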


Configuration error in Data Studio

Suddenly, one day, several of the dashboards I had created in Data Studio crashed. They only showed a grey area with the not-so-encouraging information about a configuration error:

[Screenshot: config_error1]

Normally I encounter this when the Google account I use for creating the dashboard has been logged out for some reason. But this was not the case this time. So I followed the instructions…

Clicking on See Details then told me that the problem had something to do with the connection to the data. Alas, contacting the data source owner would not be of any help, as the data source owner happens to be yours truly, and I was sure that I hadn’t made any changes to the data source.

[Screenshot: config_error2]

At this point I was starting to become a little bit alarmed. What could have happened to the data source?

I decided to open the data source (from the pen-like icon next to the name of the data source):

[Screenshot: config_error3]

This in turn opened a slightly more informative, and certainly more encouraging, dialogue box:

[Screenshot: config_error4]

Interestingly enough, I had not made any changes to the data source. The data source is Google BigQuery, and the owner of the data has been this very same account since the beginning of this setup. I cannot really imagine what caused this hiccup in the connection, but it was indeed solved by “reconnecting” to the source. First clicking reconnect in the above dialogue box, and then once again in the pane that opens:

[Screenshot: config_error5]

After this you click “Finished”:

[Screenshot: config_error6]

So in the end, all the dashboards are now up and running again, although it was somewhat annoying having to go through all of them and “reconnect” to a data source I already own.

[Screenshot: config_error7]


Analysing the wording of the NPS question

NPS (Net Promoter Score) is a popular way to measure customer satisfaction. The NPS score is supposed to correlate with growth and as such of course appeals to management teams.

The idea is simple: you ask the customer how likely he or she is to recommend your product/service to others, on a scale from 0 to 10. Then you calculate the score by subtracting the percentage of detractors (answers 0 to 6) from the percentage of promoters (answers 9 and 10). If the score is positive it is supposed to indicate growth; if it is negative it is supposed to indicate decline.
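
In R the calculation takes only a couple of lines. A sketch with made-up answers, just to show the mechanics:

    # NPS from a vector of 0-10 answers (made-up data)
    answers <- c(10, 9, 8, 7, 10, 6, 9, 3, 10, 8)

    promoters  <- mean(answers >= 9) * 100   # share of 9s and 10s, in percent
    detractors <- mean(answers <= 6) * 100   # share of 0-6 answers, in percent

    nps <- promoters - detractors
    nps
    #> [1] 30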

My employer is a news company publishing newspapers and sites mainly in Swedish (some Finnish too). Therefore we mainly use the key question in Swedish, i.e. Hur sannolikt skulle du rekommendera X till dina vänner? This wording, although an exact match to the original (How likely is it that you would recommend X to a friend?), seems a little bit clumsy in Swedish. We would prefer to use a more direct wording, i.e. Skulle du rekommendera X till dina vänner?, which would translate into Would you recommend X to a friend? However, we were a bit hesitant to change the wording without solid proof that it would not affect the answers.

So we decided to test it. We randomly asked our readers either the original key question or the modified one. The total number of answers was 1521. Then, using R and the wilcox.test() function, I analysed the answers and found no statistically significant difference in the results whichever way we ask the question.
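
The test itself is a one-liner once each answer is paired with the question version the reader saw. A sketch of the setup, with a hypothetical data frame and column names (not our actual data):

    # Comparing the two wordings with a Wilcoxon/Mann-Whitney test.
    # "responses", "wording" and "score" are illustrative names only.
    responses <- data.frame(
      wording = rep(c("original", "short"), each = 5),
      score   = c(9, 7, 10, 8, 6, 9, 8, 10, 7, 7)
    )

    wilcox.test(score ~ wording, data = responses)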

There is some criticism out there about using the NPS, and I catch myself wondering every now and again if people are getting too used to the scale for it to be accurate any more. Also, here in Finland there is a small risk that people mix the scale up with the 4-10 scale commonly used in schools, and therefore base their answers on a years-old impression of what is considered good and what is considered bad. I’d very much like to see some research about it.

Nevertheless, we are nowadays happily using the shorter version of the NPS key question, and have not found any reason not to. Perhaps it could be altered in other languages too?