Indiana Super Bowl and the Temple of Data
A commentator writing in a recent edition of Marketing described data analysts as the new ‘high priests’ of the marketing world, able to conjure up otherwise unfathomable insights and measurements from the vast oceans of data on which the industry now floats.
Far be it from me to dispel such perceptions (!), but the reality, of course, is still somewhat different.
Having large volumes of real-time data at your fingertips does not necessarily generate major insights into the likely needs or intentions of decision-makers. This is especially true of social media data, where much of the information shared may have no value at all.
For example, the June 2013 edition of the Journal of Digital and Direct Marketing Practice contained an article on the use of social media tracking data to help manage the Indiana Super Bowl. Its purpose was to publicise the potential benefits of using such data in marketing analytics.
What was striking about the article, however, was not the specific results but the lack of structure used to derive insights from the data. In many respects it is a case study in how the opportunities in data can be wasted when there is no pre-planning.
At the event, data was collected from a small range of social media feeds (Facebook, Twitter and some relevant blogs). The result was a collection of 71,063 posts. Interesting enough, but hardly a deluge (and not what one would call “Big Data” – it looks more like “Tiny Data” to me – but I digress).
The data was analysed in real time each day, using a mix of automated (‘word cloud’) and manual processes, from the weeks leading up to the event through to the weeks that followed. Posts were classified as positive, negative or neutral, and within each category the topics fuelling the positive and negative comments were identified.
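The article gives no detail of the tooling behind this, but the basic mechanics are easy to picture. The sketch below is a deliberately minimal, hypothetical Python illustration of that kind of keyword-based sentiment-and-topic tally; the word lists, topic map and function names are all invented for illustration and are not taken from the study.

```python
import re
from collections import Counter

# Illustrative keyword lists: the article does not say how the manual coders
# or the word-cloud tools actually scored each post, so these are invented.
POSITIVE_WORDS = {"great", "friendly", "fun", "love", "amazing"}
NEGATIVE_WORDS = {"crowded", "expensive", "rude", "awful", "queue"}
TOPIC_KEYWORDS = {
    "hospitality": {"friendly", "welcome", "bars", "restaurants"},
    "accommodation": {"hotel", "room", "price", "booking"},
    "the game": {"kickoff", "team", "score", "halftime"},
}

def tokens(text):
    """Lower-case a post and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def classify_post(text):
    """Tag a post as positive, negative or neutral by simple keyword counts."""
    words = tokens(text)
    pos, neg = len(words & POSITIVE_WORDS), len(words & NEGATIVE_WORDS)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def topics_in_post(text):
    """List the themes a post touches on, based on the keyword map above."""
    words = tokens(text)
    return [topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws]

def summarise(posts):
    """Count which topics are fuelling the positive and negative comments."""
    summary = {"positive": Counter(), "negative": Counter(), "neutral": Counter()}
    for post in posts:
        summary[classify_post(post)].update(topics_in_post(post))
    return summary

if __name__ == "__main__":
    sample = [
        "The locals are so friendly, great bars downtown",
        "Hotel price was awful and the room was tiny",
        "Kickoff delayed, streets around the stadium are crowded",
    ]
    for sentiment, topic_counts in summarise(sample).items():
        print(sentiment, dict(topic_counts))
```

Even a toy version like this makes the point: the classification itself is trivial; the value, if any, comes from what is done with the tallies afterwards.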
The authors of the study say that the organisers used these “insights” to make real-time assessments of the efficacy of the event’s marketing campaign, to mitigate threats to safety and to react to PR opportunities and weaknesses. The authors were unable, however, to give any specific examples of actions taken in response to this information. What they do describe is how the flow of information centred on three key themes:
– Hospitality
– Accommodation
– The game itself
and within each of these on a range of sub-themes such as:
– The friendliness of the cityfolk
– The city’s entertainment facilities
– Views on hotel prices and standards of accommodation
– Parking facilities
– Over-crowding
and so on.
Deriving these common themes was, of course, easier once all 71,000 posts had been compiled. The question is: how much data do you need before you find something ‘out of the ordinary’, a true insight that means you can change something? No examples are quoted.
Our view of the results from this exercise, which is held up as a ‘good example’, is that it could well be a memo issued by Monty Python’s “Ministry of the Bleedin’ Obvious”, and that few, if any, useful pieces of information were obtained.
The only useful information fed back to those policing the event was that the city and the area around the stadium were becoming over-crowded on match day. One assumes the police may have guessed that might happen. Moreover, the police commanders on the ground could presumably see with their own eyes that over-crowding was occurring – unless, of course, they did not believe it until they saw it reported on Twitter (it used to be CNN, but that was in the old days).
All in all, we could not see any evidence that the analysis of the social media data added anything to either the execution of the marketing campaign or the way the event was organised and run. That is not to say it could never have any benefits – just that this particular piece does not show any.
What it shows to us instead is a missed opportunity.
The success of the marketing campaign could have been monitored far more effectively if a set of target topics had been established at the outset – such as the five dimensions we use to evaluate marketing effectiveness. Analysing the social media feeds along these five dimensions would have produced much more insightful outcomes – and ones that could then be benchmarked.
Similar sets of ‘target topics’ could have been set up and activities measured against them. This is not rocket science – all you need are some people who know about organising the event.
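To make that concrete, here is a small hypothetical Python sketch of what ‘measuring against target topics’ might look like in practice. The topic names, keyword lists and benchmark figures are all invented for illustration; they are not the five dimensions referred to above.

```python
from collections import Counter

# Hypothetical target topics agreed before the event (stand-ins only).
TARGET_TOPICS = {
    "awareness": {"superbowl", "indianapolis", "downtown"},
    "hospitality": {"friendly", "welcome", "bars"},
    "accommodation": {"hotel", "room", "price"},
    "transport": {"parking", "shuttle", "traffic"},
    "crowding": {"crowded", "queue", "packed"},
}

# Illustrative benchmarks: the minimum share of posts expected to mention
# each topic if the campaign and the event organisation were on track.
BENCHMARK_SHARE = {
    "awareness": 0.30,
    "hospitality": 0.15,
    "accommodation": 0.10,
    "transport": 0.10,
    "crowding": 0.05,
}

def measure_against_targets(posts):
    """Return the share of posts touching each target topic."""
    counts = Counter()
    for post in posts:
        words = set(post.lower().split())
        for topic, keywords in TARGET_TOPICS.items():
            if words & keywords:
                counts[topic] += 1
    total = max(len(posts), 1)
    return {topic: counts[topic] / total for topic in TARGET_TOPICS}

def compare_to_benchmark(shares):
    """Flag topics that fall short of their pre-agreed benchmark."""
    return {
        topic: ("on target" if share >= BENCHMARK_SHARE[topic] else "below target")
        for topic, share in shares.items()
    }

if __name__ == "__main__":
    sample = [
        "downtown indianapolis is buzzing for the superbowl",
        "hotel price doubled overnight",
        "parking near the stadium is a nightmare",
    ]
    shares = measure_against_targets(sample)
    for topic, verdict in compare_to_benchmark(shares).items():
        print(f"{topic}: {shares[topic]:.0%} ({verdict})")
```

The point is not the code but the discipline it implies: decide in advance what you expect the data to tell you, then measure the feed against those expectations rather than trawling it for whatever turns up.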
So in this case we feel that the Holy Grail fell into the fissure and was just beyond reach. If only Indy had been there (or maybe the Monty Python team).