I have always maintained that a data-driven culture has an important place at the table.
Throughout my career, I’ve been incredibly fortunate to have so many quants and quals working across my organizations in marketing and engineering. I’ve even had analysts and data scientists reporting directly to me.
All this to say I LOVE data. I have always loved data. I think I will continue to love data until the day I die. Data has an almost magical quality: it lets intelligent decisions be made based on fact.
That being said, I’ve never been this deep into data myself. I’ve always been the leader of an organization who doesn’t just consume and transform data, but also crafts intelligence and insights from that data. I’ve – fortunately enough – been able to surround myself with a whole host of professionals and guides who can help me understand, analyze, and continue to learn about data and how it affects our industry.
My chief navigator in this (sometimes confusing) space is Ken Burbary, who (quite literally) wrote two books on the topic. From day one, his mantra to me has always been, “Data in aggregate is noise.”
I’ll be honest, I never really understood that.
Like I said, I’ve always been a strong proponent of ignoring vanity metrics, and a full-throated exposer of truths for companies that misinterpret or manipulate their numbers to take advantage of winery budgets.
I’m the CEO of a data analytics and consumer insights company now, and you know what? It’s painfully obvious to me at this point that data in aggregate really is just noise. More than that, large numbers shrink quickly when you segment them not just by one dimension but by two or more, down to the size needed for relevant decisions or action. Through this series of lenses, the full truth about data is revealed.
What else does this mean?
We can no longer accept companies spouting big stats, especially in the aggregate, and expecting those numbers to count for anything more than bluster. We need to start asking the right questions to find relevance within any data set.
For example, consider what happened in 2015, when everyone was raving about the growth and engagement that Snapchat could bring. You know Snapchat – the cute little ghost logo, pictures disappearing seconds after viewing, hugely popular with teens through the late-20s demographic. In June, the monthly active user count was approaching 100 million people – a sparkling example of usage and adoption. Almost every single winery I worked with was asking me how we should start implementing Snapchat for their business.
What was my answer?
Sure, there was a huge number of users, but a majority of those users were under 21. Do you see how, for a winery, that made the platform basically useless? Beyond that, it was risky in general to market alcohol on a youth-heavy platform (remember Joe Camel, anyone?).
Here’s another relevant example.
We’ve been combing the world for interesting, long-term data partnerships. We met with a seemingly large POS company that claimed to have over 15,000 installations. It sounded like a dream, right? A tasty data source that could do wonders for us. But as Ken, my resident expert, dug a little deeper into the actual numbers presented, we saw that over 6,000 of the installs were international and the other 9,000 were spread across the US. Further, within that 9,000, only 25 percent were establishments that even sold wine. And then, of those 2,250, only about 50 percent had sold wine priced over $15.
When all was said and done, there were only 1,125 installations remaining, and even those spanned every type of establishment, from delis to big-city bodegas to boutique wine shops. In Illinois alone, the footprint was only five stores in each category across the state. How could we possibly analyze trends in Illinois based on this? There isn’t a sufficient volume of relevant data in a scenario like this to be meaningful to brands, at least not enough to consistently and reliably base decisions on.
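The winnowing above is really just a series of segmentation filters applied one after another. Here is a minimal sketch of that arithmetic; the counts and percentages are the ones quoted above, and the variable names are my own illustration, not anything from the POS company’s actual data:

```python
# Illustrative segmentation funnel: how a 15,000-install headline
# number shrinks once you filter for relevance step by step.

total_installs = 15_000
international = 6_000

us_installs = total_installs - international   # 9,000 installs in the US
sells_wine = int(us_installs * 0.25)           # 25% even sell wine -> 2,250
sells_premium = int(sells_wine * 0.50)         # 50% sell wine over $15 -> 1,125

print(f"Headline number: {total_installs:,} installs")
print(f"Relevant after segmentation: {sells_premium:,} installs")
```

The point isn’t the code itself, it’s that each filter is cheap to ask about and each one cuts the “big” number dramatically; by the third question the data set is a tiny fraction of the headline figure.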
See how that data, which in the grand scheme was initially thought to mean one thing, says something else entirely once you take it out of the aggregate and de-noise it? We passed on the partnership (for now) but used it as a lesson in analyzing data.
Data is finally (finally!) becoming a relevant tool in the wine industry.
For the first time in our industry, we are seeing tools that collect enough data to be “big,” and the technologies to transform that data are advancing fast enough to let us turn it into intelligence.
As this trend picks up, there will be a ton of companies claiming to have large pools of data, but they’ll lean on aggregate numbers and vanity metrics to represent their dominance instead of truly picking apart and analyzing the real data.
However, as we embark on this brave new world, we should all be asking the deeper data questions of context and relevance.
Only then will we truly cut through the noise and find the truth we need: relevant data.