This week I’ll be speaking at O’Reilly’s Strataconf in Santa Clara, both as part of the Data Driven Business Day on Tuesday, February 11 and as a keynote speaker on Wednesday, February 12. I’m excited to be part of the event, and thrilled to be working with Alistair Croll and his team as they put on what looks to be an event full of both big and useful ideas, and of some heavy-hitters when it comes to data and analytics.
So what’s a person like me, decidedly not a data scientist, doing there? What can I possibly have to tell a convening of many of the most data-oriented and data-savvy minds in technology and business?
Two things, really: numbers won’t count themselves, and people are data. Two lovely titles for talks, but what do they mean?
Those numbers won’t count themselves
I’m particularly excited about the Data Driven Business Day because I want to see how people define data-driven businesses. We are increasingly helping clients to ‘design for data’ - to anticipate and plan for better KPIs. We think that whatever you choose to measure should help you make decisions.
I doubt very much that anyone really disagrees with that notion. But how we actually measure frequently tells another story.
Before starting The Difference Engine, we frequently encountered marketing clients who dealt in what the Lean Startup movement (specifically Eric Ries and Dave McClure) would call “vanity metrics” - metrics that make you look good, but don’t help you make decisions. These metrics might include a million likes on Facebook at one end of the spectrum, or “top 2 box on brand preference scores with competitor x” at the other.
We also frequently encountered brand managers who had very little data about their end users, their actual customers. Large consumer electronics brands often lack CRM systems that reach below the sales channel and so rely upon segmentation studies to guess about customers and prospects; networks and television shows mainly depend on Nielsen for viewer data but often don’t know who is really watching and why; B2B brands sometimes hold onto outdated beliefs about the motivations of small business owners or IT decision makers.
We used to believe this was simply because marketing is cut off from sales and operations, or that managers are tradition-bound to use tracking studies, segmentation, or attitudes & usage studies. There’s a lot that our clients do “because tradition”.
But as we work more with product managers, CEOs and even boards of directors, whether in established businesses or in startups, we find that the real tradition behind this dependency on third-party data sources is poor data design.
They don’t have direct data because they didn’t set themselves up to measure it. The consumer electronics company that does not sell direct to consumers can be (mostly) forgiven for not having a robust CRM system for analyzing end users, because it doesn’t directly transact with them (except over warranties or after-sales service).
But the digital marketing or product team is rarely similarly forgiven. Just the word “digital” seems to imply measurement - it’s numbers, after all. Everything can be counted now, so too often, digital marketers and product developers assume everything is being counted. But what’s worse is that they frequently aren’t sure what they should measure, how they should test, which customers they should care about, or what “conversion” even means for their business.
When you’re not sure what to count, you count likes. You count click-through rates. You count page views or time-on-site. You look at the numbers that are available. What gets counted is what counts, instead of the other way around.
Despite all the time marketers have spent measuring ad recall, brand like-ability, brand preference, intent to purchase, and despite all the effort spent haggling over reach, frequency and impressions, these numbers may not stand for anything meaningful to customers or their behavior. And what’s irrelevant to customer behavior is - most likely - irrelevant to business results.
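To make the contrast between available numbers and decision-relevant numbers concrete, here is a minimal sketch with invented data and names: the same visit log can yield a vanity metric (total page views) or an actionable one (conversion rate per acquisition channel), and only the latter tells you where to spend next.

```python
# Illustrative sketch with invented numbers: the same traffic data can
# produce a vanity metric or an actionable metric.

visits = [
    {"channel": "social", "viewed_pages": 12, "purchased": False},
    {"channel": "social", "viewed_pages": 9,  "purchased": False},
    {"channel": "email",  "viewed_pages": 3,  "purchased": True},
    {"channel": "email",  "viewed_pages": 2,  "purchased": False},
    {"channel": "search", "viewed_pages": 4,  "purchased": True},
    {"channel": "search", "viewed_pages": 5,  "purchased": True},
]

# Vanity metric: one big number with no decision attached.
total_page_views = sum(v["viewed_pages"] for v in visits)

# Actionable metric: conversion rate per acquisition channel.
def conversion_by_channel(visits):
    totals, wins = {}, {}
    for v in visits:
        totals[v["channel"]] = totals.get(v["channel"], 0) + 1
        if v["purchased"]:
            wins[v["channel"]] = wins.get(v["channel"], 0) + 1
    return {ch: wins.get(ch, 0) / n for ch, n in totals.items()}

print(total_page_views)               # plenty of page views...
print(conversion_by_channel(visits))  # ...but one channel converts no one
```

In this toy data, the "social" channel generates the most page views and zero purchases - exactly the kind of gap a vanity metric hides.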
People are data, too.
Businesses take comfort in numbers. Numbers seem dispassionate; numbers seem steady. The fickleness and fecklessness of people are scrubbed out of data. You can trust the data. The data will tell you what to do.
But that’s not really true. For most people, binders full of tabulated data will never translate, Neo-peering-into-the-Matrix-like, into action or intent. A survey will not “tell you what to do”. We need people to analyze the data, to interpret it, and to make recommendations based upon it.
Even more importantly, we need people who know how to measure the right things, who know how to design for better data. And to do that, we need to understand the underlying system of a market, a product or a customer segment. We need to not only know who people are by correlating their media habits with their brand preferences (probably a useful method for buying media); we need to know how people actually purchase products in the category, why they buy them, how they use them, and what else they use that we might see as ‘competition’. We need to know what people tend to do right before they cancel a service; we need to know what they tend to do right after they buy or sign up or register or like or mention.
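The "right before they cancel" question can be asked of ordinary event logs. The sketch below is purely illustrative - the customers, event names, and data are invented - but it shows the shape of the analysis: tally which event most often immediately precedes a cancellation.

```python
# Hypothetical sketch: given a per-customer event log, count which event
# most often comes right before a 'cancel'. All names and data are invented.

from collections import Counter

event_log = {
    "cust_1": ["signup", "use_app", "support_ticket", "cancel"],
    "cust_2": ["signup", "use_app", "use_app"],
    "cust_3": ["signup", "support_ticket", "cancel"],
    "cust_4": ["signup", "use_app", "price_page_visit", "cancel"],
}

def events_before_cancel(log):
    """Count the event immediately preceding each 'cancel'."""
    precursors = Counter()
    for events in log.values():
        for prev, curr in zip(events, events[1:]):
            if curr == "cancel":
                precursors[prev] += 1
    return precursors

print(events_before_cancel(event_log).most_common())
```

Here, support tickets precede most cancellations - a hypothesis worth testing with qualitative work, not a conclusion in itself.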
In short, quant research should never be undertaken in isolation. It should depend on a sound qualitative understanding of the world. Much like the natural philosophers-cum-scientists of the 19th century and before, we need great hypotheses to do great experiments.
When it comes to product strategy, we like to think of this as empathy building. We’re used to businesses taking an “if we build it they will come” approach to product and marketing development. When customers don’t, in fact, flock to the brand or the product, businesses try to optimize what they’re already doing. This explains, at least in part, the rise of programmatic ad buying, analytics software, the popularity of dashboards and user surveys, and endless A/B testing. But where we think these tools fall short is in developing deep understanding and insight, in fostering empathy with customers, and in turn, helping managers develop better ‘instincts’ and ‘intuition’.
We’re all - marketers and product developers alike - trying to find the signal in the noise. Too often, we’re mistaking one for the other. But we don’t have to. We can get smarter about the underlying systems, measure better indicators, understand correlation and causation better. We can have more empathy for our customers. We can show better judgment, commit to our ideas, and make better decisions.
So how do we do it?
We believe in talking to people, but also in understanding the gaps between what people say they do and what they do. We believe in understanding the real life experience of a product or service or brand. So we watch, we listen, we investigate, we discover.
But these conversations are not enough. We help clients translate what we learn from real people into hypotheses we can test, and we collaborate with clients to tie these hypotheses to measurable actions. We set up clients with qualitative research that is part of the design process, not separate from it. We get the whole team involved in getting continuous customer feedback. We design research that helps us really understand and identify a problem, and we work with product and brand design teams to solve those problems. We help our clients develop a set of KPIs that are meant to drive action.
Based on understanding the business, the competitive environment, and the customer experience, we helped one of our clients make two key decisions: pay staff a little better to attract talent; give raises or bonuses to staff that get positive customer and management feedback. That’s it. It sounds simple, because it should be. And in our work with this client, we’d built enough trust in the process and the data that they made those decisions during our presentation to their board of directors.
We’re a product-strategy company. The strategy has to come from somewhere. We think it comes from empathy with customers and prospects, deep understanding of the underlying system that a business or brand lives in, great data design, and an eye for opportunity.
We say, let's be data pragmatists. Let's use data - in all its forms - to learn what to do next.
2014 is off to an exciting start. As last year wound down to a close, we made some very important decisions. Chief among them was welcoming Laila Forster to The Difference Engine. Her experience in operations, strategic client services, project management and integrated marketing is going to put some serious muscle behind our goal of "operationalizing strategy".
Part of the decision to bring Laila on was also a decision to clarify the focus and vision of The Difference Engine. While research is the vehicle for much of the work we do, this is not a market research company.
An insight-driven product strategy company.
Over time, we realized that, both by design and as an inevitable result of the relationships we have with our clients, we have three distinct work-streams.
Regardless of the work-stream, the types of questions our clients ask are often quite similar: Who are (or should be) our customers? How do we best reach and serve this new market? How can we grow?
While the questions sound deceptively simple, the answers are often quite complex. We believe the answers to these questions often lie in the ways that businesses create new value for customers.
In fact, we believe "creating new value" is a pretty good definition of innovation.
How are we different?
We love it when the ideas we help develop become real products, services and experiences.
We're not seers or soothsayers or trendspotters or futurists, and we're also not McKinsey-style consultants.
So we don't leave clients with a nice looking presentation full of half-baked concepts and highly stylized observations.
And we won't leave them with reams of graphs and charts to decipher.
We create tools for them to use, roadmaps to follow, KPIs to adopt, research-based resources to tap, and prototypes to build.
We design these assets to work within existing workflows and cultures.
We train clients on how to use them, how to adapt them, and how to bring others on board.
We set our clients up to succeed.
Create your own tools.
As a product-strategy company, we think it's important to understand exactly what it's like for our clients to develop a new product or service. And we like to muck about on the internet.
So, we're in the process of designing software that will serve as an important tool for our research practice. Once we've road-tested it on a few projects, we'll make it more widely available - first, to our clients, and then to others.
We've got a working prototype today, but we need to spend some time thinking about the user experience, the workflow, and essential design decisions. When we've got some screenshots to show you, we'll start posting them here.
In the meantime, if you'd like to read a bit more, feel free to poke around the site, or have a look at our latest credentials presentation.
I started The Difference Engine for one reason. After 10 years in brand consultancies that based their recommendations on qualitative and quantitative research, I came to believe that the typical mode of market research is broken. I'd like to help fix it.
When I worked in agencies, this image (insight mining, leading to 'nuggets' that get synthesized into 'insights' that somehow magically become great ideas) seemed to sum up the way we'd talk to clients about figuring out what to do with their marketing budgets.
But much of the time, budgets are set based on what was spent last year; quarterly spending is organized around known product announcements and/or seasonality in the category; and briefings to agencies happen sometime after a lot of debate and politicking, research and consulting. Oftentimes, project briefs are far removed from business objectives. KPIs tend to measure the marketing results, not the business results. Marketing teams are trying to make their ads have better recall, their brands gain greater stated and derived preference, their brand metrics lift.
In other words, the marketing team and their agencies are solving marketing problems, not business problems.
In the past few years, CMO tenure has doubled to nearly 4 years, driven by more accountability to, and involvement by, CEOs and COOs. This is a positive trend. It suggests that CEOs/COOs and CMOs are beginning to look beyond individual campaigns, taking a longer view. I certainly hope so. But it also suggests that perhaps we're getting back to basics about what marketing is.
The nucleus of marketing, in my opinion, is the interaction between the product/service and the end user. Understanding these interactions is critical to both disruptive innovation (spotting market need gaps and opportunities, anticipating threats from challenger brands and technologies, understanding small shifts in user behavior that could kill your category in a year or two), and to sustaining innovation (adapting to user needs, improving performance/service/experience, scaling, reducing cost/price).
Marketing could be a team that provides a critical conduit to innovation within an organization.
But market research today seems largely limited to two branches of business: the big strategy consultancies who may be able to assist with sustaining innovation; and the marketing research companies that provide research to develop and test messaging and imagery.
Both are useful and necessary, but they are inward-oriented. They look at the business as it is today, or the marketing as it is today, and ask how to optimize efficiency, effectiveness and profitability.
But new opportunities are hard to spot when you're looking only at yourself, your most profitable customers, and your closest competitors. You have to branch out.
Watch & Learn
Most of my clients don't sell directly to their end users. They rely on others to do that. Right away we have two problems.
The first is that it changes my clients' definition of a 'customer' - in fact, this is probably why we use the term 'consumer' to describe a probable end user. Their first-order customer is a retailer or wholesaler. Their customer's customer is a 'consumer'. Fundamentally, most businesses are at arm's length from their consumers. This limits their ability to see what consumers are really doing with their products, how they're really using their services... You miss out on all those moments when a business's beliefs about the role its products play in the world clash with or combine with reality.
The second is what happens when you're at arm's length from your consumers - you rely on your most profitable customers to tell you what they need. And they need more of whatever is selling well today, whatever fits their current value chain.
How can brands get closer to their consumers/end users? They can certainly do large surveys (and they do), or use large private panels for quant-sized qualitative exercises (and they do), or do focus groups with their core consumer segments (and they do). But many times, these surveys are merely proxies for asking their biggest customers what they want. This tends to reinforce doing more of whatever you're doing today.
[By the way, this is not bad! You need your biggest, most profitable, most frequent consumers and customers. They pay for your products and services and deserve your respect and attention.]
But if you want to learn what to do next, who your next market will be and what they'll need, where the strategic threats are coming from, how technologies can transform your category... You might need to get off the beaten path, get personal, and watch as well as listen.
Behavior is a terrific source of inspiration. How people are hacking products to their own purposes, why people move on from a preferred brand to something else, when people would rather have a small, half-baked solution than a full-featured one... These are the places where opportunities so often lie. And to meet these people, we have to talk to your smaller consumer segments, or the segments that are large but neglected - even people who might otherwise be called 'rejectors'.
Less Talk, More Frequently
There can be immense value in convening a group of consumers in a room for a few hours to talk about ideas and get feedback on a brand. This is the purpose of a focus group. Brands should do them.
There is a lot of utility in getting a sense check of a creative campaign in 45-minute in-depth interviews with consumers. You can avoid (those over-hyped bogeymen) Group Think and The Dominant Respondent. You can gain intimacy, trust, and detail. By all means, do them.
But when you're looking for new product or service or segment opportunities, you should consider a lighter-weight approach to consumer research. Shorter, more focused conversations and contextual observation of behavior, repeated over time, can provide both the spark and the fuel for the innovation fire.
To do this well, we obviously can't also incur the costs of traditional research facilities. We can't afford two weeks' recruitment time for each conversation. And we can't be too precious about the mode of the interaction. Not every conversation needs to be the same. They can be phone calls, or Skype video calls, or visits at home, or meetings at the office. We can even invite consumers around to ours. Stimulus can be paper prototypes or working betas or finished products. Discussion guides should be much looser, allowing people to do what they would naturally rather than being led by the moderator.
This is a recipe for continuous feedback throughout the life of an innovation effort. Certainly I believe that innovation and R&D should be ongoing efforts; but they should also be focused on specific hypotheses to be tested.
Rather than separating UX testing from any other kind of research, I place a premium on understanding the user's experience from all angles. Certainly, which button they'll press, whether they understand the product, and how they navigate a retail environment or a website or a mobile app will matter a lot. But there's more to this than click maps and eye-tracking (which, like all other quantitative research, can tell you how much, how many, when, and how often - but not why).
I want my clients to feel more than familiarity with their consumers - I want them to feel empathy for them, and to gain this through real intimacy. Taking away the delays, the one-way mirror, and the rigid 'protocols' allows us to keep learning and improving. The goal is to get to know your consumers so well you can anticipate their needs, rather than chasing after them.
Market v. Marketing
I'll keep writing more here about what The Difference Engine does and how it does it - and why - but for now, I'll close where I began.
I like to say we do market research, not marketing research. We don't test ads.
So what do we do, then? We help you identify market opportunities - gaps unfilled by your competitors, opportunities to do things better, underserved consumer segments, changing consumer behavior, burgeoning competitive threats, and more.
I want to put people back at the center of the equation, right next to the product. It's this dynamic that should inform all the rest - design, price, distribution, messaging, offers, and so on. I want my clients to know what people are doing and what this means for their business and product road maps - not only what they're saying and what that'll mean for their ad campaigns.
I don't just diagnose and describe these things; I work with clients to decide what to do next.
And along the way, The Difference Engine is going to introduce methods and technologies that I think will fix some of what is broken about market research. Watch this space.