In The Know | Jake Dailey, Nielsen Lead Data Scientist


Today’s most successful companies are data-driven, and winning strategies are defined by a company’s ability to extract actionable insights from its data.

To learn more about what separates leaders in data analytics from the rest, Karbo Com joined Jake Dailey, Lead Data Scientist at Nielsen, at the company’s San Francisco office to discuss the challenges and opportunities.


Let’s hear a little bit about the work you do at Nielsen.

I’m a data scientist and I work primarily on our digital products. When people think of Nielsen, they tend to think about our TV business: the TV ratings. These ratings give advertisers, content producers, and TV networks a meeting ground where they can agree on how much to value an ad spot. 

It’s the same on the digital side: web publishers have advertising space online, and there are plenty of people willing to buy online advertising. Nielsen provides these advertisers with verification that if they run a campaign on a certain website, they will reach the people they were told they would reach, achieve the promised number of impressions, and that their money will be well spent.

Advertisers are asking questions like, “How do I know that the ad I’ve paid for is being seen by real people and not robots?” Or, “How do I know that our ad actually had an impact on my business?” The data products some publishers are designing now try to address these questions.

It’s my job as a data scientist to design the measurement and to think about this from a statistical perspective. For example, what’s the best, most unbiased, objective way we can measure how people are viewing ads and what the population is doing when it comes to seeing advertising?


How have you seen successful companies approach data?

The most successful companies are really smart about how they quantify their performance. They understand exactly what they need to measure to gauge impact. But in order to inform that, you need to start by asking, “What are we going to do with this information?” Once you start asking that question, you can get very specific about what kind of data would allow your company to make decisions and think critically about strategy.

These successful companies are great at harnessing data to observe how they’re performing. Getting there can sometimes require a big technology change, but once companies have that in place, they can move much faster because they can iterate rapidly and adjust strategy on the fly based on the key metrics they’ve defined.


How can marketing teams use data to make better decisions?

Many marketing teams are trying to understand the relationship between consumer characteristics—maybe behavioral, like the websites they go to, or maybe demographic, like how old they are—and their actions. One way marketing teams can use data to make smarter business decisions is by employing machine learning and statistics to predict consumer behavior and analyze outcomes in their data.
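
To make that concrete, here is a minimal sketch of the kind of model a marketing team might fit: a classifier that predicts a consumer action from behavioral and demographic features. The dataset, file name, and column names are hypothetical placeholders, not anything specific to Nielsen:

```python
# A minimal sketch (not Nielsen's actual tooling): predict a consumer
# action from behavioral and demographic features. The file name,
# column names, and features below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("consumers.csv")  # hypothetical dataset
features = ["age", "visits_sports_sites", "visits_news_sites"]
X, y = df[features], df["purchased"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# How well do consumer characteristics predict the action?
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```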


What are some key questions marketers should be asking themselves about the data they’re collecting?

One key question is, “What are the gaps in what we can and can’t observe?” That’s so important in digital advertising, but I think it’s still a big blind spot in the industry. If you haven’t appreciated where you might be missing information, then you’re likely to be led astray.

An example of this could be that you have a dataset that only covers a small subset of the people you’re actually interested in targeting, or it might be that you systematically have a hard time getting people to answer a survey.


How do you address those gaps at Nielsen?

At Nielsen, we have a representative panel that we’ve recruited from around the country so we can observe what behaviors look like within the population. 

The challenge with online advertising is that many advertisers can only target or measure people that already go to specific sites, search specific things, or buy their products. If I’m trying to understand if my ad worked and I’m only able to target the people that were already going to buy my product, how do I know that my ad made a meaningful impact on that decision? After all, if I’m only able to target those people, there’s a good chance that they would have ended up there even if I didn’t advertise to them.

So it turns out to be very hard for advertisers and publishers to come up with good measures of success in an inexpensive, scalable, and unbiased way. At Nielsen, bringing objective sensibility to the picture is an important part of the role we play with advertisers and publishers.
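
One common way to approximate that counterfactual is a randomized holdout: withhold the ad from a slice of the targetable audience and compare outcomes. The sketch below illustrates the idea with invented numbers; it is not a description of Nielsen’s methodology:

```python
# A minimal sketch of the holdout idea: randomly withhold the ad from a
# slice of the targetable audience, then compare conversion rates. All
# figures below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

exposed_conversions, exposed_n = 540, 10_000   # saw the ad
control_conversions, control_n = 480, 10_000   # eligible, but held out

exposed_rate = exposed_conversions / exposed_n
control_rate = control_conversions / control_n
lift = (exposed_rate - control_rate) / control_rate
print(f"lift over holdout: {lift:.1%}")

# A two-proportion z-test asks whether the difference could just be noise.
stat, p_value = proportions_ztest(
    [exposed_conversions, control_conversions], [exposed_n, control_n]
)
print(f"p-value: {p_value:.3f}")
```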


What are some common pitfalls to avoid?

Imagine a Venn diagram with three circles: statistics, computer science, and business. Smart data science happens at the intersection of all three. The challenge comes when you’re in an organization that doesn’t have people with skill sets spanning those areas. Not having the right experts on board is a pitfall to avoid.

On one hand, we now have tools that make it easy to take your data and see what’s going on in it, which has made data science and analytics much more accessible to people who may be approaching the field for the first time. The real challenge here is often when you try to draw a conclusion based on something that you’ve observed. There are a lot of factors that you have to take into account—and a lot of uncertainty you have to be honest about.

Your dashboard may be telling you that your campaign grew by a certain amount from one day to the next among a targeted group. And you like to hear that. You make some decisions around ad spend based on that observed growth. But if you’re not being totally honest with yourself about the uncertainties, then that data may be misleading. For example, did the average number of people seeing your campaign each day actually grow because of your strategy, or is this just a normal uptick that doesn’t indicate a long-term, material improvement? This is the kind of tough question that data scientists and statisticians aim to answer.
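
As an illustration of that last question, here is a minimal sketch that treats daily reach as data rather than a single dashboard delta, using invented figures:

```python
# A minimal sketch of that question: instead of reading one day-over-day
# jump off a dashboard, compare daily reach before and after the strategy
# change against normal day-to-day variation. Figures are invented.
import numpy as np
from scipy import stats

before = np.array([10200, 9800, 10500, 9900, 10100, 10300, 9700])
after = np.array([10900, 11200, 10800, 11100, 11000, 10700, 11300])

# Welch's t-test: did mean daily reach actually shift, or is the
# difference within the noise visible in the "before" period?
t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)
print(f"mean before: {before.mean():.0f}, mean after: {after.mean():.0f}")
print(f"p-value: {p_value:.4f}")
```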

It often comes back to having the right talent on board: people who understand computer science, statistics, and business, and who can help bridge the gap between what should be measured and the conclusions that can be drawn from that data.


At Karbo Com, we often generate compelling data by conducting surveys. What advice do you have for companies who want to design their own survey?

It’s incredibly important to understand who will respond to your survey. If you choose to do an online survey, you’ve got to ask yourself, “Do we have a way to measure what kind of people are answering our survey in the first place?” And you need to determine whether that group aligns with the people you actually want to measure. You might only get a certain demographic of person that’s actually willing to take a survey, and that can be a big problem for the conclusions you draw.
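
One standard way to act on that check is post-stratification: compare who answered against who you wanted to hear from, then reweight accordingly. A minimal sketch, with invented age brackets, shares, and answers:

```python
# A minimal sketch of post-stratification: compare the respondent mix
# to the population you want to measure, then reweight. The age
# brackets, shares, and answers are invented for illustration.
import pandas as pd

survey = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "55+", "18-34"],
    "answer":    [1, 0, 1, 1, 0],
})

# Known shares of each group in the population you actually care about.
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

sample_share = survey["age_group"].value_counts(normalize=True)
weights = survey["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)

raw = survey["answer"].mean()
weighted = (survey["answer"] * weights).sum() / weights.sum()
print(f"raw estimate: {raw:.2f}, reweighted estimate: {weighted:.2f}")
```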

Next you need to ensure that people taking your survey will be able to answer your questions truthfully. When it comes to the responses, you want to be very careful not to pigeonhole people into multiple choice responses that aren’t appropriate for them, while still giving yourself a way to analyze the data.

Sometimes we’d rather have an open-ended response to a question to really understand what’s going on in the respondent’s mind without putting words in their mouth; this helps surface hypotheses to test through more surveys or experiments. However, in opting for open-ended questions, we accept that the responses are much harder to analyze and draw conclusions from than the quantitative measures we can create from other question types.

Another important thing to consider is the effect that incentives can have on the quality of results. If you’re offering folks an incentive to take your survey, then it’s likely that a segment of those people are going to just blaze through and not really spend a lot of time on it. So you may need to find ways to account for that.

Think about how you can get people to slow down and actually read the question. Maybe there’s a way to limit how fast they’re moving through the survey. And when you’re reviewing the data, take note of how fast they completed the survey. If someone sped through it and answered “A” for every question, you’ll need to question the validity of that data.
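
A minimal sketch of what that review step might look like in practice, with invented column names and thresholds:

```python
# A minimal sketch of that quality check: flag respondents who finished
# implausibly fast or gave the same answer to every question. Column
# names and the 60-second threshold are invented for illustration.
import pandas as pd

responses = pd.DataFrame({
    "respondent": ["r1", "r2", "r3"],
    "seconds_to_complete": [412, 38, 290],
    "q1": ["A", "A", "B"],
    "q2": ["C", "A", "B"],
    "q3": ["B", "A", "D"],
})

question_cols = ["q1", "q2", "q3"]
too_fast = responses["seconds_to_complete"] < 60                # speeders
straight_lined = responses[question_cols].nunique(axis=1) == 1  # all "A"s

responses["suspect"] = too_fast | straight_lined
print(responses[["respondent", "suspect"]])
```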

This interview has been edited and condensed for clarity.



Want more “In The Know” expert insights? Check out Karbo Com’s interview with entrepreneur and investor Jay Adelson.
