Daniel Mill is Director of Marketing Analytics at The New York Times, where he oversees measurement of acquisition and retention efforts as well as forecasting.
Tell us about your path to becoming a data analyst. How did you get involved with data?
I got my start like a lot of analysts, by landing a role at a boutique modeling shop that had just been bought out by Nielsen. I got into it in a bit of a roundabout way, though. While I had an economics degree, they were mostly interested in me because I had any database experience whatsoever, which came primarily from an internship at a factory my dad worked at. And my “database experience” was actually with an ancient green-screen AS/400-based database for managing warehouse inventory. It was somehow enough to get me the job, and once I was in, it was a total crash course in SQL and eventually econometrics.
What is a data project that has been particularly inspiring to you?
I helped pilot our model-based digital subscription forecasting at The Times, and after a series of pitches, the company agreed to let me take it over in late 2016. At the time we had a pretty predictable seasonal pattern to our subscriptions, and I came into it pretty confident in our accuracy. Shortly after I produced my first forecast, Donald Trump won the presidency, which is something I (and, it feels like, the rest of the forecasting community) didn’t take into account. That first forecast ended up nowhere near accurate, as we saw huge reactions from the public stemming from the result. And since that election, even in these past years, we’ve seen nothing resembling the predictable pattern we once had. It was extremely humbling, and it inspired me to keep improving the process, trying new things, and not getting complacent.
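To make the forecasting problem concrete, here is a minimal sketch in Python of the kind of seasonal baseline model that a one-off shock can break. The SARIMA order, the synthetic data, and the size of the shock are all illustrative assumptions, not The Times’s actual model.

```python
# A toy seasonal subscription forecast, showing how a structural shock
# breaks a model trained on a stable seasonal pattern.
# Synthetic data throughout; this is not The Times's model.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = pd.date_range("2013-01-01", periods=48, freq="MS")
pattern = 10_000 + 2_000 * np.sin(2 * np.pi * months.month.to_numpy() / 12)
history = pd.Series(pattern + rng.normal(0, 300, 48), index=months)

# Fit a simple seasonal ARIMA to the calm pre-shock history...
fit = SARIMAX(history, order=(1, 0, 0),
              seasonal_order=(1, 1, 0, 12)).fit(disp=False)
pred = fit.get_forecast(steps=12)
upper = pred.conf_int().iloc[:, 1]  # upper bound of the default 95% band

# ...then imagine actuals arrive with a large one-off news shock on top.
shocked = pred.predicted_mean + 8_000  # a level shift the model couldn't foresee
print((shocked > upper).all())  # the shock sits far outside the model's band
```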
What are some of the problems you are currently tackling at The New York Times?
One of the more interesting problems we’ve encountered is trying to strike the right balance between short-term and long-term acquisition strategies: a short-term strategy being sales and ads urging users to subscribe now, a long-term strategy being ads reminding people of the good that The Times brings to the world, with the hope that it encourages users to come back to the site and check us out. There’s no tried-and-true playbook for analyzing and optimizing this balance, so we have to get creative, roll up our sleeves, and start bridging different models and different dependent variables, wading through the gray area as we test and learn. I think we’re lucky that our coworkers in other departments recognize the complexity and are patient, as opposed to trying to force premature answers.
Tell us about some of your key metrics.
With millions of daily visitors to the site, we only convert a tiny fraction of them. We have to prioritize our strategy around building user habituation, making The Times a part of people’s daily routine. The majority of our traffic comes from sporadic visitation, so we want to find what encourages users to come back of their own volition. Another key metric is breadth of sections read, as users who explore the site are more likely to subscribe. So we’re not simply measuring how many people come to the site, but monitoring the quality of that traffic.
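As a rough illustration of these two metrics, here is a pandas sketch over a hypothetical page-view log; the column names and the tiny dataset are assumptions for illustration, not The Times’s actual schema.

```python
# Hypothetical habit and breadth metrics over a page-view log.
# Column names (user_id, date, section) are illustrative assumptions.
import pandas as pd

views = pd.DataFrame({
    "user_id": ["a", "a", "a", "b", "b", "c"],
    "date": pd.to_datetime(
        ["2020-01-01", "2020-01-02", "2020-01-05",
         "2020-01-01", "2020-01-01", "2020-01-03"]),
    "section": ["politics", "cooking", "politics",
                "sports", "sports", "opinion"],
})

per_user = views.groupby("user_id").agg(
    active_days=("date", "nunique"),         # habituation: distinct days visited
    section_breadth=("section", "nunique"),  # breadth: distinct sections read
)
print(per_user)
```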
Tell us about the attribution models you are using.
We run the whole gamut of approaches to understanding subscription drivers. We have an internal MMM (Marketing Mix Model) structure, a lightweight user attribution model, and a team built around running onsite experiments. It’s a lot to wade through, and definitely a challenge to weigh differing methodologies (which often overlap) so as to present a holistic picture of the company. But there’s no way backward from here to a world where blind spots are an acceptable solution.
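For readers unfamiliar with the MMM piece, the core idea can be sketched in a few lines: regress a response (subscriptions) on “adstocked” channel spend, where adstock carries part of each period’s advertising effect into later periods. The decay rate, channel count, and synthetic data below are assumptions for illustration, not The Times’s model.

```python
# Minimal marketing-mix-model sketch: subscriptions regressed on
# adstocked channel spend. Synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 104
spend = rng.uniform(0, 100, size=(weeks, 2))  # two hypothetical channels

def adstock(x, decay=0.5):
    """Carry part of each week's spend effect into later weeks."""
    out = np.zeros_like(x)
    carry = 0.0
    for t in range(len(x)):
        carry = x[t] + decay * carry
        out[t] = carry
    return out

X = np.column_stack([adstock(spend[:, j]) for j in range(2)])
subs = 500 + X @ np.array([2.0, 0.8]) + rng.normal(0, 50, weeks)

mmm = LinearRegression().fit(X, subs)
print(mmm.coef_)  # rough per-dollar contribution of each channel
```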
Can you tell us what your data stack looks like?
The data used for analysis is primarily structured and lives in GCP (Google Cloud Platform), which we write SQL against in BigQuery. From there, we have an interim layer for scheduling standard queries through a tool we call Bisque, and for more complex dependencies or production jobs we use Airflow. At this stage, data is communicated throughout The Times via Chartio dashboards as well as other custom applications. The analysts connect to BigQuery primarily through Python or R. In the past, I tried to force cohesion by requiring everyone on my team to use Python to keep the code homogeneous. Over time I’ve worn down and just let each analyst use whichever language they’re happiest in. As long as it’s not, like, COBOL.
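For the analyst connection he describes, a common pattern with Google’s google-cloud-bigquery client looks something like this; the project ID and table are placeholders, and this is a generic sketch rather than The Times’s exact setup.

```python
# Typical pattern for querying BigQuery from Python.
# Project ID and table are placeholders; assumes application-default
# credentials are already configured (e.g. via `gcloud auth`).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project

sql = """
    SELECT section, COUNT(DISTINCT user_id) AS readers
    FROM `my-gcp-project.analytics.page_views`   -- hypothetical table
    GROUP BY section
    ORDER BY readers DESC
"""

df = client.query(sql).to_dataframe()  # results as a pandas DataFrame
print(df.head())
```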
What are some ways you stay ahead of the curve, education-wise, when it comes to marketing analytics?
In terms of my own personal education? I guess the obvious answer is to read… a lot. Allen Downey’s Think Stats is a common reference to brush up on the fundamentals. John Foreman’s Data Smart is an awesome resource if I’m delving into a totally foreign concept. I actually love that it’s written primarily in Excel, as I feel it helps me understand concepts from the ground up. My coworker Gordon Linoff’s book Data Analysis Using SQL and Excel is great for a business analyst, as it goes deep on things that aren’t standard in most data science books, such as survival modeling. (I promise that’s not a plug; I read it seven years before I met him.)
As for learning strategies other than reading, for me personally, it’s to just dive in. We have so many questions from so many different angles here at The Times that you should be able to pick a problem you find personally interesting that also benefits the company financially. I try to keep 10 to 15 percent of my time dedicated to some sort of personal project for The Times.
In your opinion, is there a “right” or “wrong” way to use data in modern business? What are some pitfalls?
There’s definitely a wrong way. The most common way I’ve seen data abused is in analyses where it’s clearly being used to justify a predetermined position rather than arrive at one. Another thing I’d like to see become more common in data science is the explicit use of caveats and assumptions when model output is presented. There’s no such thing as a perfect analysis, and those caveats are important for leveraging the output correctly.
What is your view on data analytics tools? What are some new innovations and what still remains a challenge from a user perspective?
As an analyst I am a bit of a purist about analysis tools. I like using a combination of SQL and Python almost exclusively in my data explorations, analysis, charting, etc., because I like the flexibility and the complete control.
But it would be insane to think this is a full-scale solution for everyone consuming our work or for every question someone may have. So we implement as many tools as we can to connect non-data analysts to our databases for self-service analysis, freeing up our analysts’ time. This has, for the most part, been a success. But the penalty for democratizing data access is emboldening non-data analysts to misconstrue data, for example by mistaking spurious correlations for significant ones.
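As a toy demonstration of that pitfall, two completely independent random walks will often show a strikingly large correlation; the data below is synthetic and exists only to show why statistical training matters.

```python
# Two independent random walks frequently show high "correlation"
# despite having no relationship at all -- a classic trap for
# untrained consumers of dashboards. Synthetic data, illustration only.
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(size=500).cumsum()  # e.g. daily traffic, drifting over time
b = rng.normal(size=500).cumsum()  # e.g. an unrelated metric, also drifting

print(np.corrcoef(a, b)[0, 1])  # often far from 0, purely by chance
```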
Internally, we feel the correction isn’t to limit or govern access, but rather to train everyone across the company to have more data-analytics chops. Data measurement is, and will continue to be, a part of almost everyone’s job, and we have to accept that.
If you were to have a completely different career, what would that be?
I always thought I’d be a lawyer. I have a lot of lawyers in my family and I always gravitated towards jobs where debate is a significant component. Luckily, data science and analytics almost always need to be defended, and I feel like I haven’t actually strayed far from my original ambition.