
What does it mean to be a data-driven Product Manager?

SQL queries? VLOOKUPs? Definitely. Add to that: metrics, data, KPIs. These terms have become commonplace at technology companies. If you're interviewing for a Product Manager role in 2019, I guarantee you'll be asked some of these questions about your past experience:

  • What were your KPIs? Why did you pick those?
  • Talk about a time when you used data to make a decision.
  • What metrics do you use to illustrate if a feature is successful or not?
  • When is it not appropriate to use data to make a decision?
  • Sketch out your data model.
  • Talk about a time when the data suggested you should go in a different direction from your strategy.

You'll need to succinctly explain what you measured, how you measured it, and most importantly why you measured it. You should demonstrate the ability to form a hypothesis and attach the metric(s) you'll use to evaluate it.

Demonstrate an ability to focus. "Out of the 10 things I could have measured, these three were the most important." Focus may be the biggest value a Product Manager can offer: the ability to say these are the few things we should measure and why, and then the awareness to know when those metrics have served their purpose and it's time to measure something else.

Demonstrate the ability to question. Was there a time when the data misled you? How did you adapt? What was your goal and why was monitoring metrics part of the solution? A concerning answer for why you measured a certain metric/KPI is "we've always done it this way". Even if you do resort to status-quo, industry-standard measurements, explain the reason for that. It will demonstrate that at some point you questioned the status quo and received an answer good enough to keep it.

Some practical examples.

Today, tech companies are vying for your attention. YouTube prefers you watch their videos instead of watching Netflix, going to the movies, or reading a book. They want your time allocated to YouTube. This is why when you finish a video the next one is already queued up and a long list of tantalizing recommended videos is in clear view.

The way to measure attention is through Retention & Engagement.

Retention: getting you to come back (e.g. open YouTube X times per month). Engagement: getting you to use the product (e.g. watch 10 videos per day on YouTube).

The gold standard Retention measurement is "N-Day Retention". The goal with measuring Retention is to understand who is coming back to your product and how often. Amplitude, a tool I currently use, has a great overview of measuring N-Day Retention.
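
To make the concept concrete, here is a rough sketch of N-Day Retention over a toy event log of (user, date) pairs. The data is invented and this is not how Amplitude computes it internally; it only illustrates the idea of "came back exactly N days after first being seen".

```python
from collections import defaultdict
from datetime import date, timedelta

# Invented event log: each entry means the user opened the app that day.
events = [
    ("alice", date(2019, 1, 1)), ("alice", date(2019, 1, 8)),
    ("bob",   date(2019, 1, 1)), ("bob",   date(2019, 1, 2)),
    ("carol", date(2019, 1, 2)),
]

def n_day_retention(events, n):
    """Share of users active again exactly n days after their first active day."""
    first_seen = {}
    active_days = defaultdict(set)
    for user, day in events:
        active_days[user].add(day)
        first_seen[user] = min(first_seen.get(user, day), day)
    retained = [u for u in first_seen
                if first_seen[u] + timedelta(days=n) in active_days[u]]
    return len(retained) / len(first_seen)

print(f"7-day retention: {n_day_retention(events, 7):.0%}")  # only alice came back on day 7
```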

Engagement is about measuring who is performing the "key action" in your app and how often. In YouTube's case one of those actions may be "watch video". The gold standard Engagement measures are: DAU ("dow"), WAU ("wow"), MAU ("mm-ow"), and DAU/MAU ("dow-mm-ow"). These stand for Daily Active Users, Weekly Active Users, and Monthly Active Users: the unique number of people that perform the key action (such as "watch video") on a daily, weekly, and monthly basis.

DAU/MAU will demonstrate how engaged your user base is by reflecting the % of monthly active users that come back every day. This can also be a measure of your app's "stickiness". Again, Amplitude has a great overview of this concept. It's also worth noting that although DAU/MAU is an industry standard metric for Engagement, it has shortcomings.

If your focus is Engagement & Retention, DAU, WAU, MAU, and DAU/MAU are great pulse metrics. Define a company-wide standard for an active user. Be very specific. For example: an active user is an account holder who watches at least 10 seconds of video in a 24-hour period. Then measure consistently. These metrics will help track whether your product is improving over time, and signal if things are getting better or worse.
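
Here is a toy sketch of how those pulse metrics fall out of an event log, using the example definition above (an account holder who watches at least 10 seconds of video). The events and dates are made up for illustration.

```python
from datetime import date, timedelta

# Invented watch log: (user, date, seconds_watched).
watch_events = [
    ("alice", date(2019, 1, 31), 120),
    ("bob",   date(2019, 1, 31), 4),    # under 10 seconds: doesn't count as active
    ("bob",   date(2019, 1, 15), 300),
    ("carol", date(2019, 1, 3),  45),
]

def active_users(events, start, end, min_seconds=10):
    """Unique users who performed the key action within [start, end]."""
    return {user for user, day, secs in events
            if start <= day <= end and secs >= min_seconds}

day = date(2019, 1, 31)
dau = active_users(watch_events, day, day)
mau = active_users(watch_events, day - timedelta(days=29), day)
print(f"DAU={len(dau)} MAU={len(mau)} DAU/MAU={len(dau) / len(mau):.0%}")
```

The whole game is keeping the definition of "active" fixed; change the threshold and every trend line breaks.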

You will also need to come up with metrics that are more specialized to your product and goals. What metrics are you going to try to improve that will directly impact DAU, WAU, MAU?

Here is a famous example from the early days of Facebook. When Facebook opened up beyond colleges, they entered hyper user acquisition and retention mode. Facebook's growth team united around the following insight: 7 friends in 10 days. The team discovered that users who added 7 friends within 10 days of creating a Facebook account were likely to remain active Facebook users. Therefore all of their focus (features, experiments, design decisions) went toward getting as many users as they could into the "7 friends in 10 days" cohort, maximizing the metric that became their key measure of engagement and retention.
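
As an illustration, checking membership in such a cohort might look like the sketch below. The signup dates and friend-add events are invented; Facebook's actual growth tooling obviously isn't public.

```python
from datetime import date, timedelta

# Invented data: signup dates and friend-add events.
signups = {"dana": date(2019, 1, 1), "eve": date(2019, 1, 1)}
friend_adds = [("dana", date(2019, 1, d)) for d in range(2, 10)]  # 8 adds
friend_adds += [("eve", date(2019, 1, 5))]                        # 1 add

def in_cohort(user, friends_needed=7, window_days=10):
    """Did this user add enough friends within the window after signup?"""
    window_end = signups[user] + timedelta(days=window_days)
    adds = sum(1 for u, d in friend_adds if u == user and d <= window_end)
    return adds >= friends_needed

hit = [u for u in signups if in_cohort(u)]
print(f"{len(hit)}/{len(signups)} new users hit 7 friends in 10 days")
```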

As a Product Manager you're constantly asking questions. What is the problem that needs to be solved? Why are we solving this problem? Who are we solving this problem for? What would these users do if our solution did not exist? What can we do to solve the problem? How do we prioritize multiple solutions? Which one do we do first? How do we know that our solution will solve the problem? Do users understand our solution? How do we measure if it's working? How do we know that we have the right metrics to measure if it's working?

Maybe one of the biggest challenges of being a Product Manager is figuring out which questions to ask and when to stop. If you're doing user interviews, how many people should you interview? What are their demographics? What are you seeking to learn from them? How many questions should you ask? What types of questions should you ask? What order should they be asked in? How should you collect the data? How should you present the data? How should you interpret it? What should you do with it? Each question's answer may result in additional questions, and each of those questions may result in different outcomes for your user interviews.

And why did you settle on doing user interviews? Did you do them because everyone says you need to talk to users? Or was it because your quantitative data was giving you mixed signals and you have specific questions that you'd like answered from a qualitative approach? What was your goal? Why was that your goal?

Quantitative data can be the most vexing when it comes to questions. How do you know you're looking at the right metrics? What could skew them? What if the data you have is "seasonal" and you don't realize it? What if the downward trend you're seeing is caused by a bug in your app that prevents data from being sent to your analytics system? Should you be measuring this or that? Are you measuring this because it's for your investors? Or did you read a blog post that said you should be measuring this? Should you measure more or less? If you have data for 100 users, is that enough to draw conclusions? Or do you need 1,000 users, 10,000?
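
On the "is 100 users enough?" question, a quick back-of-the-envelope check is the margin of error on a measured proportion, which shrinks with the square root of the sample size. A small sketch:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% confidence half-width for a proportion p measured over n users."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: 50% +/- {margin_of_error(0.5, n):.1%}")
# n=   100: 50% +/- 9.8%
# n=  1000: 50% +/- 3.1%
# n= 10000: 50% +/- 1.0%
```

With 100 users, a measured 50% could plausibly sit anywhere between roughly 40% and 60%; with 10,000 it's pinned within a point. That doesn't answer whether you're measuring the right thing, but it bounds how loudly the number can speak.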

The more questions you ask, the greater the risk of decision paralysis. And yet if you don't ask the right questions, you risk focusing on something users don't care about. And so maybe you need to ask more questions in order to increase your chances of asking the right questions. Yet then you're asking more questions and spending more time answering them instead of acting on them. And just when you think you have enough information to act, you have one more question. And just one more after that.

That's enough, right?

Looking back at 2018, one day into 2019, one word comes to mind: tumultuous. I started a new job, read more books in a year than ever before, and got back into writing and recording original music. I picked up some new life lessons that I'll be trying to implement in 2019. We'll see how that goes.

2018 included some Nordic highlights. I finally got to live the dream of seeing Finnish rock vocalist (ex-Nightwish singer) Tarja Turunen sing a medley of Nightwish songs. Her concert was one of the best live shows I saw in 2018. Thanks to a friend I was also introduced to the Finnish rock group Sturm und Drang. Although the band is unfortunately no longer together, their 2012 album "Graduation Day" is one of the best rock albums I've heard in a long time (here is one of the singles from the album). I also found out from 23andme that I'm 2.4% Finnish! Hence my affinity for everything Nordic.

As an avid podcast listener I've added a new section to this year-in-review, "Current Podcast Subscriptions". It's a snapshot of the podcasts I'm subscribed to right now. Curious to see how this list evolves in years to come.

I wanted to also surface this wonderful Scott Forstall interview from 2017. Scott was the software leader for the first iPhone, and this was his first public interview since he had left Apple. The interview is inspirational and includes some fantastic lessons and stories.

 

Books

Posts

Movies & Documentaries

Albums

Current Podcast Subscriptions

  • a16z
  • Jocko Podcast
  • Radiolab
  • Recode Decode, hosted by Kara Swisher
  • The Ben Shapiro Show
  • The Bill Simmons Podcast
  • The Daily
  • The Joe Rogan Experience
  • The Kevin Rose Show
  • The Lowe Post
  • The Tim Ferriss Show
  • This Week in Startups


Biohacking, while still underground, is steadily making its way into the mainstream.

I describe Biohacking as utilizing technology to produce data that you use to make lifestyle decisions in order to optimize your health. For example, you may have done a 23andme genetic test that indicated, based on your genetics, you are likely to drink slightly less caffeine than the average person. Thus you now make the lifestyle decision to no longer have afternoon coffee so the caffeine doesn't impact your sleep. Congratulations, you're a biohacker.

I've recently acquired an OURA ring. It's made by a startup in Finland and is marketed as the "most accurate sleep and activity tracker". This past week I was puzzled: although I was getting 7 hours of sleep, I was still feeling tired the next day. Here is what my OURA ring showed for Thursday night:

Thursday

You'll notice that my deep sleep was quite low. According to 23andme, my genetic profile makes me "less likely to be a deep sleeper", so I'm already starting at a disadvantage. Thus I need to optimize both my lifestyle and sleep environment in order to maximize the amount of deep sleep I get.

The ideal sleep environment for the average person has two obvious traits: quiet and dark. What may not be as obvious is that the environment needs to be cool. Research has shown that humans sleep better in cooler environments. So for me, when any of these three elements is not ideal, my deep sleep suffers. There are many other lifestyle factors that can impact deep sleep (a large meal right before bed, too much screen time just before sleep, etc.), but the foundation is: quiet, dark, and cool.

In the winter season in New York City my apartment (whose heat I do not control) gets warm. Leaving a window open when it's 30 degrees outside results in a freezing apartment. So I have two options for sleep environments: Siberia or Cancun. I've opted for Cancun by keeping my windows closed, and that's been impacting my deep sleep. So I attempted a Biohacking solution.

Using a ChiliPad I was able to cool my bed to a brisk 62 degrees F. And I received instant gratification:

Saturday

Both my REM and deep sleep improved, and OURA now gave me an 88% sleep score. Clearly I can still do better across the board, but one change already has clear benefits.

As mentioned, there are many other factors that influence sleep, but in my example I started to address a foundational one. And that is the essence of Biohacking. Like a technology hacker, you collect data, analyze it, identify the metric(s) you want to move, implement a solution, measure, optimize, repeat.
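
In code form, the "measure" step of that loop can be as simple as comparing averages before and after a change. These nightly deep-sleep minutes are invented for illustration, not pulled from my actual OURA data:

```python
# Invented nightly deep-sleep minutes, before and after cooling the bed.
baseline = {"Mon": 38, "Tue": 42, "Wed": 35, "Thu": 31}   # windows closed, warm room
with_chilipad = {"Fri": 55, "Sat": 72, "Sun": 66}         # bed cooled to 62 F

def avg(nights):
    return sum(nights.values()) / len(nights)

before, after = avg(baseline), avg(with_chilipad)
print(f"Deep sleep: {before:.0f} min -> {after:.0f} min ({after - before:+.0f} min)")
```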

There are certain skills citizens should develop in school and nurture throughout their careers: skills such as empathy, grit, wonder, and a growth mindset. These are often referred to as non-cognitive, twenty-first century, or intangible skills. They can empower an individual to live a fulfilling and prosperous life.

I've recently started to believe that dealing with ambiguity is another critical non-cognitive skill. This skill requires an individual to be comfortable with committing to an answer when there is no right answer. To be able to take in multiple inputs (perspectives, facts, etc.) and make a decision. To not only strive to make the best decision that can be made, but to be aware of the impact that decision may have. And to use that awareness of potential impact(s) to make an even more optimal decision.

Learning to deal with ambiguity is the antithesis of a multiple-choice test. The latter has one correct answer. Many situations in the real world have no clear right answer. For example, as new technologies emerge, regulations for those technologies can rarely be organized into an A, B, C, or D answer. The answer is there is no right answer. And we will need citizens who are capable of making decisions that are ethical and value-driven. Decisions that, if audited, show that the citizen made the best decision they could given the information available.

While certain questions and decisions will have narrow consequences, others will be far-reaching. In the article "Tech Giants Join Forces to Score AI Chips", the author describes how tech companies need to align around a benchmark for measuring how well computer chips perform artificial intelligence tasks:

While esoteric, the process of devising benchmarks can be surprisingly contentious, involving fierce technical battles and corporate politics. Participants are often the same companies that have heavy stakes in the results of the tests—namely, chip makers and cloud computing providers who use the scores to publicly boast about the advantages of their products and services. It is a bit like inviting students to craft the questions for an exam they’re about to take. 

If you're the mediator, or a representative of a company building such chips, how do you navigate this situation? How do you deal with the ambiguity? How do you approach understanding the perspectives and goals of the other parties? How do you take a stance, and recognize when it's more productive to shift your stance even if it comes at a cost to you?

In a white paper, Senator Mark Warner describes potential policies for regulating social media and tech firms. In one of the sections he talks about "dark patterns":

Dark patterns are user interfaces that have been intentionally designed to sway (or trick) users towards taking actions they would otherwise not take under effective, informed consent. Often, these interfaces exploit the power of defaults - framing a user choice as agreeing with a skewed default option (which benefits the service provider) and minimizing alternative options available to the user.

One drawback of codifying this prohibition in statute is that the law may be slow to address novel forms of these practices not anticipated by drafters.

He gives the example of Facebook asking users to provide access to their address book without giving the user a clear YES / NO option. The product design skews the passive user toward selecting the YES option.

And thus how do you regulate this? As a free product that users can choose not to use, should Facebook be regulated in how they build their product? How will such regulation impact other companies' ability to innovate? How would you introduce regulation for something that seems clear today, but may become ambiguous in the future? Although your stance may make sense today, tomorrow a new company or technology may break it. Would you be able to adapt? As Mark Warner writes, can a law be written in such a way that it anticipates future problem areas?

As new technologies emerge, the level of ambiguity around how those technologies impact our societal infrastructure increases. And as these technologies impact a majority of citizens, the decisions made around these ambiguous questions will have a far-reaching impact. And thus I hope that the citizens in positions to answer ambiguous questions are comfortable and confident in dealing with ambiguity.