
Curators. They help us figure out what to focus our attention on. Out of all of these songs, listen to these. Out of all of the data points, focus on these. Out of all of the books, read these.

They can be people. They can be algorithms (in this post I assume they are people). They are indispensable to those who seek to spend their time in a satisfying and productive way. They help us reach our goals.

Two traits are needed to become a curator. Experience and vision.

Experience comes from delving into the topic. Passionately exploring all facets. Experiencing the good and the bad. Curators have hit roadblocks and they have persevered. They have learned what works, and what doesn't. They wish they had known all of the things they know today back when they first got into the topic. The more they learn, the more they realize how little they know. The process of gaining experience is what empowers them to become a curator. It gives them the ability to start selecting the things we should focus on.

Experience is not enough. In addition they need to have a vision: the reason why they selected the subset of things we should focus on. Where are they leading us by selecting these things? What is the purpose of their curation?

Say you work for a technology company that is collecting a lot of user data related to your product. Data may include app reviews, Google Analytics, and a variety of demographic data. Hundreds of data points. What do you do with all of it? What do you focus on? A data curator can provide a vision and tactics for what to do and what to ignore. Their industry experience paired with conviction empowers them to lead. To help us figure out what to focus on and why.

If you want to become a curator, gain experience and become an expert in the topic. Then develop a vision and have the conviction to tell us where we should go and why.

While listening to the Jocko podcast I was introduced to MCDP 1-3, "Tactics". A publication from the U.S. Marine Corps about winning in combat. It's a philosophical publication that presents a way to think about the art and science of using tactics to achieve victory. Tactics include "achieving a decision", "gaining advantage", "being faster" and "adapting".

The publication is filled with blunt yet profound insight that can be applied beyond the battlefield. For example:

Consequences of a tactical engagement should lead to achieving operational and strategic goals.

If you're going to invest time to engage in a project, an activity, or a meeting among colleagues, don't do it just to do it. Have your goals top-of-mind. Why are you pursuing the activity? Without a clear objective, the consequences may be lost time, or a frustrated colleague wondering why the meeting was scheduled. Yet if the objective is clear, the consequences may be mitigated.

In the final chapter "Making It Happen", there is a discussion on how to deliver a "critique" after a training exercise:

The standard approach for conducting critiques should promote initiative. Since every tactical situation is unique and since no training situation can encompass more than a small fraction of the peculiarities of a real tactical situation, there can be no ideal or school solution. Critiques should focus on the students' rationale for doing what they did. What factors did a student consider, or not consider, in making an estimate of the situation? Were the decisions the student made consistent with this estimate? Were the actions ordered tactically sound? Did they have a reasonable chance of achieving success? How well were the orders communicated to subordinates? These questions should form the basis for critiques. The purpose is to broaden a leader's analytical powers, experience level, and base of knowledge, thereby increasing the student's creative ability to devise sound, innovative solutions to difficult problems.

Critiques should be open-minded and understanding, rather than rigid and harsh. Mistakes are essential to the learning process and should always be cast in a positive light. The focus should not be on whether a leader did well or poorly, but rather on the progress achieved in overall development. We must aim to provide the best climate to grow leaders. Damaging a leader's self-esteem, especially in public, therefore should be strictly avoided. A leader's self-confidence is the wellspring from which flows the willingness to assume responsibility and exercise initiative.

This "standard approach" is straightforward, yet practical and nuanced in it's objective of promoting initiative and helping the leader grow. The objective is to focus on a leader's "progress achieved in overall development".

Each company that I've worked for required companywide "employee reviews". I would fill out templates about what I worked on, and rate myself on a subjective scale. My managers and colleagues would do the same. The process was time-consuming and I rarely learned how to get better.

The critique approach presented in MCDP 1-3 isn't a step-by-step guide to delivering a critique. It's a mindset. It presents an objective, a way to think about achieving that objective, and some tactical questions for getting there. It's up to the company to take this approach and adapt it to their situation and needs.

I believe many organizations could benefit by reassessing their approach for conducting critiques, because the "standard approach for conducting critiques" is not so standard outside the U.S. Marine Corps.

One of my guitar heroes is John Petrucci from the band Dream Theater. John is widely recognized as one of the best rock guitar players in the world. He's also composed some of my favorite guitar solos. One of them is in the song "Under A Glass Moon".

The minute-long solo is very technical. Mastering it requires timing, flawless technique, and confidence. It's a complex solo that could take months for a seasoned guitar player to master. Performing it at John's level (playing it clean, in time, while being relaxed and confident in every note) requires a particular approach to learning it.

One approach to learning this solo is to learn the entire thing, and keep playing it over and over until you've mastered it. This approach will likely not yield the results you seek. By playing the entire solo you don't end up focusing on the specific sections that you may struggle with. Therefore those sections remain messy, and you may not master the entire solo.

John takes a different approach when teaching the solo. Here he talks about one section:

The next thing is to master a sweep, hammer-on, pull-off combination lick. We'll break it into smaller pieces.

In a minute-long solo this section lasts for about one second. And yet there is a lot happening in that one second. A lot of technique and nuance that needs attention in order to be performed well. Now if your approach is playing the entire solo over and over, how much attention are you giving to this one-second section? One second as you fly through it.

Instead John recommends isolating this one second, breaking it down to its core components (getting the timing of the right and left hands, getting the fingering down), and playing it until you've mastered it. Start slow, build up speed. Then after you've mastered it, continue to the next section.

This idea of breaking it down to smaller pieces has a much broader application. If you want to run a marathon, start by running a mile. If you want to be able to cook a multi-course dinner, start by making an entrée.

The tech world excels at this. Strong product teams seek to break down large problems into smaller pieces and solve those pieces one at a time. For if you don't break things down, and just go straight into running the marathon, you likely won't get the results you seek.

What does it mean to be a data-driven Product Manager?

SQL queries? VLOOKUPs? Definitely. Add to that: metrics, data, KPIs. These terms have become commonplace at technology companies. If you're interviewing for a Product Manager role in 2019, I guarantee you'll be asked some of these questions about your past experience:

  • What were your KPIs? Why did you pick those?
  • Talk about a time when you used data to make a decision.
  • What metrics do you use to illustrate if a feature is successful or not?
  • When is it not appropriate to use data to make a decision?
  • Sketch out your data model.
  • Talk about a time when the data suggested you should go in a different direction from your strategy.

You'll need to succinctly explain what you measured, how you measured it, and most importantly why you measured it. You should demonstrate the ability to form a hypothesis and associate the metric(s) you'll use to evaluate it.

Demonstrate an ability to focus. "Out of the 10 things I could have measured these three were most important". Focus may be the biggest value a Product Manager can offer. The ability to say these are the few things we should measure and why. And then have the awareness to know when those metrics have served their purpose and it's time to measure something else.

Demonstrate the ability to question. Was there a time when the data misled you? How did you adapt? What was your goal and why was monitoring metrics part of the solution? A concerning answer for why you measured a certain metric/KPI is "we've always done it this way". Even if you do resort to status-quo industry-standard measurements, explain the reason for that. It will demonstrate that you at some point questioned the status quo, and received a sufficient answer that resulted in you maintaining it.

Some practical examples.

Today, tech companies are vying for your attention. YouTube prefers you watch its videos instead of watching Netflix, going to the movies, or reading a book. They want your time allocated to YouTube. This is why when you finish a video the next one is already queued up and a long list of tantalizing recommended videos is in clear view.

The way to measure attention is through Retention & Engagement.

Retention: getting you to come back (e.g. open YouTube X times per month). Engagement: getting you to use the product (e.g. watch 10 videos per day on YouTube).

The gold standard Retention measurement is "N-Day Retention". The goal of measuring Retention is to understand who is coming back to your product, and how often. Amplitude, a tool I currently use, has a great overview of measuring N-Day Retention.
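To make the idea concrete, here's a minimal sketch of computing N-Day Retention straight from a raw event log, using one common definition: a user counts as retained on day N if they come back exactly N days after their first visit. The table layout and column names below are assumptions for illustration, not Amplitude's implementation.

```python
import pandas as pd

# Illustrative event log (assumed schema): one row per user action.
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 3, 1],
    "timestamp": pd.to_datetime([
        "2019-03-01", "2019-03-08", "2019-03-01",
        "2019-03-02", "2019-03-05", "2019-03-15",
    ]),
})

def n_day_retention(events: pd.DataFrame, n: int) -> float:
    """Share of users who come back exactly N days after their first visit."""
    days = events.assign(day=events["timestamp"].dt.normalize())
    first_day = days.groupby("user_id")["day"].min().rename("first_day")
    days = days.join(first_day, on="user_id")
    offsets = (days["day"] - days["first_day"]).dt.days
    retained = days.loc[offsets == n, "user_id"].nunique()
    return retained / days["user_id"].nunique()

print(f"Day-7 retention: {n_day_retention(events, 7):.0%}")  # 1 of 3 users -> 33%
```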

Engagement is about measuring who is performing the "key action" in your app, and how often. In YouTube's case one of those actions may be "watch video". The gold standard Engagement measures are DAU ("dow"), WAU ("wow"), MAU ("mm-ow"), and DAU/MAU ("dow-mm-ow"). These stand for Daily Active Users, Weekly Active Users, and Monthly Active Users: the number of unique people who perform the key action (such as "watch video") on a daily, weekly, and monthly basis.

DAU/MAU demonstrates how engaged your user base is by reflecting the percentage of monthly active users that come back every day. This can also be a measure of your app's "stickiness". Again, Amplitude has a great overview of this concept. It's also worth noting that although DAU/MAU is an industry-standard metric for Engagement, it has shortcomings.
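Here's a rough sketch of how DAU, WAU, MAU, and the DAU/MAU ratio could be computed from the same kind of event log. The schema and the averaging choice are assumptions for illustration, not a standard implementation.

```python
import pandas as pd

# Illustrative event log (assumed schema): one row per key action performed.
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 3, 1, 2],
    "timestamp": pd.to_datetime([
        "2019-03-01", "2019-03-02", "2019-03-02",
        "2019-03-10", "2019-03-20", "2019-03-25",
    ]),
})

def active_users(events: pd.DataFrame, freq: str) -> pd.Series:
    """Unique users per period: freq='D' (DAU), 'W' (WAU), or 'M' (MAU)."""
    period = events["timestamp"].dt.to_period(freq)
    return events.groupby(period)["user_id"].nunique()

dau, wau, mau = (active_users(events, f) for f in ("D", "W", "M"))

# Stickiness: share of a month's active users who show up on an average day.
print(f"DAU/MAU stickiness: {dau.mean() / mau.mean():.0%}")
```

One caveat of this toy version: it only averages over days that actually had activity, so a real pipeline would want to fill in the zero-activity days before taking the mean.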

If your focus is Engagement & Retention, DAU, WAU, MAU, and DAU/MAU are great pulse metrics. Define a company-wide standard for what counts as an active user. Be very specific. For example: an active user is an account holder that watches at least 10 seconds of video in a 24-hour period. Then measure consistently. These metrics will help track whether your product is improving over time, and signal if things are getting better or worse.
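As a sketch of what such a definition looks like in practice, here's the 10-second example applied to a hypothetical watch-event table before counting active users. The schema is assumed, and grouping by calendar day is a simplification of "24-hour period".

```python
import pandas as pd

# Assumed schema: one row per video-watch event, with seconds watched.
watch_events = pd.DataFrame({
    "user_id":         [1, 2, 2, 3],
    "timestamp":       pd.to_datetime(["2019-03-01 08:00", "2019-03-01 09:00",
                                       "2019-03-01 21:00", "2019-03-02 12:00"]),
    "seconds_watched": [45, 4, 3, 120],
})

# Hypothetical company-wide definition: an active user is an account holder
# who watches at least 10 seconds of video within a (calendar) day.
daily_seconds = (watch_events
                 .groupby(["user_id", watch_events["timestamp"].dt.date])
                 ["seconds_watched"].sum())
active = daily_seconds[daily_seconds >= 10]
print(active)  # users 1 and 3 qualify; user 2 watched only 7 seconds that day
```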

You will also need to come up with metrics that are more specialized to your product and goals. What metrics are you going to try to improve that will directly impact DAU, WAU, MAU?

Here is a famous example from the early days of Facebook. When Facebook opened up beyond colleges, they entered hyper user acquisition and retention mode. Facebook's growth team united around the following insight: 7 friends in 10 days. The team discovered that users that added 7 friends within 10 days of creating a Facebook account were likely to remain active Facebook users. Therefore their focus (features, experiments, design decisions) was channeled into getting as many users as they could into the "7 friends in 10 days" cohort. This "north star" behavior metric became one of their primary metrics for growing their engagement KPIs.
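A rough sketch of how a cohort like that could be identified from signup and friend-add data. The table layout, names, and thresholds below are assumptions for illustration, not how Facebook actually computed it.

```python
import pandas as pd

# Assumed schema: a signup date per user, plus one row per friend added.
signups = pd.Series(
    pd.to_datetime(["2009-01-01", "2009-01-03"]),
    index=pd.Index([1, 2], name="user_id"),
    name="signup",
)
friend_adds = pd.DataFrame({
    "user_id":  [1] * 8 + [2] * 3,
    "added_at": pd.to_datetime(["2009-01-02"] * 8 + ["2009-01-20"] * 3),
})

# Count friends added within 10 days of signup, then flag who cleared 7.
adds = friend_adds.join(signups, on="user_id")
in_window = adds[(adds["added_at"] - adds["signup"]).dt.days.between(0, 10)]
friend_counts = in_window.groupby("user_id").size()
cohort = friend_counts[friend_counts >= 7].index.tolist()
print(f"'7 friends in 10 days' cohort: {cohort}")  # user 1 qualifies, user 2 doesn't
```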

As a Product Manager you're constantly asking questions. What is the problem that needs to be solved? Why are we solving this problem? Who are we solving this problem for? What would these users do if our solution did not exist? What can we do to solve the problem? How do we prioritize multiple solutions? Which one do we do first? How do we know that our solution will solve the problem? Do users understand our solution? How do we measure if it's working? How do we know that we have the right metrics to measure if it's working?

Maybe one of the biggest challenges of being a Product Manager is figuring out which questions to ask and when to stop. If you're doing user interviews, how many people should you interview? What are their demographics? What are you seeking to learn from them? How many questions should you ask? What types of questions should you ask? What order should they be asked in? How should you collect the data? How should you present the data? How should you interpret it? What should you do with it? Each question's answer may result in additional questions, and each of those questions may result in different outcomes for your user interviews.

And why did you settle on doing user interviews? Did you do them because everyone says you need to talk to users? Or was it because your quantitative data was giving you mixed signals and you have specific questions that you'd like answered from a qualitative approach? What was your goal? Why was that your goal?

Quantitative data can be the most vexing when it comes to questions. How do you know you're looking at the right metrics? What could skew them? What if the data you have is "seasonal" and you don't realize it? What if the downward trend you're seeing is caused by a bug in your app that prevents data from being sent to your analytics system? Should you be measuring this or that? Are you measuring this because it's for your investors? Or did you read a blog post that said you should be measuring this? Should you measure more or less? If you have data for 100 users, is that enough to draw conclusions? Or do you need 1,000 users, or 10,000?

The more questions you ask, the greater the risk for decision paralysis. And yet if you don't ask the right questions, you risk focusing on something users don't care about. And so maybe you need to ask more questions in order to increase your chances to ask the right questions. Yet then you're asking more questions and spending more time answering them instead of acting on them. And just when you think you have enough information to act, you have one more question. And just one more after that.

That's enough, right?