As a Product Manager, you're constantly asking questions. What is the problem that needs to be solved? Why are we solving this problem? Who are we solving this problem for? What would these users do if our solution did not exist? What can we do to solve the problem? How do we prioritize multiple solutions? Which one do we do first? How do we know that our solution will solve the problem? Do users understand our solution? How do we measure if it's working? How do we know that we have the right metrics to measure if it's working?

Maybe one of the biggest challenges of being a Product Manager is figuring out which questions to ask and when to stop. If you're doing user interviews, how many people should you interview? What are their demographics? What are you seeking to learn from them? How many questions should you ask? What types of questions should you ask? What order should they be asked in? How should you collect the data? How should you present the data? How should you interpret it? What should you do with it? Each question's answer may result in additional questions, and each of those questions may result in different outcomes for your user interviews.

And why did you settle on doing user interviews? Did you do them because everyone says you need to talk to users? Or was it because your quantitative data was giving you mixed signals and you have specific questions that you'd like answered from a qualitative approach? What was your goal? Why was that your goal?

Quantitative data can be the most vexing when it comes to questions. How do you know you're looking at the right metrics? What could skew them? What if the data you have is "seasonal" and you don't realize it? What if the downward trend you're seeing is caused by a bug in your app that prevents data from being sent to your analytics system? Should you be measuring this or that? Are you measuring this because it's for your investors? Or did you read a blog post that said you should be measuring this? Should you measure more or less? If you have data for 100 users, is that enough to draw conclusions? Or do you need 1,000 users, or 10,000?

The more questions you ask, the greater the risk for decision paralysis. And yet if you don't ask the right questions, you risk focusing on something users don't care about. And so maybe you need to ask more questions in order to increase your chances to ask the right questions. Yet then you're asking more questions and spending more time answering them instead of acting on them. And just when you think you have enough information to act, you have one more question. And just one more after that.

That's enough, right?