This week, we are joined by Crystal Widjaja, a data-driven product expert. Crystal has held product and data leadership roles at companies including Kumu and Gojek, a super app and delivery-and-logistics platform in Southeast Asia. In those roles, she led both data and product teams.
Crystal is also a prolific contributor to Reforge, providing valuable insights through our programs, blog, and artifacts. If you enjoy what you hear in the podcast, you can find more of Crystal's rich insights at [Reforge.com](http://reforge.com/).
This week, we will be discussing:
Jason Cohen's blog post on "Metrics That Cannot Be Measured Even in Retrospect" and the challenges faced by data-driven product leaders. 📊
The lessons that product leaders can learn from the failure of Convoy, a major player in the freight brokerage business. 💡
We're starting with Jason Cohen's article in which he makes three key points:
1. Some widely discussed metrics, such as the impact of a single feature on product revenue, are not easily quantifiable. 🔢
Why? Customers often ask for many features during the buying process, but they end up not using them. However, this doesn't mean that these features don't affect revenue or aren't important. ❓💰❗
Our take? It's a bit crazy how many product management books and blogs tell you to measure the impact of a feature on acquisition, retention, and monetization. 📚📈
Instead, use TARS, a framework that stands for Target Audience, Adoption, Retention, and Satisfaction (see the sketch after this list). ✨🎯😃
1. Target Audience: Only measure with the context of who your audience is in mind. 🎯
2. Adoption: Of the target audience, how many tried it? 🚀
3. Retention: Of those people, how many come back and use it again? ↩️
4. Satisfaction: This is the hardest to measure, but we want to know if they enjoy using it versus using it out of hated necessity. 😊😡
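To make TARS concrete, here's a minimal sketch (ours, not from the episode) of computing adoption and retention from a raw event log with pandas. The DataFrame, its columns (`user_id`, `event`, `week`), and the target-audience segment are all invented for illustration:

```python
import pandas as pd

# Hypothetical event log: one row per feature use, with the week it happened.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 4],
    "event":   ["feature_used"] * 6,
    "week":    [1, 2, 1, 1, 1, 2],
})

# Target Audience: suppose users 1-5 are the segment the feature was built for.
target_audience = {1, 2, 3, 4, 5}

# Adoption: share of the target audience that tried the feature at least once.
adopters = set(events["user_id"]) & target_audience
adoption_rate = len(adopters) / len(target_audience)

# Retention: of the adopters, how many came back in a later week?
first_week = events.groupby("user_id")["week"].min()
returned = {
    uid for uid in adopters
    if (events.loc[events["user_id"] == uid, "week"] > first_week[uid]).any()
}
retention_rate = len(returned) / len(adopters) if adopters else 0.0

print(f"Adoption: {adoption_rate:.0%}, Retention: {retention_rate:.0%}")
# Satisfaction usually comes from surveys (e.g., CSAT), not the event log.
```

Note how each step narrows the denominator: adoption is measured against the target audience only, and retention against adopters only, which is what keeps the numbers honest.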
Or, instead of measuring a feature's positive impact on revenue, Shishir Mehrotra suggests evaluating its impact by measuring the churn you'd see if you removed the feature. ⏪➡️⚖️
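Here's a hedged sketch of what that "remove it and watch churn" test could look like, assuming you can randomly hold the feature back from a small group of users; every number below is fabricated:

```python
import random

random.seed(42)
users = list(range(1000))
random.shuffle(users)
holdout, control = set(users[:100]), set(users[100:])  # 10% lose the feature

# In practice churned_users would come from your retention pipeline; faked here.
churned_users = set(random.sample(users, 120))

churn_holdout = len(holdout & churned_users) / len(holdout)
churn_control = len(control & churned_users) / len(control)

# If holdout churn is meaningfully higher, the feature is earning its keep
# even if nobody ever mentions it in the buying process.
print(f"Holdout churn: {churn_holdout:.1%} vs. control churn: {churn_control:.1%}")
```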
2. Measuring the impact of incremental activities on customer churn can be challenging.
Why? There's often a long lag between an action happening and the customer churning, which makes it effectively impossible to attribute the churn to any single action.
Crystal thinks this is the wrong point to make. In general, metrics sit on a sliding scale from difficult to easy to measure, but nothing is truly impossible:
1. There are some things that are really hard to measure. 😓
2. There are other things that are easy to measure but not always reliable. 😬
3. And there are things that are both reliable and easy to measure. 😄
The real question, for the "impossible" end of the scale, is whether we can come up with a proxy that's good enough. Do I really need perfect data? "You can come up with a proxy for everything, right?" - Crystal
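One way to answer the "good enough" question: check how well an easy-to-measure leading indicator tracks the hard-to-measure outcome on historical data where you do know the outcome. A minimal sketch, with invented columns and numbers:

```python
import pandas as pd

# Historical cohort where the slow outcome (churn by week 12) is already known.
history = pd.DataFrame({
    "active_days_wk1": [6, 5, 1, 0, 4, 2, 0, 7],  # candidate proxy, cheap to measure
    "churned_by_wk12": [0, 0, 1, 1, 0, 1, 1, 0],  # ground truth, only known later
})

# A strong negative correlation suggests the proxy is good enough to act on
# long before the churn itself becomes observable.
print(history["active_days_wk1"].corr(history["churned_by_wk12"]))
```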
3. Measuring the probability of risks is more of a "cover your ass" activity than a genuinely useful one. 🤔
Why? Whether something has a 30% or a 70% probability of happening, it could still happen. So, "don't put probabilities on the slide at all. Only list the risks that you feel are so important that they either merit action or awareness." ❌📊
Fareed agrees - there are only two types of risks that matter:
1. Ones with a very high probability of happening ❗
2. Ones that are so severe that their impact is existential ❗
Anything other than those two should just be a "deal with it when it happens" situation. 💼🕒
Do a pre-mortem: sit down in a room, imagine the project has failed, and ask why it failed. Then figure out which of those failure points you want to preempt or solve against. 💭💡
Avoid a "Bike Shedding Discussion." "You are designing a nuclear factory, but everyone's spending all this time deciding, where should we put the bike storage shed? That must be the most important thing to talk about and define, and I'm just gonna force the conversation on this smaller piece, versus the like, building of the nuclear factory." - Crystal 🚳🚲🏭