Imagine you’ve made two decisions. One has resulted in a loss of 10 thousand dollars, while the other resulted in a gain of 10 thousand dollars. Would you say that the former was a bad decision, and the latter was a good one?
The first response that usually comes to everyone’s mind is “of course, the second decision was better”. But this is not necessarily the case. Can you think of a scenario where your response would be the opposite?
I’ve been thinking and reading about decision-making for many years while trying to put everything I learned into practice. Below, I summarize the strategies and mental models that I personally found most useful.
The Process vs. The Outcome
We all have a natural tendency to judge decisions based on their outcomes. This is not the worst heuristic, as there is a correlation between the quality of decisions and outcomes. But this heuristic has a major flaw — it doesn’t account for luck and incomplete information.
An individual can make a bad impulse decision yet get lucky and end up with a positive outcome. An extreme example is buying a lottery ticket and winning. A less extreme example is making a high-stakes, risky business decision without putting much thought into it. Should those decisions still be considered good?
On the other hand, another individual might dedicate the appropriate time to making a business decision, follow a thorough process, analyze all the available information, draw reasonable conclusions, and make an educated long-term bet. Now imagine that the business environment transforms over the following years in a way that could not have been predicted, so the decision leads to a negative outcome. For example, a product launch fails. Does that mean the decision was bad?
Arguably, the first individual made a bad decision, and the second one made a good one. This might contradict conventional wisdom, but if you really think about it, the first person simply got lucky after following a bad process. And the second person did everything they could to make the optimal decision and took a calculated risk. That the outcome wasn’t positive in this particular case doesn’t tell us much about the person’s decision-making ability or judgment.
One reason it’s so hard to evaluate the process instead of the outcome in a business context is that it flies in the face of many other best practices. Hiring managers believe that past performance is the best predictor of success. Leaders extol a results-driven culture. OKRs are set around outcomes — not processes.
But a deeper reason is that it’s more difficult to evaluate and establish processes. It’s much easier to evaluate results. Evaluating processes would require a deeper understanding of the context and how that context has evolved over time. It would also require disentangling various drivers of the outcome. You might be thinking… Who’s got time for that?
Realistically, I don’t see the outcome-based evaluation of decisions going away completely any time soon. But it doesn’t mean we cannot iteratively improve the process so we get better over time.
So what can we do to improve the process?
A simple yet powerful idea is to document big decisions. Just like venture capitalists prepare detailed investment memos before investing in startups where they weigh all the pros and cons, we can also write summaries for big decisions in business and life.
In addition to outlining all the available options and considering their pros and cons, here are some additional things to consider.
The Main Goal and Frameworks
Asking “what are we optimizing for?” might sound obvious, but clarifying it up front can simplify the entire process.
Asking this question is particularly useful when a group of people is making a decision. You might assume that everyone has the same goal in mind, but once you ask, you might realize that people have different opinions about which goal or metric matters most.
And if you’re into spreadsheets, you can even come up with a list of criteria and assign weights to calculate the expected outcomes. For example:
Expected value = the probability of an outcome * the payoff of that outcome, summed over all possible outcomes.
The payoff can be expressed either in monetary value or in subjectively perceived value — for example, how happy you would be on a 10-point scale.
Developing a framework like this can be useful for thinking through and clarifying what really matters to you and for aligning with others. But quantifying things can also give you a false sense of confidence, so I’d use this approach with caution.
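To make the formula concrete, here is a minimal sketch in Python. The two options, their probabilities, and their payoffs are all invented for illustration:

```python
# A minimal sketch of the expected-value idea above.
# All probabilities and payoffs are made up for illustration.

def expected_value(outcomes):
    """Sum of probability * payoff over all possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Option A: a safe bet -- one near-certain, modest payoff.
option_a = [(0.95, 10_000), (0.05, 0)]

# Option B: a risky bet -- a coin flip between a big win and a loss.
option_b = [(0.5, 30_000), (0.5, -4_000)]

print(expected_value(option_a))  # 9500.0
print(expected_value(option_b))  # 13000.0
```

On expected value alone, option B wins, which is exactly why the worst-case check discussed later matters too.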
Want to take it up a notch? Look into decision trees.
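As a hypothetical illustration of how a decision tree is evaluated, the sketch below “rolls back” a small tree by backward induction: chance nodes become expected values, and decision nodes pick the best branch. Every branch name, probability, and payoff here is invented:

```python
# Hypothetical decision tree: launch a product now vs. run a pilot first.
# All probabilities and payoffs are invented for illustration.

def rollback(node):
    """Evaluate a tree by backward induction.
    - A leaf is a number (its payoff).
    - A chance node is ("chance", [(prob, subtree), ...]) -> expected value.
    - A decision node is ("decide", {option: subtree, ...}) -> best option's value.
    """
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "chance":
        return sum(p * rollback(subtree) for p, subtree in branches)
    # Decision node: choose the branch with the highest rolled-back value.
    return max(rollback(subtree) for subtree in branches.values())

tree = ("decide", {
    "launch now":  ("chance", [(0.5, 100_000), (0.5, -60_000)]),
    "pilot first": ("chance", [(0.7, 80_000), (0.3, -10_000)]),
})

print(rollback(tree))
```

Here “launch now” rolls back to an expected 20,000 while “pilot first” rolls back to 53,000, so the decision node picks the pilot.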
Expected Value vs. the Worst Possible Downside
Besides considering the most likely outcome or the expected value, I find it very useful to consider the worst possible downside.
Imagine you have two investment opportunities. One is expected to grow 7% annually on average over the long term, and the other is expected to grow 10%. Based on this information alone, the second one looks like a better investment. At least until you account for volatility. The extreme case of volatility is that the investment goes to zero or close to zero.
I personally find it useful to think about the worst possible outcome I would be comfortable with in addition to considering the expected value.
What if I told you that the value of the first investment has less than a 0.01% chance of dropping to near zero, while the value of the second has a 20% chance of doing so? How would that change your decision?
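A quick sketch shows why expected value alone can be misleading here. The outcome distributions below are invented to roughly match the numbers in the example:

```python
# The two hypothetical investments from the text, modeled as simple
# outcome distributions. Probabilities and returns are invented to
# roughly match the example (about 7% vs. about 10% expected growth).

def expected_return(dist):
    """Probability-weighted average annual return."""
    return sum(p * r for p, r in dist)

def worst_plausible(dist, min_prob=0.001):
    """Worst outcome whose probability is at least `min_prob`."""
    return min(r for p, r in dist if p >= min_prob)

# (probability, annual return); -1.0 means the investment goes to zero.
safe  = [(0.9999, 0.07), (0.0001, -1.0)]
risky = [(0.80, 0.375), (0.20, -1.0)]

for name, dist in (("safe", safe), ("risky", risky)):
    print(name, round(expected_return(dist), 3), worst_plausible(dist))
```

The risky option has the higher expected return, but its worst plausible outcome is losing everything, which is exactly the trade-off the expected value hides.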
A Portfolio of Bets
Your response to the previous question might have been “it depends”. It might depend on a number of things — such as your risk aversion, your personal situation, or whether this is your only investment or one of many.
The portfolio approach is standard in finance. It’s been shown that most people would be better off allocating most of their investments to low-fee funds that track the overall market rather than picking individual stocks. Similarly, venture capitalists spread their investments across many startups.
And a similar framework can be applied in business and life — it’s sometimes useful to ask yourself a question “am I making and testing multiple bets here or primarily committing to one thing?”
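One way to see the appeal of the portfolio approach is a back-of-the-envelope calculation: if each bet is an independent long shot, the chance that at least one pays off grows quickly with the number of bets. The 10% success rate below is an invented figure:

```python
# Chance that at least one of n independent bets succeeds, assuming
# each has the same (invented) 10% probability of a big outcome.

p_success = 0.10

for n in (1, 5, 10, 20):
    # P(at least one success) = 1 - P(every bet fails)
    p_at_least_one = 1 - (1 - p_success) ** n
    print(f"{n:2d} bets -> {p_at_least_one:.0%} chance of at least one win")
```

With 20 such bets, the chance of at least one success climbs to roughly 88%, even though any single bet is a 90% loser. This is the arithmetic behind spreading bets rather than committing everything to one.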
Optionality vs. Efficiency
It’s useful to consider the trade-off between future optionality and efficiency.
Some business decisions might bring in more revenue or reduce costs, but they reduce your future options. Many companies optimize for efficiency so they can grow faster or improve their bottom line. Generally, public markets and quarterly financial reporting incentivize them to do so. But sometimes over-optimizing leads to reduced optionality in the future.
Nassim Taleb developed and popularized the idea of antifragility, which is closely related to optionality. Wikipedia describes antifragility as “a property of systems that increase in capability to thrive as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures”.
What’s interesting is that building such a system, or putting yourself in such a situation, sometimes requires a certain redundancy — which is the opposite of efficiency.
The PPE shortage in the US during the 2020 COVID-19 pandemic is sometimes viewed as a recent example of optimizing for efficiency at the expense of optionality. If you focus on efficiency and optimize for short-term demand, you might choose to store only a limited amount “just in case” and purchase the equipment from countries that can provide it at the lowest cost. However, if you want to optimize for optionality, resilience, and antifragility, you might choose to maintain more redundant storage and manufacture some equipment internally — so you don’t rely on other countries as much.
Similarly, certain choices in life and business create more options in the future than others.
Let People Form Their Own Opinions First
To quote Daniel Kahneman, a Nobel prize-winning psychologist and economist:
“Subjectively, it feels like you believe in something because you have the arguments for it. But it works the other way around. You believe in the conclusion, and then you create supporting arguments. That’s fundamental. Why do people believe in these conclusions? Partly because people we love and trust believe in the same conclusion.”
Our brains are naturally biased to pay attention to what others believe and to follow them. So if one person in a group confidently announces their opinion first, many people will likely follow. You can see this effect on steroids when that first-to-speak person is the boss.
This is why it’s usually better to ask people to form their opinions first and, ideally, write them down before discussing them with others. I made sure my team used this strategy when running customer research and focus groups with multiple people being interviewed at a time. Before discussing a topic as a group, we would present participants with a question and ask them to write down their opinions before the discussion. This always led to a greater diversity of ideas.
First Principles Thinking
It’s also been shown that people are more likely to accept evidence that supports their pre-existing beliefs and more likely to challenge evidence that contradicts them. Motivated reasoning and confirmation bias are well researched.
First-principles reasoning is the opposite of that. You don’t reason by analogy, assume that things can only be the way they are today, or copy what others believe.
Instead, you start with the underlying, well-established facts and build your beliefs and conclusions independently based on these facts.
Or you can start by considering commonly held beliefs and deconstructing them with the “five whys” method to see if they are justified or not.
Elon Musk explains first-principles reasoning in this amazing two-minute video:
Assign Probabilities to Outcomes
Philip E. Tetlock, co-creator of The Good Judgment Project and a professor at the University of Pennsylvania, recommends assigning specific probabilities to outcomes instead of using vague words like “likely” or “possible”. Language is imprecise. There is a big difference between “possible” and “probable”.
And “probable” means different things to different people. For example, this chart shows what different people infer when they hear these vague estimates:
More context and charts can be found on the University of Illinois website and in this GitHub article.
I remember how our strategy professor at the UC Berkeley Haas School of Business made us quantify the probabilities of all kinds of events. This doesn’t come naturally, and it isn’t a panacea, but it usually elevates the conversation to another level.
In his book Superforecasting: The Art and Science of Prediction, Philip Tetlock also recommends considering many decisions when evaluating the accuracy of one’s predictions (see “a portfolio of bets” above). That’s how “superforecasters” were evaluated. Once you know which probabilities an individual assigned to certain events and whether those events actually took place, you can estimate their overall accuracy over time.
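The scoring rule used to evaluate forecasters in this way is the Brier score. A minimal sketch, with made-up forecasts and outcomes, looks like this:

```python
# Scoring a forecaster's track record with the Brier score, the measure
# used to evaluate "superforecasters". Forecasts and outcomes are made up.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and reality.
    0.0 is perfect; 0.25 is what always guessing 50% earns; lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities one forecaster assigned to four events...
forecasts = [0.9, 0.2, 0.7, 0.6]
# ...and whether each event actually happened (1) or not (0).
outcomes = [1, 0, 1, 0]

print(brier_score(forecasts, outcomes))  # 0.125 (approximately)
```

Because the score punishes confident misses much more than timid ones, it rewards forecasters who are both well calibrated and decisive.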
That being said, operating with specific numbers can give a false sense of security and make people overconfident in the face of unpredictable events.
Don’t Neglect the Emotions
Research shows that emotions play a critical role in our decision-making.
For example, one study published by Jennifer Lerner, a Harvard Kennedy School professor, concluded that “fear increased risk estimates and plans for precautionary measures; anger did the opposite” and that “emotions also predicted diverging public policy preferences”. Another study by the same professor found that sad people were willing to spend more on welfare recipients than angry people, as they were less likely to blame poor people for their own misfortune. And this study concluded that “fearful people made pessimistic judgements of future events whereas angry people made optimistic judgements”. And yet another one found that “incidental emotions can influence decisions even when real money is at stake”. The list goes on.
So aiming to be mindful of emotions and accounting for their influence might be a wiser strategy than trying to be the real-life version of Star Trek’s Spock.
As a side-note, you might find it interesting that neuroscientists who conducted this study showed that even brain activation patterns differ based on the context and our perception of the decision. Here are the differences between “certain”, “risk”, and “ambiguous” decisions:
The overall conclusion is that certain brain areas are more strongly activated when people make “risky” and “ambiguous” decisions. For example, the frontal pole of the prefrontal cortex was strongly activated in the “ambiguous” condition.
Don’t Sweat the Small Stuff
Very few decisions require thinking this deeply. In fact, most decisions should be made quickly.
Sometimes we should stop ourselves from overthinking decisions that don’t matter much so we can save time and energy for decisions that really matter. Besides, speed is an enormous advantage in itself, and moving quickly also generates additional information that, in turn, helps us make better decisions.
Further Reading
- Book: Thinking, Fast and Slow by Daniel Kahneman
- Book: Superforecasting: The Art and Science of Prediction by Philip E. Tetlock
- Book: Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts by Annie Duke
If you like my articles, please consider signing up for radically infrequent email updates. And say hi on Twitter! 👋