As a growing body of research shows, our brains are not quite the logical, rational decision-making machines we think they are – or at least, wish they could be. Instead, our brains take shortcuts; we substitute easier questions for difficult ones, often without realizing it, and respond accordingly with our words and our actions. This can be especially problematic in the world of financial planning, where we often ask clients to make difficult decisions with limited information.
As a result, questions like “what is an acceptable probability of success/failure for your retirement plan?” often get switched for other questions, like “how intensely bad would you feel if your retirement plan failed?” While the questions are still similar, there is an important difference: if you have not clearly defined both the meaning of success and the meaning of failure, your clients may misjudge the intensity of the consequences, leading to an irrational and inappropriate decision about how much or how little “risk” to take.
The inspiration for today's blog post is a chapter from Daniel Kahneman's "Thinking, Fast and Slow" in which he explores how our brains often cope with difficult questions by – sometimes unwittingly – substituting an easier question, and answering that instead.
For instance, if someone asks us “How popular will the president be six months from now?” most of us don’t actually go through a detailed analysis of how changing trends will impact the president’s popularity down the road. Instead, our brains tend to take a shortcut, like looking at how popular the president is now and simply projecting that into the future (perhaps with some very small adjustments). Similarly, if someone asks you how happy you are with your overall life, you’re more likely to simply reflect your current mood – it’s an easier, more readily available question your brain knows how to answer. If your mood is good, you'll probably say life is pretty good; if your mood is bad, you'll answer accordingly, too.
The research shows that for some types of questions, the easier question we substitute may simply be one that reflects the current situation or environment – our present mood standing in for our overall life satisfaction, or the president’s current popularity standing in for a forecast of his future popularity. With some kinds of abstract questions, however, we have to go a step further.
Relating to Intensity
It turns out that one thing our brains are especially good at is thinking in terms of intensity, and matching intensities across otherwise unrelated scales. For example, Kahneman’s book gives the example of a statement: “Julie read fluently when she was four years old.” If a researcher were then to ask a relative question, such as “How tall is a man who is as tall as Julie was precocious?”, it turns out that most people give a fairly consistent response – we translate Julie’s precocity into a height that strikes us as equally remarkable as a four-year-old who can read.
In fact, within a consistent cultural environment – where we all tend to relate to the context similarly – we can also answer questions like “What level of income in your profession matches Julie’s reading achievement?” or “Which crime is as severe as Julie is precocious?” or “Which graduating GPA in an Ivy League college matches Julie’s reading?” The point, simply put: we’re quite good at comparing intensities on different scales and translating between them.
Combining Substitution and Intensity Questions
In some situations, our brains will attempt to answer a difficult question by combining the two: substituting an easier question, and then using intensity matching to answer it. For instance, Kahneman's book notes that when we are asked a question like “How much would you contribute to save an endangered species?” what most people really answer is the question “How much emotion do I feel when I think of dying dolphins?” We grade the answer on our emotional intensity scale, relate it to an associated dollar amount, and answer the question accordingly.
In fact, in one experiment (whose results have since been confirmed repeatedly), first conducted not long after the Exxon Valdez spill, researchers asked various groups of participants about their willingness to pay for nets to cover oil ponds in which migratory birds often drowned. The participants were asked to state how much they would pay to save either 2,000, 20,000, or 200,000 birds. Logically, saving 200,000 birds should be worth radically more than saving just 2,000 birds, but the researchers found that the average contributions of the three groups were $80, $78, and $88, respectively. The number of birds made remarkably little difference. In this case, it was because the participants first substituted the question “how much would you pay to save XXX birds?” with the easier question “how intensely do I feel about the image of a helpless bird drowning because its feathers are soaked in thick oil?” and then translated the intensity of that feeling into a dollar amount. While our brains are good at relating intensities, the substitution effect meant that the participants – without even realizing it – almost entirely disregarded the number of birds, and with it the magnitude of the project and of its success or failure.
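The scope insensitivity is even starker when translated into an implied value per bird – simple arithmetic on the average pledges quoted above:

```python
# Average pledges from the three groups in the oil-pond study,
# keyed by the number of birds the nets would save
pledges = {2_000: 80, 20_000: 78, 200_000: 88}

for birds, dollars in pledges.items():
    cents_per_bird = 100 * dollars / birds
    print(f"{birds:>7,} birds -> ${dollars:>2} pledged -> {cents_per_bird:.3f} cents per bird")
```

The implied value per bird collapses by a factor of nearly a hundred between the first and third groups – the dollar amounts track emotional intensity, not scope.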
Substitution and Intensity Questions in Financial Planning
Reading through this discussion on how our brains evaluate difficult questions was striking to me, because we often ask similarly challenging questions of clients as a part of the financial planning process. And given the research on how our brains think, it’s almost certain our clients engage in a similar process – which is somewhat concerning, as the consequences can lead to very distorted conclusions, such as the study participants who were only willing to pay 10% more to save 100 times as many birds.
For instance, imagine the situation where the client is asked to decide what probability of success is acceptable for his/her retirement plan. As the conversation often goes, the client is asked which plan is preferable: one that has an 85% probability of success, a 95% probability, or if the client would like to save more/spend less/retire later so that the plan can have a 99% probability.
In practice, our brains have little framework to really evaluate such probabilities; in the end, we don’t really know how to evaluate a retirement that has a 90% probability of succeeding. Instead, the research suggests that we probably substitute an easier question, such as “how intensely bad would you feel about running out of money in retirement?” Given how readily we translate intensities from one scale to another, the brain can easily answer this question, gauging the intensity of negative feelings about the potential adverse outcome and converting it to a 1%-100% scale. Clients who have intensely bad feelings about a potential retirement “failure” will demand higher probabilities of success, while clients who are less emotionally distressed at the thought will accept lower ones. Thus, notwithstanding the original question, clients who state a preferred probability of success are probably not actually indicating how much risk (of failure) they wish to expose themselves to, but instead how distressing that failure would be in their minds.
Framing the Consequences In Monte Carlo Analysis
The reason this substitution effect matters – where clients answer the question “how much risk would you like to take?” with the easier “how intensely bad would you feel if the adverse risky event happened?” – is that as planners, we often do a poor job of defining exactly what the risky outcome would be.
For instance, think again about the scenario where a client is asked what probability of success would be preferable for a retirement plan: 85%, 95%, or 99%. In asking this question, we generally leave it up to the client to imagine what failure would look like. Without any other information, the logical conclusion – and in fact, the one sometimes implied by the planner – is that failure means a total loss of assets. Lifestyle and enjoyment ends. The family home is sold. From now on dinner is dog food, and you can never afford to see the grandchildren again.
Yet when we look at the realities of a retirement plan and how the financial planning process is executed, this is really a gross misstatement. As discussed previously on this blog, the reality is that the probability of “failure” would more accurately be characterized as a probability of adjustment instead. It represents the odds that the client would be heading down an adverse path that, through monitoring, might require a mid-course correction to get back on track. As research by Guyton in the Journal of Financial Planning has shown, “mere” spending cuts of 10% in difficult times can be enough to get clients back on track, and in fact clients can generally make up the spending cuts – and more – once the good returns come back.
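To make the “probability of adjustment” framing concrete, here is a minimal Monte Carlo sketch. All the inputs are hypothetical illustrations (a $1M portfolio, $55k annual spending, normally distributed 6%/12% returns), and the 10% spending-cut trigger is only loosely inspired by Guyton-style decision rules – it is not anyone's actual planning methodology:

```python
import random

def run_path(returns, start, spend, cut_rule):
    """Walk one simulated retirement; return True if the money lasts the horizon.
    With cut_rule, spending is trimmed 10% in any year the balance has fallen
    below half its starting value -- a crude stand-in for a mid-course
    correction, not an actual planning rule."""
    balance = start
    for r in returns:
        balance *= 1 + r
        spending = spend * 0.9 if (cut_rule and balance < start * 0.5) else spend
        balance -= spending
        if balance <= 0:
            return False  # depleted before the end of the horizon
    return True

def success_rate(trials=5_000, years=30, start=1_000_000, spend=55_000,
                 mean=0.06, stdev=0.12, cut_rule=False):
    """Fraction of simulated paths that never run dry. Each trial reuses the
    same seeded return path with or without the adjustment rule, so the
    comparison is apples-to-apples."""
    ok = 0
    for t in range(trials):
        rng = random.Random(t)  # deterministic per-path returns
        returns = [rng.gauss(mean, stdev) for _ in range(years)]
        ok += run_path(returns, start, spend, cut_rule)
    return ok / trials

print(f"Probability of 'success' with no adjustments ever: {success_rate():.0%}")
print(f"Probability of success with a 10% cut rule:        {success_rate(cut_rule=True):.0%}")
```

The point of the sketch is not the specific numbers but the reframing: some paths counted as “failures” in the unadjusted run survive once a modest, monitored spending cut is allowed, which is why a raw “probability of failure” overstates the catastrophe.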
So imagine a world where two clients are asked a similar but different question:
Client A: “What probability of success would be preferable for your retirement plan, 85%, 95%, or 99%?”
Client B: “What probability of success would be preferable for your retirement plan, 85%, 95%, or 99%, where a ‘failure’ means you would have to go through a 1-5 year period in which your spending is cut by 10%, until the market recovers?”
In reality, both scenarios describe the same situation, at least as it would likely play out with a planner engaged in an ongoing monitoring process with a typical client (and able to intervene with a mid-course correction if the client were heading toward a danger zone). Yet because the scenarios have very different implied outcomes – client B faces “just” a potential 10% spending cut for a few years, while client A is left to his/her own imagination about how catastrophic the failure must be (given no other information) – the clients may convey very different comfort levels and risk preferences, even though it’s actually the same planning scenario, because they're expressing different intensities about what they believe are different outcomes!
Which means the bottom line is that in situations where clients are invited to make a decision about how much risk to take, it is crucial to define what a risky event or an adverse outcome really means. Is it a retirement plan that requires some mid-course corrections with moderate spending cuts, or total destitution? Is it a portfolio that could experience a 20% pullback, or a 100% total loss? If the consequences aren’t defined clearly, the client at best will simply infer whatever consequence he/she thinks would be the result, judge the intensity accordingly, and make a decision about risk taking. At worst, the client infers the wrong risky outcome, leading to an entirely inappropriate conclusion about risk taking. Because like it or not, the research – as discussed in Kahneman’s book – clearly shows that this is how our brains operate.
So what do you think? Do you always clearly define the consequences of a potential risk decision? Do you frame for clients that probabilities of success are about success versus total failure? Or success versus moderate mid-course corrections? Do you think it would change their decisions about which retirement plan they choose?