Executive Summary
Roles in the financial planning profession have always been molded in dialogue with the technology available; with each generation comes the expansion and compression of different opportunities. Today, much of the enthusiasm around artificial intelligence (AI) centers on its ability to automate non-client-facing work, theoretically enabling advisors to spend a greater percentage of their time in meetings. However, this vision quickly faces practical and human limitations.
In this 190th episode of Kitces & Carl, Michael Kitces and client communication expert Carl Richards discuss the tradeoffs between the emotionally intensive work of financial planning and the promises of AI-driven automation. For example, one of the promises of AI is that it allows advisors to have more client-facing time… but what is the upper limit of that effect? Increasing client-facing time from roughly 25–30% to significantly higher levels may not be desirable – or even sustainable – for many advisors. Beyond a certain point, more meetings do not equate to better outcomes, as the quality of advice and the advisor’s own well-being can deteriorate. In this context, optimizing for maximum efficiency risks pushing the profession toward an “assembly line” model that is poorly suited to relationship-driven, high-empathy work.
On the other hand, financial advisors have increasingly turned to AI as a potential solution to the industry’s widely discussed “talent shortage”, with the expectation that greater efficiency will allow firms to serve more clients at a lower cost. Yet this framing assumes that the core constraint is advisor capacity, rather than questioning whether the real issue lies in how advisors are trained, developed, and deployed. As firms rush to automate back- and middle-office functions, a deeper tension emerges: the same tools that promise to expand capacity may inadvertently undermine the very human capital pipeline the profession depends on.
At the same time, the push for efficiency raises critical concerns about the long-term development of talent. Entry-level roles – often responsible for the operational and analytical tasks AI seeks to replace – have traditionally served as the training ground for future advisors. Eliminating these positions in pursuit of short-term productivity gains may hollow out firms’ ability to cultivate experienced professionals over time. The result could be a paradox: firms solve for immediate capacity constraints while exacerbating future talent shortages by failing to invest in the next generation. This dynamic is already evident in hiring patterns that heavily favor experienced advisors, even as firms remain unwilling to train newcomers internally.
Compounding this challenge is the nature of AI itself, which often performs best when paired with experienced practitioners who can validate and refine its outputs. While AI can generate recommendations or streamline analysis, it is not infallible. Advisors who have developed judgment through hands-on experience are better equipped to identify errors, contextualize advice, and apply nuanced decision-making. If future advisors are trained primarily through AI-assisted workflows without building foundational expertise, their ability to critically evaluate these tools may diminish. Over time, this could lead to a degradation of professional judgment and an increased risk of systemic errors that go unnoticed and compound.
Ultimately, the key issue is not whether AI can improve efficiency – it clearly can – but determining what firms are actually trying to optimize for. If the goal is short-term productivity or preparing a practice for sale, maximizing efficiency may be rational. However, for firms seeking to build enduring businesses, a more balanced approach is required – one that leverages AI to enhance, rather than replace, human development. By using technology to support training, deepen expertise, and expand advisor capabilities (instead of eliminating formative experiences), firms can position themselves to both serve more clients and cultivate the skilled professionals needed for long-term success.
***Editor's Note: Can't get enough of Kitces & Carl? Neither can we, which is why we've released it as a podcast as well! Check it out on all the usual podcast platforms, including Apple Podcasts (iTunes), Spotify, and YouTube Music.
Show Notes
- Kitces Report: How Financial Planners Actually Do Financial Planning
- #FASuccess Ep 040: Building A True Ensemble Practice Beyond Yourself And Training G2 Successors with Philip Palaveev
- The Pine Tree Problem with Dan Yerger
- How I Hired Two Support Planners: Designing A Job Posting To Attract Talented Early-Career Financial Planners by Daniel Yerger
- Kitces & Carl Ep 169: IS There A Future Of Financial Planning In The AI Era?
Kitces & Carl Transcript
Carl: Hello, Michael Kitces.
Michael: Hello, David Carl Richards III.
Carl: Hey, somebody at, is it FPA New England? Is that what they call it? Yeah, FPA New England.
Michael: Yeah. Massachusetts is now New England, yep.
Carl: Yeah. He came up to me and greeted me there and said, "Hey, David Carl Richards III." And I was like, "Oh, thank you. That's so... I'm going to..." I want the whole world to... I actually don't, but that's really nice.
Michael: I'll say it, because if you really want...
Carl: No, no.
Michael: For everyone who's listening, you just heard Carl's latent desire express itself. He couldn't help himself.
Carl: So, Freudian slip, is what that was. Yeah, no, no.
Michael: That you need to greet him, David Carl Richards III, his full given name.
Carl: Yes. That would be amazing. That would be amazing.
The Upper Limit Of Automation And Advisor Productivity? [00:58]
Carl: What are we talking about today? I'm sure some AI-related thing.
Michael: I actually did want to do the AI.
Carl: We've been avoiding that.
Michael: I feel like we were saying that jokingly. But I actually do want to do... We haven't done, I don't know, some AI things in a little while.
Carl: Oh, remember we're supposed to say it out, artificial intelligence.
Michael: Artificial Intelligence.
Carl: To remind ourselves.
Michael: Artificial intelligence things for a while. So, now, we seem to largely be past "AI is going to replace all the financial advisors." That doesn't seem to be happening. We seem to be doing okay. Now, it's a tool to make us better and more efficient. And then we can handle more clients, and then we won't have a talent shortage because we'll finally have enough advisors to serve all the clients that we want to serve. And then it brings the prices down and we can serve more people, which we can do because we're more efficient. So, I feel like we've shifted from "AI is the threat" to "no, no, no, AI is going to solve the talent shortage by making us hyper efficient so we can all handle way more clients at a lower cost and expand the reach of advice."
So, yay. I mean, I love the mission and vision of this. Like, yes, I would love to see advice expand and reach more people. And there are very real challenges about what it costs and how that limits large swaths of the population's ability to afford it. But if you follow the logic chain of this, which is something to the effect of, AI makes us more efficient in the back office and the middle office so that we have all this additional capacity for clients and we can serve a lot more of them and do it more cost efficiently, then we get all these great outcomes of expanding the reach of advice. We solve our "talent shortage" for the industry because we can just serve so many more clients with the advisors we've got now.
And – because this is where my brain goes when I like doing these sort of futurism thought experiments, as it were – when I start falling down that path, I don't actually like all the outcomes that I start finding. There's the first-order effect: it gets simpler, we get faster, more efficient, we help more people, yay. And then it starts getting a little weird to me very quickly. I mean, it starts almost out of the gate. I mean, there are a number of tech companies now that I've seen that have some version of, ironically, quoting our productivity research: that the average advisor only spends about 25% of their time in client meetings, give or take a little. Another 50% to 60% is all the planning and investment and client servicing stuff that happens in between meetings, behind the scenes and all that. And then there's 15% or 20% that's just pure overhead: compliance, management, professional development in an advisory firm.
And they're saying 20-something percent of the advisor's time is in client meetings out of the 80% that they spend on client stuff. Wouldn't it be amazing if all 80% was client meetings, and we made all the other things disappear? I'm like, okay, I don't love my admin work either. But I also don't relish the idea of... I mean, just do the math for a moment. 80% of your time in client meetings, we're talking six-plus hours a day, five days a week of client meetings. Maybe this is me and my still slightly introverted nature, as I come to clients and small groups one at a time. But spending 80% of my time on clients because technology made all the other parts of my job disappear does not actually sound very happy. It's kind of a weird dystopia.
Maybe this is me, but I don't know a lot of people who relish the idea of six hours a day, every day in client meetings. And not the light, fuzzy client meetings, because we're doing all the client things in those meetings. So, there's hard conversations. There's a lot of emotional investment. Not every client discussion is happy or positive. Sometimes we have to break hard news or help them transition through difficult things. I mean, I'm sort of struggling with why this is good. This feels very weirdly dystopian to me if we actually move towards this outcome.
Carl: I've got a couple of stories I wanted to share with you, but I'm really struck by the dystopian word. You know, I was just thinking about the incredible stories you hear about the Amazon factories. There was some really interesting...
Michael: Amazon.com.
Carl: Yeah, the fulfillment centers. Like, 18 sensors on every body. So, that came to mind: really high-end factory work that's optimized for productivity. Because we know the advisors who make the most, the most productive ones, are the ones that do more client-facing activities, so we can get everything else out. That's fascinating.
Michael: Yes. And we have data on that.
Carl: Yeah, it's true.
Michael: But the most productive advisors that do all the client meetings, it's because they spend 32% of their time in client meetings instead of 25%.
Carl: Well, then why shouldn't we make it 80%?
Michael: The question comes like, what if we only spend a quarter of our time in client meetings, not because we're inefficient beasts who can't figure out how to better use technology? What if we only spend 25% or 30% of our time in client meetings because more than that becomes exhausting? And if that's my capacity before I'm emotionally drained after all my client meetings, then I take the rest of my time and I find productive things to do to support my clients in between meetings because I really don't want that many more meetings on my calendar. I mean, depending on how long your meetings are, 30%-plus of your time in meetings, you're probably coming up on 15-plus meetings a week.
Carl: Gosh, that's a lot, yeah.
Michael: And again, I mean, I feel like some of us, if you're doing a high volume of transactional business, you can do a lot of meetings. I did a version of that very early in my career. But the deeper financial planning stuff, that's a lot of meetings to do indefinitely. And granted, that's just the level that productive advisors are at today without all the fancy AI things we're talking about that's supposed to double and triple the number of client meetings.
Carl: The idea of taking kind of assembly line efficiency principles and applying it to cognitive knowledge work, and on top of cognitive, emotional intelligence work, I think there's some boundary condition there that's being violated. I remember this. This has come up a bunch where people are like... I've had people tell me that when I say I just got done snowblowing the yard or I just got done doing some landscaping, people are like, "Why wouldn't you hire the landscaper for $X because you could make $X." I'm like, "Because I can't do any more of the work I'm doing. I have to do the landscaping. Because if I don't go do the landscaping, if I don't touch the dirt, I won't be able to do the cognitive, emotionally-demanding work tomorrow." And it doesn't...
Michael: What if I'm not inefficient, my brain just really needed a break.
Carl: Yeah. And it turns out that, at least, my brain is not wired to feel the same way you do about client meetings. So, it's super... And I think, to me, the mental model that's helpful is to say, "Okay, interesting. First, should we optimize every little thing for efficiency? Second, does the kind of factory line, assembly line time and effort, do those models work when you're talking about emotionally impactful presence work? Let alone the financial planning you're doing behind the scenes." I think that's an interesting way to view this problem.
How Automation May Impact (Or Diminish) Entry-Level Jobs [10:40]
Michael: And then the next challenge I have with it, just to kind of pull on this thread for a moment. In the meantime, I'm also supposed to be getting more efficient, because by eliminating all these back- and middle-office tasks, I free up the time and have better margins and bring down the costs, so I can expand the access, all the ideally good things...
Carl: Yeah, good things.
Michael: I gutted all the entry-level jobs that train everybody to eventually get to the end state job where you're an experienced advisor.
Carl: That one's a problem.
Michael: So, are we trying to get a productivity boom from AI to solve our talent shortage that will then drastically exacerbate our talent shortage, because we gutted our own talent pipelines in the name of short-term efficiency? I can tell a story that the most successful advisory firm 20 years from now will be the one that completely eschews all that AI stuff, because they'll have humans who spend the next 20 years learning how to be good advisors, and everybody else will spend that time not developing any advisors.
And then 20 years from now or 10 years from now go, "Wow, our advisors seem really experienced and they're great, but they're retiring soon. And we have no young people in this firm and no young people in the talent pipeline. And we don't even know how to develop talent because we spent the past ten years gutting every entry-level position in our firm. And now, we don't even have any systems or tools or infrastructure to train anybody because we spent all the time trying to get factory-level efficiency out of the AI, and kind of forgot we're a people business that only survives by training and developing new people."
Carl: Yeah. This is super interesting to me. So, let me... Just two quick stories. One, the analogy I heard recently, which I thought was really, really insightful was, the printing press at the newspaper, the local newspaper here, or in your small town...not your small town, but the local paper, right? They've installed the printing press. There was a guy there that helped install it. He was the maintenance guy for 25 years. He's now retired. Anything goes wrong with that press, you go over to his house with a 12 pack of beer and a pizza, and you con him into coming over and fixing it. And then the day he passes on, nobody knows how this thing works. Like, as a metaphor, I think that's really interesting. And then the other one that was fascinating is...
Michael: A bunch of actual COBOL code.
Carl: Yeah. A really good friend of mine is a professor of physics, and he's tenured at a university in Germany and a university here at the same time. He spends a week here and a week in Germany. And he's been doing this for years. He's amazing. One of the most amazing, smart people I've ever seen. And he was telling me this story. He said, "There are going to be no more people like me. People who know how to solve the problems I'm solving will not exist in five years." He was telling me the story the week that this happened. He's like, "I just met with the eight PhD students that I advise." And they were reviewing a paper. One of them had written a paper.
And the student was flipping through the paper. And there's the eight of them, my friend, and two other PhD advisors on the call. So, there were something like 11 people. And the student's flipping through the paper. He's presenting it, and he presents a table, and everybody's like, "Yeah, yeah, yeah," and he moves on. And my friend was like, something was wrong with that table. He says, "Hey, go back." And they dug into it. And the student had gotten that research from his AI research tool, which everybody's allowed to use. It was fine, but it was wrong. And my friend was like, "Did no one else see that? Because if this paper now gets published, it now becomes a reference document." And one more layer on top of that, and now, we've built kind of a formal...
Michael: Once there's a paper that references the paper that references the paper that was wrong, you're never untangling this.
Carl: Yeah. It's just like, there's not going to be anybody to untangle the thing that was wrong, and it's now going to become a piece of accepted knowledge, but it's wrong. And he's like, "We're going to..." And I think about that problem in our industry all the time. Like, how is somebody going to know? Do you know the correlations, standard deviations, and returns of each asset class that went into your Monte Carlo tool? Do we know those sorts of things? Do we know how to build an amortization table with a calculator? And I'm not saying that's important. I'm just saying it's interesting if we don't know those things.
Michael: Well, I mean, to me, there's a really interesting dynamic. I truly struggle with... I don't know how this plays out. So much of AI use right now is some version of, it's pretty good, but it's not perfect. And experienced people can use it and get some good leverage from it, because they can still spot where it's wrong or off, or where you need to step in and intervene. Whether it's the coders who get it mostly there but can fix the harder problems that the AI can't solve, or the advisor who can queue up suggestions for planning recommendations and strategies but can still vet which ones really don't fit this particular client situation because, because, because. That works because we were all trained in a non-AI world and now get to use the tools.
Carl: I know. Super interesting.
Michael: So, if we don't train people because now we're replacing all those tasks with AI, and then future people come in having never learned anything but that way, does the next generation even know how to use the tool as well, as effectively, as cognizant of what it does well and where the limits are?
Training Discernment In A World Of Artificial Intelligence [17:14]
Carl: Can I take the counter on this argument for just a minute?
Michael: Sure.
Carl: Just because I'm interested. This came up the other day. We point to an AI tool that has been trained well on all the data, all the knowledge, a million case studies, whatever, and we say it makes mistakes, and we say, therefore, it's a problem. The one thing we don't acknowledge, even though we have examples over and over and over, is that human advisors make mistakes, too.
Michael: Correct.
Carl: You know, so how much of this is...? This is just a very interesting thing to add to the dilemma. I interviewed somebody recently. I won't mention who it is. I'm trying to be very careful here. She trains therapists at a really prominent institution. She's been doing it for 20 years. And she said to me offline, not off the record, by the way, or else I wouldn't even be mentioning this, but she said to me offline, "The AI tool I've trained for therapy is better than 95% of the therapists I've trained." It's still not foolproof. It's just better. Because neither are the humans. We're not... It's kind of the argument for cars, too. Yeah, of course, Waymos get in accidents every once in a while. They just get in fewer.
Michael: Like, radically fewer. I mean, radically fewer.
Carl: Yeah. How much...?
Michael: We're learning now how bad humans are actually at driving.
Carl: I don't want to completely derail us, but in light of this idea of efficiencies replacing, or hollowing out, the apprenticeship, whatever, even though that term is, well, I mean, not often used very formally around here. But in light of all that, how do you think about the...? Because I was sort of wanting to avoid the two old guys saying, "Stay off my lawn." How do you think about that? Not that you're old, by the way, not that you're old.
Michael: Sorry. The gray is definitely filling in. I'm accepting it. The question that springs to mind for me is to ask, what really are we trying to solve for? So, for instance, the thesis that AI will automate all the middle- or back-office tasks for advisors so we can spend more time meeting with our clients presumes that the problem to solve was, how do I finally get to the point where I can have six client meetings a day? And maybe that wasn't actually something I wanted to solve for as an advisor. Nominally, we say we don't have enough talent coming into the industry, so we need AI to expand advisor capacity so we can serve more people, because we don't have enough advisors coming in to serve them. Or maybe the problem is that we don't actually have a talent shortage; we have a training shortage.
Not to belittle my beloved profession, but you sit on your backside all day, giving people advice that mostly, albeit not always, they're appreciative of, and make six figures in a world where a lot of jobs are highly threatened, and we have 90-something percent retention rates. It's a pretty sweet, remarkably technology-immune, highly remunerative, psychically rewarding, community-engaging job. It's pretty fricking amazing. And basically, every firm I know complains about the fact that they can't find any experienced advisors to fill in, because there's so much demand and we have a talent shortage. And as Philip Palaveev tells a version of this joke, everybody I know has spent five years complaining about the lack of any advisors with three years of training.
Carl: It's fascinating.
Michael: And if you just started training them five years ago, you'd have them by now, right? That's the punchline. We've spent five years complaining about the shortage of advisors with three years of experience. Daniel Yerger says our industry's hiring looks like a Christmas tree. Imagine a chart of how many years of experience advisors have and what the demand is. Nobody wants to hire the first three years. After three years in, it blossoms into a beautiful, wide Christmas tree. Everybody wants to hire that.
Carl: And it's terrible.
Michael: But I mean, that really is what's playing out in a lot of firms.
Carl: Yeah, I know. It's very helpful.
Michael: And to me, what that says is we have a training shortage. And so, our solve for a training shortage is let's create technology tools that eliminate the training opportunities. Which is why, right, so what are we trying to solve for? If I'm really trying to solve for a talent shortage and a training shortage, maybe I don't actually need AI tools that eliminate a bunch of the entry-level jobs. Maybe I need AI tools that augment the entry-level jobs.
Carl: What's that crazy calculator you used?
Michael: HP 12C.
Carl: Yeah. Maybe we need more HP 12C.
Michael: Come at me 10B people, 12C, 12C.
Carl: Maybe we need more 12Cs in the world.
Michael: It's...
Carl: As a metaphor. It's fine. It's a metaphor.
Michael: You can go with it literally. I actually don't know if CFP classes are still teaching reverse Polish notation calculators.
Carl: But it's a good point.
Michael: But, I mean, it's just the truth. What are we actually trying to solve for? I mean, I say it a little bit tongue in cheek, but I'm not kidding to raise the question: what if the most successful firm 20 years from now is the one that eschews, maybe not all, but large swaths of the AI? Because everyone else will have ten years of great AI efficiency, and then an even more egregious talent shortage. And the people who build systems to train and develop next-generation talent will just have a steady flow of talent. But then eventually, they grow to the point that you teach them to use AI to make things more efficient after they, I'll say, learn the hard way, right?
I'm trying not to do the "and you too must go uphill in the snow both ways to school," but learning only comes from doing the work, right? Unless and until we get to the AI version where you just plug in, Matrix-style, to your brainstem and literally download how to fly a helicopter, the brain only learns by doing the work and putting the reps in. And there's no evidence that the technology is changing that in any material way. I mean, we can maybe get new and different ways to engage with learning, but the brain still has to learn through repetition and exercise, doing it right, doing it wrong, feeling the pain of doing it wrong, learning not to do it that way in the future. It's why we, as parents, spend a lot of time telling children not to touch the stove, because we've learned our lesson. And at some point, they still have to touch the stove and learn their lesson.
Carl: It's actually...
Michael: It's just part of human learning.
Do Firms Have A Talent Problem... Or A Training Problem? [25:50]
Carl: I think it's a really interesting thought experiment just to wrap up on. It's so interesting to just hold... And again, the value of thought experiments is they're just thought experiments. Like, you don't need to hold this thing too seriously or too literally. But imagine there were two paths. One was massive efficiency gains right now: J-curve financial advice, train a bunch of agents to do a bunch of stuff. That venture capital firm, I think I've mentioned on here... I know a venture capital firm that raised a new round and fired their 45 analysts. So, it's just the two partners and a prompt engineer, and they're doing better work.
Michael: Well, and I hear that, I'm like, or no...
Carl: I don't know who they are. I don't want to get in trouble.
Michael: Or were they just really badly managing people?
Carl: Or were they super greedy, or were they...? Who knows? But I'm just saying...
Michael: I'm not going to go down the greedy road.
Carl: Let's just hold this.
Michael: If there were that many people operating that ineffectively, maybe they just weren't being managed well.
Carl: That's fine. As an example, let's pretend there are two paths. You have one firm where you're highly efficient, and ten years from now, you have no beginning, middle, or senior advisors to take over. And you have another firm that says, you know what? Just for fun, 12Cs, handwritten stuff. I don't know how far you take the metaphor, but we're going to find AI efficiencies...
Michael: I mean, there's probably some more modernized version of how you teach the tech. There are companies grappling with that.
Carl: Of course.
Michael: Lydia, Shaping Wealth, Wealthstream: how do we teach advisors more effectively?
Carl: Sure. And we use AI tools for things that aren't the technical tools of advisory, honestly, just to free up more client-facing time, like we talked about. But now we've got this crop of people. We weren't the most efficient firm on the planet, but we've got this crop of advisors who are really, really well-trained, and everybody else is trying to hire them. It's such a fascinating... What are you trying to solve for? And could you wake up ten years from now and go, "Oh, wow, we've hollowed this whole thing out"? Or...
Michael: And then the differentiated, successful firms are the ones that didn't hollow it out.
Carl: That's right. That's a fascinating...
Michael: And again, it depends on your time horizon. I mean, your example is good, right? So, if my retirement is within five to ten years from now, I can amp up the... You're not solving for a talent shortage. Per our episode a couple of weeks ago, advisory firms are disturbingly liquid. So, great, use the tech toys, get your incremental efficiencies, maybe get a short-term margin pop. You're not solving for, who is your talent 20 years from now? You're solving for, how do I get effective income out of my business before I sell it three, five, seven, or maybe ten years from now? Cool, do your thing.
Carl: What are you solving for?
Michael: There's nothing wrong with it, truly. I'm not here to judge or be negative about it. But if you have a longer time horizon, if you're looking at this asking, what am I doing to build a sustaining, enduring business? Just these questions, like, did anybody really want to solve for, how do I get six hours of client meetings a day? I know someone out there... I have a few friends who actually love climbing all day, every day. So, if that's you, you be you. I'm so happy for you. But most of us do not really want to do six hours of climbing five days a week. It gets tiring. So, have we accidentally been solving for the wrong thing when we say...
Carl: Love that, yeah.
Michael: ...I'm solving for the talent shortage by eliminating the entry-level jobs that actually fill the talent pipeline in the future? I'm solving for six hours a day of meetings, but did I really actually want six hours a day of meetings? What are we actually trying to solve for? Did I want tech so I can do more hyper-nerdy analyses and go even deeper for my clients, and still only do 10 to 12 meetings a week and not 30?
Carl: Love it. That's super good, super good. Thanks, Michael.
Michael: Awesome. Thank you, Carl. Cheers.