Executive Summary
Financial advice and technology have grown alongside each other – as technology has evolved and expanded, so too have advisory firm services. And as trading, portfolio rebalancing, and financial plan construction have become more efficient, advisors have increasingly focused on holistic advice, financial psychology, and behavioral coaching to complement their technical expertise. Now, with the rapid growth of AI across the financial services industry, questions about whether technology might one day replace human advisors are taking on renewed importance.
In this article, Derek Tharp, Lead Researcher at Kitces.com and Associate Professor of Finance at the University of Southern Maine, explores how advisors can use AI as a tool to augment – not replace – their advice and client relationships. While AI excels at tasks like prediction and computation, it struggles to understand human values, especially since people often don't have a clear or consistent understanding of their own. A client's reluctance to invest in equities or delay Social Security, for instance, may stem from deep-seated beliefs, personal preferences, or a mix of both. And as advisors know, financial advice often goes beyond identifying the 'correct' or 'most optimal' answer – it's also about connection, reassurance, and helping clients uncover their true priorities. While AI can mimic language patterns and simulate empathy, it lacks the lived human perspective necessary for genuine emotional resonance – which can limit the trust clients are willing to place in it, especially when navigating sensitive or complex life decisions.
That said, AI brings real strengths – particularly in data analysis, operational consistency, and computational efficiency. Tools powered by Large Language Models (LLMs) can summarize meeting notes, automate routine communications, and eventually synthesize insights across disparate systems. Advisors adopting these tools will need to ensure that the AI's output is current and that the technology integrates well with their workflows. But when used thoughtfully, AI can help advisors be more present, especially in client meetings!
This emerging hybrid model of the 'cyborg advisor' blends the technical precision of machines with the emotional intelligence and nuanced judgment of human professionals. Looking ahead, AI integration may center around CRMs, financial planning software, document management systems, or middleware AI assistants. Each approach has advantages and tradeoffs. Financial planning software may be particularly well-positioned to become the central hub, with its ability to generate accurate projections and precisely modeled planning scenarios. But CRMs, with their centralized data and integration potential, also have a strong case – especially if they expand their planning functionality. Whatever the tool, these systems can help advisors work smarter, provide more value to clients, and enhance the planning process – all without compromising the integrity of human judgment.
Ultimately, financial planning is rooted in trust, empathy, and the ability to navigate personal values – with a foundation of technical knowledge. As advice continues to evolve, advisors can combine their human strengths with the efficiency of AI to create better client outcomes. Those who embrace this hybrid approach – using AI to enhance, rather than replace, their capabilities – will be well-positioned to thrive in the changing landscape of financial advice!
The Overly Narrow "Man Vs Machine" Framing
The growth in AI-oriented tools in the past few years has been considerable. As is often the case with the advancement of technology, this has raised questions about whether human financial advisors are at risk of being replaced by AI. However, this perspective presents a false dichotomy between human advisors and AI systems – one that fundamentally misunderstands both the nature of financial planning and the complementary strengths of both humans and machines.
As has been argued previously, it's unlikely that technology will ever truly replace human advisors; instead, it will merely augment them, resulting in 'cyborg advisors' that combine the best of man and machine.
As with technological advancements of the past (like robo-advisors), the optimal approach to financial planning is neither exclusively human nor exclusively algorithmic, but a thoughtful integration of both. To understand why, it is helpful to look at the underlying mechanics of how humans make financial decisions and the inherent limitations that AI systems face in fully replicating the advisor-client relationship.
At the core of this issue is what we might refer to as a 'reward function problem' or 'beliefs-values entanglement problem'. In simplified terms, a reward function represents an individual's values (e.g., what they care about, what they're trying to optimize for, and how they make tradeoffs between competing priorities). Financial decisions are ultimately the product of both beliefs (predictions about how the world works) and values (what outcomes matter to us). And this gives rise to the fundamental problem: While AI systems are highly effective at prediction tasks, they face inherent limitations in understanding human reward functions.
In a discussion centering on the entanglement of beliefs and values, cognitive scientist Jeff Beck notes:
It's really difficult to figure out what your reward function actually is. A lot of work in behavioral economics has gone into trying to figure this out... but it turned out just measuring someone's reward function, which was a prerequisite for getting an AI algorithm that shares our values, is really impossible because you cannot disentangle belief and reward based on policy.
This entanglement between beliefs and values has profound implications for the future role of financial advisors. When two individuals disagree about a financial decision, the disagreement could stem from different beliefs about how the world works, different underlying values, or both. Understanding which is which requires a sophisticated dialogue process – one at which skilled human advisors excel, and where AI systems face substantial limitations. Consider an investor who is extremely hesitant to invest in stocks – this could be based on a prediction/belief about the current state of the world (e.g., stocks are about to decline), personal values (e.g., highly prioritizing the minimization of potential losses), or, perhaps most likely, an even more complex combination of both – plus a number of other factors.
The value proposition of financial advisors has always extended far beyond mere technical knowledge or prediction capabilities. Many have argued that an advisor's greatest value often lies in helping clients navigate the psychological and emotional aspects of financial decision-making – serving as an 'accountability partner' who helps clients clarify their values, understand their beliefs, and implement positive behavior change.
For instance, related to the value of behavioral coaching around portfolio management alone (i.e., a narrower view of behavioral coaching focused on the portfolio rather than broadened to include major purchase decisions, debt management, career decisions, etc.), frameworks such as Morningstar's Advisor Gamma and Vanguard's Advisor Alpha have found that behavioral coaching could add as much as 150 to 200 bps of value.
Of course, this doesn't mean that AI has no role in financial planning. AI systems are clearly superior in a number of different domains, such as complex calculations, pattern recognition, and processing vast quantities of data. 'Cyborg advisors' will ultimately make use of both by thoughtfully considering where human and machine strengths lie.
Human Reward Functions
In artificial intelligence research, particularly in reinforcement learning, a reward function is the mathematical representation of an agent's goals or values – essentially, what the agent is trying to optimize for. In human terms, our reward functions represent our values, preferences, and the outcomes we consider desirable.
For instance, a hypothetical early-career professional might have a (simplified) reward function that looks something like the following:
Hypothetical Early-Career Professional's Reward Function: Building an emergency fund (very high value), paying off high-interest debt (high value), saving for retirement (high value), saving for a medium-term goal like a house down payment (medium value), affording their current lifestyle (medium value), and avoiding financial stress (high value, since stress itself carries a strong negative weight).
As this individual decides how to allocate their funds (e.g., a bonus), their decisions will reflect how they balance the values represented in their reward function – for instance, weighing the immediate relief of debt reduction against progress on other savings goals and present enjoyment.
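To make this concrete, here's a toy sketch of how such a reward function might drive an allocation decision. The weights and the diminishing-returns rule below are invented purely for illustration – they aren't a real planning methodology:

```python
# A toy model of the hypothetical professional's reward function: allocate a
# bonus across goals by weighted 'value per dollar'. The weights and the
# diminishing-returns rule are invented for illustration only.

import math

# Subjective weights loosely mirroring the reward function described above
weights = {
    "emergency_fund": 0.30,
    "high_interest_debt": 0.25,
    "retirement_savings": 0.20,
    "house_down_payment": 0.10,
    "current_lifestyle": 0.15,
}

def allocate_bonus(bonus: float, weights: dict[str, float], step: float = 100.0) -> dict[str, float]:
    """Greedily allocate dollars to whichever goal adds the most weighted value,
    with log-style diminishing returns as a goal accumulates money."""
    allocation = {goal: 0.0 for goal in weights}
    remaining = bonus
    while remaining >= step:
        # Marginal value of the next $step toward each goal
        best = max(weights, key=lambda g: weights[g] / math.log(allocation[g] + math.e))
        allocation[best] += step
        remaining -= step
    return allocation

print(allocate_bonus(5_000.0, weights))
```

Even this toy version exposes the problem: the weights are guesses, and real people couldn't write them down this cleanly in the first place.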
The challenge of understanding human reward functions presents a fundamental limitation for AI systems in trying to replicate the role of a financial advisor. One of the primary difficulties is that people often can't articulate their own values with precision and consistency. For instance, a person's expressed reward function might be:
- Incomplete (focusing on only a subset of what one truly values);
- Inconsistent (expressing preferences that contradict one another);
- Context-dependent (shifting around based on one's situation or recent experiences); or
- Evolving (changing over time as circumstances and perspectives shift).
Exploring an open-ended challenge like understanding another human's reward function is the type of problem-solving where humans actually have an advantage over computers. Being human ourselves gives us an embodied perspective that computers don't have, and we often pick up on subtle nuances that can suggest deeper meaning. For instance, an advisor might notice a client shift in their seat or break eye contact at a particular moment; these could be clues that signify something important worth exploring. Humans are more efficient both in processing these multi-modal stimuli and in using our intuition to steer the discussion in the most useful directions.
Moreover, this difficulty in articulating values is not just a problem of communication – it reflects a deeper cognitive reality that humans themselves rarely have clear, conscious access to their own reward functions. As Beck observes, "A lot of work in behavioral economics has gone into trying to figure this out... but it turned out just measuring someone's reward function... is really impossible."
Try writing down your own reward function. Most people can't come close to producing something that coherently reflects their very nuanced values. Even something as seemingly simple as food preferences can quickly become complex. Maybe John loves Italian food in general and will never turn down a good lasagna, carbonara, or Bolognese, but for whatever reason, he dislikes Neapolitan-style pizza. John's food preferences likely reflect such a complicated web of values, experiences, and associations that it would be practically impossible to extract a fully articulated value function.
In an advisory context, one of a financial advisor's core roles is to always be on the lookout for misalignment as it relates to a client's values — whether this is because they've stumbled upon a nuanced exception (e.g., doesn't like Neapolitan-style pizza) or just because a client's values or life circumstances have changed over time.
The Entanglement Of Beliefs And Values
Perhaps the most significant challenge in understanding human reward functions stems from the inextricable entanglement between beliefs and values. In AI and related research, a 'policy' refers to how an agent maps the situations it faces to the decisions it makes. Ultimately, these policies are a function of both beliefs about how the world works (predictions) and values about what outcomes matter (reward functions).
Jeff Beck argues, "If you don't know someone's beliefs, you cannot infer their values. If you don't know their values, it's also very difficult to disentangle their beliefs."
Consider a client who is reluctant to delay claiming Social Security. As the brief sketch after this list illustrates, this reluctance could stem from:
- A belief that the Social Security system won't pay out what was promised to them (a prediction about the world)
- A value that places greater importance on mitigating sequence-of-returns risk by drawing less heavily from a portfolio early in retirement (a reward function)
- Some complex combination of both beliefs and values
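A small numerical sketch can make the entanglement concrete: two clients with very different beliefs and values can arrive at the exact same decision, making it impossible to infer either from the choice alone. The expected-utility numbers below are invented purely for illustration:

```python
# Two hypothetical clients who make the SAME choice (claim Social Security
# early) for entirely different reasons - one driven by beliefs, one by
# values. The expected-utility model here is invented for illustration.

def prefers_claiming_early(p_benefit_cut: float, loss_aversion: float) -> bool:
    """Compare a stylized expected utility of claiming early vs. delaying.

    p_benefit_cut: belief - subjective probability that future benefits get cut
    loss_aversion: value - how heavily early-retirement portfolio draws weigh
    """
    utility_early = 100.0                      # locked-in, smaller benefit
    utility_delay = 130.0 * (1 - p_benefit_cut) - 20.0 * loss_aversion
    return utility_early > utility_delay

# Client A: pessimistic belief about the system, average values
print(prefers_claiming_early(p_benefit_cut=0.40, loss_aversion=1.0))  # True

# Client B: trusts the system, but strongly loss-averse values
print(prefers_claiming_early(p_benefit_cut=0.05, loss_aversion=2.0))  # True

# Client C: trusts the system AND has average loss aversion - delays
print(prefers_claiming_early(p_benefit_cut=0.05, loss_aversion=1.0))  # False
```

Observing only the decision (the 'policy'), an AI can't tell Client A from Client B – which is exactly the disentangling work a skilled advisor's dialogue does.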
Beck also provides an interesting insight into how this entanglement manifests in disagreements:
When two people are arguing... They disagree about what action they're going to take. And if you don't want to resolve the argument, you of course instantly draw one of two possible conclusions. The person that I'm arguing with is either evil or stupid. They're stupid because their prediction, their beliefs are wrong... or they're evil because their reward function is not the same as mine.
In financial planning contexts, advisor-client disagreements about the best course of action are common. Productive relationships require a sophisticated process for untangling these factors, which could involve:
- Exploratory dialogue to understand the client's stated beliefs and values
- Skillful questioning to uncover unstated or unconscious beliefs and values
- Careful observation of non-verbal cues and emotional responses
- Expert guidance for correcting factual misunderstandings
- Building trust through human connection and empathy
Building trust may be a particularly important advantage that humans have over computers. While AI systems arguably have advantages here in some ways (e.g., if an individual can trust that their dialogue with a computer is truly confidential, they may be more willing to share details that they are hesitant to share with other humans), only a human can build trust through embodied presence and empathetic connection.
Adam Smith, in "The Theory of Moral Sentiments", posits that our moral judgments and social bonds are deeply rooted in sympathy (or today what we might call empathy) – our capacity to enter into the feelings of others. This ability to achieve a "correspondence of sentiments" is fundamental to building trust. Human advisors, who can genuinely interact and demonstrate empathy, activate this trust-building mechanism in a way that computers simply cannot. When a client is meeting with an advisor, they perceive not only the advisor's professional competence but also shared humanity with the person sitting across the table from them. Importantly, this perceived mutual 'sympathy' (or empathy) provides a foundation for a client's trust that an advisor is acting with benevolence, fulfilling their fiduciary duty not just as a legal or contractual obligation, but as an expression of a shared human ethic. It reflects a familiar feeling of treating others as we would want to be treated, and the expectation of reciprocity that naturally arises in such relationships.
Conversely, while an AI system can be programmed to follow rules meticulously and optimize financial outcomes, it lacks the capacity for genuine Smithian sympathy. It cannot feel the client's anxieties or aspirations, nor can it participate in the reciprocal emotional exchange that, according to Smith, underpins our deepest trust in others. A client may trust an AI's calculations or its adherence to algorithms, but this is a different form of trust—one based on system reliability rather than the felt sense of mutual understanding and goodwill. As Smith noted regarding the power of shared feeling:
But nothing pleases us more than to observe in other men a fellow-feeling with all the emotions of our own breast; nor are we ever so much shocked as by the appearance of the contrary.
— Adam Smith, The Theory of Moral Sentiments, Part I, Section I, Chapter II
This deep-seated human desire for "fellow-feeling" suggests that the trust placed in a human advisor – capable of demonstrating genuine empathy and engaging our innate sense of reciprocity – may be qualitatively different and potentially more resilient than the trust placed in an AI, positioning human advisors uniquely in their fiduciary role.
This is also highly relevant to the AI discussions around embodiment — the idea that having a human form gives a truly unique perspective that AI can try to replicate, but can never truly feel. AI models are already effective at mimicking human expressions of feelings related to love, death, and other complex human topics, but an AI model cannot truly feel the anxieties about mortality the way a human can. And because of that, humans may struggle to emotionally connect with an AI, sensing the absence of genuine emotion in the conversation and responding with distrust or detachment.

Nerd Note:
While it's easy to dismiss the computational power of the human brain in a "man versus machine" comparison, researchers estimate that we gather about 11 million bits per second of sensory data from our environment (about 10 million of which come from vision alone).
Our brains filter, compress, and pattern-match this information in real time before it even reaches our conscious awareness. Today's AI systems would still need racks of GPUs to approximate that level of processing. Remarkably, our brains pull this off using only about 20 watts of power – about the same as a dim lightbulb. And yet, the demands of communication consume 35 times more energy than computation in the human cortex – reinforcing just how non-trivial human communication really is.
The Strengths Of AI
In a cyborg advisor model, the fundamental idea is that humans and machines each focus on what they do best. For AI systems, this means leveraging capabilities in areas that computers excel at — such as data analysis and pattern recognition, operational consistency, and computational efficiency.
Data Analysis And Pattern Recognition
AI excels at analyzing vast quantities of data to identify patterns that might escape human detection. In financial planning, this capability is already being applied to client note-taking and synthesizing vast amounts of information. Not only are AI note-taking tools powerful for capturing important details from a client meeting, but they also highlight the synergy between human advisors and AI tools, allowing humans to be even 'more human' in how they engage with clients. Being more present and less distracted by needing to take notes can actually help advisors better engage in the areas they excel at.
Today, well-prompted AI systems are quite effective at discrete tasks, such as summarizing individual meetings and drawing out key insights – sometimes even drafting meeting notes and email summaries to further save advisors time. However, these tools tend to be less effective when it comes to synthesizing more voluminous and diverse sources of data. For instance, reviewing a client's entire set of account statements, planning software output, personal data, email records, and other financial data to draw useful insights remains a challenge.
Notably, however, this limitation is less about what AI is capable of and more about how current tools are structured. Most advisors have key data spread out between multiple systems – their CRM platform, planning software, custodian/portfolio management software, email system, file storage, and possibly even a dedicated AI tool.
There are a couple of different challenges in feeding all of this information into a large language model (LLM) to be processed into meaningful output. First, LLMs have 'context windows' – the maximum amount of text a model can process at one time, measured in 'tokens' (with one English word averaging roughly 1.3–1.5 tokens). For instance, if an LLM has a context window of 8,000 tokens, then the data fed into the model and the output it produces must together fit within 8,000 tokens. Leading models today can often accept somewhere between 128k tokens (e.g., OpenAI's GPT-4o) and one million tokens (e.g., Gemini 2.5 Pro).
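As a rough back-of-the-envelope sketch (using the ~1.3 tokens-per-word rule of thumb noted above, with illustrative window sizes rather than exact vendor specifications), the arithmetic looks like this:

```python
# A minimal sketch of estimating whether client data fits in a context window.
# The 1.3 tokens-per-word ratio is a rough rule of thumb for English text,
# and the window sizes below are illustrative, not vendor specifications.

TOKENS_PER_WORD = 1.3  # rough average for English prose

def estimate_tokens(text: str) -> int:
    """Approximate the token count of a block of text from its word count."""
    return round(len(text.split()) * TOKENS_PER_WORD)

def fits_in_window(documents: list[str], context_window: int, reserved_for_output: int = 2_000) -> bool:
    """Check whether all documents plus room for the model's reply fit in the window."""
    input_tokens = sum(estimate_tokens(doc) for doc in documents)
    return input_tokens + reserved_for_output <= context_window

# Example: ~40 pages of client notes at ~500 words/page (~26,000 tokens)
notes = ["word " * 500] * 40
print(fits_in_window(notes, context_window=8_000))    # False - overflows a small window
print(fits_in_window(notes, context_window=128_000))  # True - fits a modern window
```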
But a key point here is that feeding too much context into an LLM can degrade – and even overwhelm – its ability to provide useful output. Moreover, current LLMs have been found to suffer significant performance degradation as their context windows fill up. While this probably wouldn't be an issue even when loading all of the information for an individual client into an LLM, it does mean that current 'off-the-shelf' LLMs would break down in functionality if someone were to try feeding all of a firm's data into them.
Instead, methods such as Retrieval-Augmented Generation (RAG) – in which an AI tool first identifies and retrieves relevant information to then be fed into the model – provide greater feasibility for scaling across an entire firm. Alternatively, a user interface that simply helps point an LLM to the relevant context could be a viable option (e.g., a traditional AI chatbot interface that could have a client drop-down that points to relevant context for the "Smith Household"), but this approach assumes the client data are structured in an easily digestible format for an LLM in the first place.
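To make the retrieve-then-generate loop concrete, here's a minimal sketch of the RAG pattern. The embed, vector_store.search, and llm_complete helpers are hypothetical placeholders for whatever embedding model, vector database, and LLM API a real tool would use:

```python
# A minimal sketch of Retrieval-Augmented Generation (RAG).
# embed(), vector_store.search(), and llm_complete() are hypothetical
# placeholders for a real embedding model, vector database, and LLM API.

def answer_with_rag(question: str, vector_store, llm_complete, embed, top_k: int = 5) -> str:
    # 1. Embed the advisor's question into the same vector space as the firm's documents.
    query_vector = embed(question)

    # 2. Retrieve only the most relevant snippets (meeting notes, statements, emails)
    #    instead of stuffing the entire firm's data into the context window.
    snippets = vector_store.search(query_vector, top_k=top_k)

    # 3. Feed just those snippets, plus the question, to the LLM.
    context = "\n\n".join(s.text for s in snippets)
    prompt = (
        "Answer the advisor's question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```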
A secondary issue that will need to be addressed is data structuring to optimize use within AI models. While plain-text natural language can work well for LLMs (and has an advantage of being easily readable by both humans and machines), more structured formats such as markdown and JSON are even better for helping LLMs fully understand context and provide useful insights from data.
For instance, imagine an advisor has a bunch of individual plain-text client notes within a client file. A structured format like JSON would allow for better flagging of metadata – such as meeting date, purpose, and topics discussed – helping the LLM better understand what information is relevant. This could reduce the risk of errors, such as introducing information that is confusing or contradictory (e.g., the client is married in one note and unmarried in another).
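As a hypothetical illustration, a single meeting note in that structured format might look like the following (the field names are invented for this example, not an industry-standard schema):

```python
import json

# Hypothetical structure for one client meeting note; the field names are
# illustrative, not an industry-standard schema. Explicit metadata (date,
# purpose, topics) helps an LLM judge which note is current and relevant.
note = {
    "client_id": "smith-household",
    "meeting_date": "2025-03-14",
    "purpose": "annual review",
    "topics": ["529 funding", "Social Security timing", "risk tolerance"],
    "summary": (
        "Clients want to revisit delaying Social Security to age 70. "
        "Daughter's 529 is on track; no change to contributions."
    ),
    "marital_status": "married",  # stated once as metadata, not buried in prose
}

print(json.dumps(note, indent=2))
```

Because details like marital status live in explicit fields rather than buried in prose, a contradictory mention in an older note becomes far easier to detect and reconcile.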
But the key point here is that while current off-the-shelf tools aren't quite there yet – and real challenges do exist with getting there – there clearly is potential for LLMs to become even more useful in the future for analyzing and processing firm data.
Operational Consistency
The domain of operational consistency is one area where AI systems demonstrate clear superiority over human advisors. While human advisors excel at navigating the nuances of client values and emotions, AI brings a level of computational reliability and procedural discipline that fundamentally transforms what's possible in financial planning practice.
One major limitation of humans, in general, is inconsistency. Even top professionals will occasionally have suboptimal days – and therefore, make mistakes – due to illness, stress, or fatigue. Humans are also susceptible to cognitive overload when juggling complex cases, and our attention inevitably fluctuates throughout the day. AI, on the other hand, operates with unwavering reliability – performing tasks with identical precision, whether it's the first attempt of the day or the thousandth.
This reliability also extends beyond mere technical accuracy. AI systems don't call in sick, leave the firm for a competitor, or experience burnout. Moreover, they aren't going to rush through procedural steps when pushed up against a deadline or cut corners simply because no one is watching. They apply methodologies consistent with the protocols they are trained on, and that helps ensure their tasks are completed with precision and consistency, regardless of their mood, the client they're 'assisting', or the time of day.
This capacity for consistency is also beneficial in areas like compliance, where AI systems can not only adhere to a consistent process (as documented for them) but also very quickly implement changes when regulations shift and the need arises. Of course, AI must still be retrained when those changes occur, but it rarely requires the same training period and ramp-up time that humans inevitably need.
AI systems also avoid many of the cognitive biases that cloud how humans make decisions, providing a useful counterbalance to human processing tendencies that might otherwise subtly influence our work.
The distinction isn't simply that AI makes fewer mistakes than humans (though this is certainly true), but that AI systems can maintain strict standards regardless of workload, timing, or external circumstances. Try as we might, humans simply aren't as adept at operating this way.
Computational Efficiency/Ability
It almost goes without saying, but on top of the operational consistency of LLMs is their incredible computational efficiency. A well-prompted LLM can review, extract, and synthesize information from client statements or other financial documents in seconds – work that could take a human advisor hours.
Some industry AI tools, such as Thyme, already do this, and advisors could use standard LLMs (with proper security protections in place) to do something similar on their own.
But it's not just about the efficiency of completing the task; it's about efficiency combined with operational consistency. A task that might take a human advisor 30 minutes to complete from scratch – likely with a higher error rate, too – could instead be done by AI in seconds and then reviewed and verified by the advisor in about 15 minutes.
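As a simple sketch of what that extraction task might look like, consider the snippet below. The llm_complete helper is a hypothetical stand-in for a real LLM API call, and the field list is illustrative only – any real workflow would also need the security protections noted above:

```python
# A sketch of using an LLM to extract structured fields from a client
# statement. llm_complete() is a hypothetical stand-in for a real LLM API
# call, and the field list is illustrative only.

import json

EXTRACTION_PROMPT = """\
From the account statement text below, return JSON with exactly these keys:
account_number, period_end, ending_balance, total_contributions, total_fees.
Use null for anything not present. Return only the JSON.

Statement:
{statement_text}
"""

def extract_statement_fields(statement_text: str, llm_complete) -> dict:
    response = llm_complete(EXTRACTION_PROMPT.format(statement_text=statement_text))
    fields = json.loads(response)
    # Verification step: a human (or a rule-based check) should confirm key
    # figures against the source document before they enter the plan.
    return fields
```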
Implications For Industry Software
As 'cyborg advisors' increasingly adopt AI tools into their processes, one of the more interesting industry battles may revolve around which existing platforms will be the go-to AI interface for client-related tasks.
As highlighted earlier, there's a real challenge in today's marketplace to find something that can reliably bring together the disparate sources of information that are important to an advisor's practice.
The most likely contenders to become that central interface include:
- AI assistant tools
- Email platforms
- Document management systems
- CRM platforms
- Financial planning software
AI Assistant Tools
Currently, the tools that are probably the farthest along in consolidating information across platforms – though it's unclear how long that will last – are dedicated AI assistant tools such as Jump, Zocks, Vega, and Thyme. Essentially, these tools operate as middleware between client data (e.g., meeting recordings) and the LLMs themselves. Through structured prompts and a polished user interface, they make the process of extracting relevant information much more user-friendly for advisors than going straight to an LLM itself.
These tools can, to varying degrees, synthesize data from different sources and bring it together on their platform for writing meeting notes, setting meeting agendas, assisting with writing emails, and more. Integrations between tools like RightCapital and Jump are now even allowing for pushing plan edits identified by AI directly into a client's plan.
However, no platform has yet achieved a true 'all-in-one' synthesis, and that will probably be their biggest challenge in the long run. An advisor's CRM and financial planning software are arguably the platforms that currently come closest (or could, if used that way) to having a really holistic view of a household for the types of prompts that could be very useful for advisors. For example:
"How much does John Smith have in his daughter's 529 plan, and where is the plan held?"
"Show me all households that with members that reach RMD age this year and have a taxable brokerage account."
While some CRMs might play nice with these AI assistant tools in the short run (at least as it relates to pushing notes into CRM records), there's a real question about how interested CRMs will be in allowing client data to be pulled into these middleware-type platforms in the long run. Indeed, we already see specialized advisor CRMs such as Wealthbox and Advyzon announcing the development of their own AI tools that could presumably cut out AI assistants entirely. One of the big value propositions of using an AI tool embedded in a CRM is access to the rich CRM data, so will we really see CRMs wanting to give this away once they have their own note-taking tools?
Similarly, for planning software, there's a question of how much planning tools will want to share in the long run (or whether they will develop their own AI tools). And if AI assistant tools can't directly access planning software data, that significantly limits their ability to answer many relevant financial planning questions.
Email Platforms And Document Management
Though perhaps a less obvious candidate to consolidate information, email platforms are strong contenders for AI integration. Given the sheer amount of information that already flows through email (and the potential ease of routing other information flows through one's inbox), this could be another area to watch.
In my own experience, I have found Shortwave to be one of the most useful AI tools for assisting with client information. Because it can directly see my email – which includes not only communication with clients but also client updates that automatically get imported from various sources (e.g., text message archives, email contacts, email meeting summaries, automated updates about account activity from my custodial platform, etc.) – there's a fairly rich amount of contextual data that Shortwave can take advantage of. If an advisor also used a tool like Blueleaf, which sends automated portfolio updates, or a planning software that provides automated plan update emails, this tool could become even more useful.
Companies like Microsoft and Google are also heavily developing their own AI tools, which have the added advantage of being able to look more deeply into other areas of a business, given the suite of products those businesses often already use. For instance, an advisor using Outlook and other Microsoft products could prompt a Microsoft-built LLM (Copilot) to draw upon data from multiple systems to answer a question. Likewise, Google's Gemini can already perform simple functions like this directly in Google Workspace. A potentially big advantage here is how companies like Google and Microsoft could take advantage of data stored in Drive or SharePoint, although there's still work to be done to make this truly viable.
CRM
A CRM platform – particularly one with rich sources of client data flowing into it – is arguably one of the most logical places for an advisor's primary AI tool to be housed. CRMs are effectively in the business of managing access to integrations and getting as much information centralized as they can. Making data easy to digest does present a different challenge, but CRMs have a big leg up on other competitors when it comes to being the 'source of truth' for most firms and the primary place where crucial data is kept. Furthermore, as mentioned earlier, we already see specialized financial planning CRMs like Advyzon and Wealthbox developing specialized AI tools, as well as less niche competitors like Salesforce offering their "Einstein" AI tool.
The one weak point for CRMs may lie in their ability to interact with financial planning software. While some CRMs integrate with planning tools and can capture some high-level planning results, the functionality of such integrations is generally fairly limited, and it would be hard for CRMs to directly manipulate planning software in ways that would be meaningful. Without a CRM having direct access to planning software, there are many important questions advisors may not be able to get answers to operating solely within their CRM.
For instance, imagine an advisor asking, "What is John Smith's current plan probability of success?" It's possible a CRM could answer this simple question if there were an integration pushing this information through, but imagine if an advisor wanted to ask, "Tell me John Smith's probability of success if he retires three years earlier than currently planned?"
Without planning software built into a CRM or the ability to directly manipulate planning software from a CRM, this wouldn't be possible.
Financial Planning Software
This brings us to perhaps the most promising category of all as the 'home' for an advisor's primary AI tool: financial planning software. What makes it particularly intriguing is the potential to combine the richness of the planning insights with the potential boundaries that could be placed on planning output, helping to avoid the "fiduciary catch-22" that can come with AI-generated financial advice.
As Ben Henry-Moreland has noted, the opaque, "black box" nature of many AI tools, such as off-the-shelf versions of ChatGPT, Gemini, Claude, Grok, etc., could raise issues with how an advisor verifies that financial planning advice provided is in a client's best interest.
For instance, imagine we're able to feed relevant client data into an LLM that provides a client's Primary Insurance Amount for claiming Social Security. But instead of assuming the client will begin claiming benefits at Full Retirement Age, we're curious about the scenario if they were to claim their benefit at age 64 and 3 months. We ask the embedded LLM (wherever it is housed) to provide this information. We get an answer, but how do we know this calculation was done correctly? What if the LLM just happened to hallucinate a response or used the wrong calculation to adjust from the client's PIA to their age 64 and 3 months' benefit?
There are workarounds here. We could treat this as an initial recommendation and confirm all of the LLM's output. We could even ask the LLM to show its math and then confirm the calculation ourselves before deciding to trust the result.
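For this particular calculation, the deterministic check is straightforward. The sketch below applies the standard Social Security reduction formula – 5/9 of 1% per month for the first 36 months before Full Retirement Age (FRA), and 5/12 of 1% for each month beyond that – assuming an FRA of 67 and a hypothetical PIA:

```python
# A sketch of verifying an LLM's early-claiming calculation against the
# standard Social Security reduction formula, assuming a Full Retirement
# Age (FRA) of 67. The PIA and claiming age here are hypothetical.

def early_claiming_benefit(pia: float, months_early: int) -> float:
    """Reduce the Primary Insurance Amount for claiming before FRA:
    5/9 of 1% per month for the first 36 months, 5/12 of 1% thereafter."""
    first_36 = min(months_early, 36)
    beyond_36 = max(months_early - 36, 0)
    reduction = first_36 * (5 / 9) / 100 + beyond_36 * (5 / 12) / 100
    return pia * (1 - reduction)

# Claiming at age 64 and 3 months with an FRA of 67 is 33 months early.
pia = 2_400.00  # hypothetical monthly PIA
print(f"${early_claiming_benefit(pia, months_early=33):,.2f}")  # $1,960.00
```

If the LLM's answer doesn't match this check, that's a flag to investigate before the figure ever reaches a client.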
However, while this works for simple calculations such as the one above, what about asking for someone's probability of success if they were to retire at 64 instead of 67? This is far more complex. Advisors aren't likely to rely on the LLM itself to produce a new Monte Carlo simulation each and every time a plan is run for a client. Not only would the variation in plan design and outputs over time be significant, but the sophistication of such outputs would be no match for the best planning software (e.g., handling complicated cash flows, nuanced state tax assumptions, etc.). Moreover, advisors wouldn't have a consistent user interface, reports, or other tools and resources to give to clients. The 'unbounded' nature of AI presents issues here.
However, if the planning software were integrated with an LLM, it would allow the LLM to manipulate the planning software but with added safeguards, requiring that the calculations only come from the planning software. Essentially, this provides fiduciary-grade reliability and consistency to planning calculations, which is essential to avoid the AI 'fiduciary catch-22'. Furthermore, if an AI note-taker lived within planning software, then a client's plan could be updated or a list of potential updates for an advisor to verify and approve could be generated at the completion of a meeting (or perhaps even live, during the meeting). Not only is there a huge planning efficiency opportunity here, but this would also allow advisors to go deeper into certain areas and maintain more accurate plans.
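One plausible way to implement those safeguards is through LLM 'tool calling', where the model can only request calculations from a whitelist of planning-engine functions rather than generating numbers itself. The sketch below is a simplified illustration of that pattern – the PlanningEngine interface and tool names are hypothetical, not any vendor's actual API:

```python
# A simplified sketch of bounding an LLM with tool calls, so that all planning
# math comes from the planning software's engine rather than the model itself.
# PlanningEngine, run_monte_carlo(), and the tool schema are hypothetical.

import json
from dataclasses import dataclass

@dataclass
class PlanningEngine:
    """Stand-in for a real planning software calculation engine."""
    def run_monte_carlo(self, client_id: str, retirement_age: int) -> float:
        # In a real integration, this would call the vendor's deterministic
        # simulation engine; a fixed value stands in for the result here.
        return 0.87  # probability of success

TOOLS = {
    "run_monte_carlo": lambda engine, args: engine.run_monte_carlo(**args),
}

def handle_llm_request(engine: PlanningEngine, llm_tool_call: str) -> str:
    """Execute only whitelisted planning-engine tools the LLM asks for."""
    request = json.loads(llm_tool_call)
    tool = TOOLS.get(request["tool"])
    if tool is None:
        raise ValueError("LLM requested a non-whitelisted calculation")
    result = tool(engine, request["arguments"])
    return json.dumps({"tool": request["tool"], "result": result})

# The LLM drafts the request; the planning engine supplies the number.
call = '{"tool": "run_monte_carlo", "arguments": {"client_id": "smith", "retirement_age": 64}}'
print(handle_llm_request(PlanningEngine(), call))
```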
The planning opportunities at the firm level also get very interesting. Imagine being able to say things like:
- "Show me all clients with planning results in the range of X–Y."
- "For all clients with [State] Pension benefits, increase their benefits for 2026 by X%."
- "Show me all clients with HSAs held at [Financial Institution]."
With access to both crucial planning information and the ability to manipulate planning inputs, planning software could become far more important to an advisor's day-to-day operations.
Michael Kitces has posed the question: What would it take for planning software to be worth $15,000/year? He notes that the answer ultimately seems to come down to whether advisors view the software as a direct expense responsible for producing revenue (e.g., portfolio management software, which can cost $10–15k each year) or as overhead, as is more commonly the case with CRMs and planning software, which often cost closer to $1k per year. But as a supercharged AI assistant – one that's capable of both taking notes and building plans – could planning software move closer to being seen as a viable sales or labor-replacement tool that could command a premium rate?
The key point here is that direct access to the planning engine is a unique advantage that financial planning software would have as the 'home' for a cyborg advisor's AI.
Of course, there are other features that planning software doesn't generally have today that would be helpful for a solution like this (e.g., client meeting notes and other items that generally go into a CRM). It's possible we could even see planning software competing to offer more CRM-like capabilities. Which raises the question: would it be easier for financial planning software firms to develop or acquire tools that aggregate at least the most crucial items currently provided by a CRM (e.g., email, meeting notes, more detailed client profile information), or for CRMs to develop or acquire their own financial planning engines?
Ultimately, the emergence of the 'cyborg advisor' model represents more than a technological evolution in financial planning – it reflects a meaningful shift in the advisor-client relationship itself. The combination of human empathy and communication with the technical power of AI gives advisors the ability to serve clients even better than either humans or machines could on their own.
Rather than fearing replacement, advisors have an opportunity to embrace these new technologies as a way to extend their reach and impact. Of course, there will be new challenges associated with this evolution, particularly around figuring out how to use such tools safely and effectively for clients, and what to delegate to technology versus what to retain as a human responsibility. But the potential is undeniable. The future may favor those who use AI to enhance – not replace – the human touch.