Executive Summary
AI tools like ChatGPT and automated meeting notetakers offer meaningful time savings and productivity gains that can be a game-changer for supporting RIA marketing, client service, and investment research efforts. But the same capabilities that make AI so useful – namely, its ability to analyze and generate human-like content at scale – also introduce complex compliance and regulatory risks. In the absence of clear regulatory requirements for how to use these tools, RIAs have been largely left on their own to figure out how to address challenges associated with protecting client privacy, screening out inaccurate or biased data, and maintaining necessary books and records. However, this doesn't mean that AI tools must be avoided altogether. With appropriate due diligence and training, firms can benefit from the time-savings potential of AI while also managing associated compliance and security risks.
In this guest post, Rich Chen, founder of Brightstar Law Group, explores the key steps advisors can take to use AI productively and in alignment with regulatory requirements.
Mitigating risks starts with understanding how AI tools process, store, and secure user data. Tools that do not retain or train on user data are generally preferable, and firms can prioritize enterprise-grade solutions that offer configurable access controls and auditing features. Internal policies can further reduce risk by requiring pre-approval for AI tool use, training employees to avoid submitting sensitive client information, and using redaction tools to strip Nonpublic Personal Information (NPI) from prompts before submission.
Beyond these safeguards, firms can train employees on prompt engineering techniques to improve the relevance and accuracy of AI outputs. Formal review processes can also help catch hallucinations and factual errors – especially when AI-generated output is used in marketing content, investment research, or planning advice. Additionally, it's crucial to learn how to recognize and monitor for signs of bias in AI's output that may unintentionally influence advice or skew the tone of client-facing content. Because AI tools are trained on large and often uncurated datasets, their output can reflect common industry norms or marketing-driven assumptions. Ongoing audits and compliance reviews – especially for investment recommendations or public-facing content – can help firms detect and address biased or misleading information before it spreads.
Recordkeeping is another key compliance obligation. Under the SEC's Books and Records Retention Rule, RIAs must preserve documentation that supports client advice and decision-making, including AI-generated meeting notes, marketing content, and investment analyses. To stay compliant, firms should retain both the prompts and outputs from any material AI interactions, store meeting transcripts alongside summaries, and ensure archiving systems are structured in a way that allows for SEC retrieval. Some AI tools now support integrated archiving, making this process more scalable over time.
Ultimately, while AI tools offer transformative opportunities for increasing efficiency and scale, they also require a thoughtful approach to ensuring compliance with an RIA's fiduciary and other regulatory obligations. RIAs that invest in due diligence, training, and oversight can confidently harness the power of AI to enhance client service while maintaining the high standards of trust, care, and diligence that their clients and regulators expect.
Artificial Intelligence (AI) is rapidly changing how RIAs conduct business, and those who fail to adopt AI solutions may find themselves at a competitive disadvantage. Whether drafting customized client emails, summarizing meetings, analyzing large quantities of data for investment research, or preparing marketing materials, AI tools enable RIA firms to accomplish in minutes what once took hours or even days. However, despite the productivity benefits, AI comes with a new set of risks that cannot be ignored.
The reality is that the vast majority of AI tools do not independently think and reason. Rather, they predict outcomes based on large volumes of language and data, which may potentially expose sensitive information to privacy risks. These tools are designed to generate human-like responses by identifying patterns, not by understanding meaning. As a result, while the output they produce often appears fluent and convincing, it can also be inaccurate, incomplete, or biased.
As such, AI tools create complex compliance challenges for RIAs. How can RIAs protect client privacy when AI tools process and possibly train on sensitive information? What obligations exist to verify the accuracy of AI-generated content? Could AI outputs introduce conflicts of interest? Do client communications, drafted marketing pieces, and investment analyses need to be retained under the Advisers Act's Books and Records Retention Rule? These are issues the SEC has already begun to explore, and they are squarely within focus during SEC exams.
Because AI tools vary widely in function and design, the compliance risks they pose can differ significantly. Some of the tools that are most commonly used by RIAs today – including conversational assistants like ChatGPT, Perplexity, and Claude, along with AI notetakers that generate meeting summaries and insights – raise particularly pressing questions around data privacy, accuracy, bias, and recordkeeping. Understanding how these tools operate and where regulatory risks may emerge is essential for firms seeking to use AI while staying within the bounds of their regulatory obligations.
How AI Tools Like ChatGPT And Notetakers Actually Work
At their core, most public-facing AI tools and applications are powered by Large Language Models (LLMs). These models function like the brain or engine of an AI tool or application, trained on massive amounts of publicly available text in the form of books, websites, articles, and more. This training allows them to learn the structure of language, how sentences are formed, and the way different ideas relate to one another. When prompted with a question or statement, the model analyzes the input and predicts the most appropriate response based on patterns it has seen before. As noted earlier, LLMs are not capable of independent thought or reasoning.
Most tools associated with AI – like ChatGPT, Perplexity, or Claude – are built on one or more LLMs. Although these tools share a common foundation, they can vary significantly in how they function. Conversational assistants, for instance, incorporate additional systems that make them more interactive, responsive, and adaptable to user needs. While the core LLM is responsible for generating language based on patterns learned during training, the surrounding architecture introduces features such as memory, enabling the tool to retain and recall user-specific context across interactions.
Many also integrate external tools, allowing them to retrieve real-time information from the web, run code to analyze data, or even generate and edit images. These capabilities extend the LLM's usefulness beyond static text prediction, enabling it to respond dynamically in a wider range of situations. Together, these added features transform the predictive engine of an LLM into a versatile conversational assistant.
AI notetakers are a different product category but often incorporate a core LLM at various stages. They are designed to join virtual meetings, listen in real time, and produce transcripts, summaries, and action items. Their functionality involves several different technologies working together, starting with speech recognition software that converts audio into text that can be analyzed.
Once the spoken content is transcribed, an LLM may then step in to take the raw transcript and clean up the text, summarize the content, identify who said what, and flag key decisions or follow-up tasks. In some cases, it may even label portions of the discussion based on themes like "strategy," "budget," or "next steps".
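To make that two-step flow concrete, the following is a minimal sketch of how a notetaker-style pipeline might be wired together using the OpenAI Python SDK. The file name and model choices are illustrative assumptions, and commercial notetakers layer speaker identification, topic labeling, and other processing on top of this basic pattern.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Step 1: speech recognition converts the meeting audio into raw text.
with open("client_meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Step 2: an LLM cleans up the transcript and extracts a summary and action items.
summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarize this advisory meeting transcript, "
                                      "listing key decisions and follow-up tasks."},
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)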
So while tools like ChatGPT operate entirely in text, notetakers add another layer by first transcribing audio before engaging in similar language processing tasks. Both solutions ultimately rely on LLMs, but notetakers are tailored for meeting documentation and team collaboration.
Regardless of how different AI tools work, the key point is that none of these systems truly 'understand' the content they process. They don't reason like humans or hold beliefs or opinions. Instead, they are simply making high-probability guesses based on language patterns observed during training. Having seen billions of examples of how language is used, they gradually learn to mimic those patterns in their outputs.
For instance, if an AI tool is asked to write a client update about market volatility, it will draw on the structure of similar language it has seen to reproduce how financial writers explain risks (e.g., how they describe market trends and how they sign off with compliance-friendly disclaimers). It doesn't know what a bear market is, but it has seen enough examples to recreate how the term is typically explained.
This is why AI-generated content can sound so fluent and persuasive, yet still be inaccurate or misleading. These tools are not infallible; rather, the quality of their results depends on input data that may itself be flawed or biased. These limitations are important to understand, especially in a regulated environment where accuracy and objectivity are essential.
Still, these AI tools are specifically trained to communicate in a way that sounds human. After the tool is trained on its initial dataset, its responses are further refined using examples that real people have labeled as good or bad. This helps the tool learn which answers feel most natural, polite, or helpful.
As a result, the tone and style of AI outputs can be quite convincing, often mimicking how a skilled professional might communicate – answering questions with nuance, offering caveats, or providing explanations that build on prior statements. But that tone can be deceptively reassuring, leading users to place greater trust in the output than may be warranted. Given these risks, a deeper understanding of how AI tools work is essential for using them responsibly in a highly regulated environment.
How RIAs Can Safeguard Client Data When Using AI Tools
As noted earlier, AI tools routinely store user data and, in some cases, train on such data to improve performance. This becomes particularly problematic when outputs generated for unaffiliated third parties reflect sensitive information originally shared by another user.
For SEC-registered RIAs, these considerations must be evaluated in the context of Regulation S-P, the primary framework governing client privacy for Federally registered investment advisers.
Regulation S-P
Adopted by the SEC under the Gramm-Leach-Bliley Act, Regulation S-P governs the treatment of Nonpublic Personal Information (NPI) by RIAs, broker-dealers, and investment companies. It imposes two core obligations on RIAs:
- RIAs must provide clients with clear privacy notices, explaining what information is collected and how it is shared. Clients must also be given an opportunity to opt out of certain types of data sharing with non-affiliated third parties, especially for third-party marketing purposes.
- RIAs must adopt and implement written policies and procedures reasonably designed to protect client NPI from unauthorized access or use.
While Regulation S-P does not prescribe specific technical safeguards, it does require firms to ensure that any system touching client data – whether a server, vendor, or AI platform – is reasonably secure and appropriately configured.
With respect to the use of AI tools, it's vital for RIAs to clearly understand not only what the tool does but also how it processes, stores, and transmits user data to ensure client data remains protected.
Although Regulation S-P applies specifically to SEC-registered RIAs, most unregistered RIAs are subject to similar privacy regulations enforced by the Federal Trade Commission. State-registered RIAs are subject to various other state privacy regulations that are enforced by state securities regulators.
Best Practices For Protecting Client Data
There are numerous steps RIAs can take to safeguard client data. The most effective method is to use AI solutions that neither train on user data nor retain it longer than necessary. Fortunately, many enterprise-grade AI tools now offer configurations that align with this goal. OpenAI, for example, offers business solutions like ChatGPT Enterprise and API-based tools that do not use submitted data to train the underlying model.
RIAs performing due diligence on AI vendors can press for clear, written answers to questions such as:
- Does the platform retain or log user inputs? If so, for how long, and where are they stored?
- Are prompts and responses used to train or fine-tune the model?
- Is there an enterprise version that silos data and prevents cross-account sharing?
- Is data encrypted in transit and at rest?
- Can the firm configure access controls, user roles, or activity logs for compliance oversight?
If customer representatives for AI tools can't be reached directly, firms can refer to the AI platform's "Terms and Conditions" or "Terms of Use", which often include detailed data handling disclosures.
RIAs may also benefit by adopting policies and procedures requiring personnel to request approval before using AI tools for business purposes. These policies can include a review process to screen AI tools for data security risks and compliance with applicable regulations prior to approving them for use by firm personnel.
For firms that permit AI tools that do train on their inputs, there are still best practices that can be followed to safeguard client data, though compliance will require more proactive monitoring and vigilant review.
First and foremost, best practices in these cases begin with training employees to avoid inputting any client NPI into such AI tools. Several software providers and open-source tools offer automated removal of NPI; they work by scanning text for sensitive data – names, account numbers, addresses, Social Security numbers, and other identifiers – and either redacting or anonymizing it before processing.
These tools use pattern recognition and named entity recognition to identify sensitive content. Some even allow for the creation of custom dictionaries or logic rules (e.g., always removing client account numbers in a particular format or flagging unusual combinations of names and numbers). Using these redaction tools can offer a useful layer of protection, especially when testing new AI workflows or evaluating third-party services. Nonetheless, they should still be screened to understand how they use and store firm data and to ensure they don't introduce additional privacy risks.
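As a simplified illustration of the pattern-matching approach, the sketch below strips a few common NPI formats from a prompt before it is submitted. The patterns, account number format, and client name are hypothetical placeholders; production redaction tools add named entity recognition and far broader coverage.

import re

# Hypothetical patterns for common NPI formats; real tools pair pattern matching
# like this with named entity recognition for names and addresses.
NPI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\bACCT-\d{8}\b"),  # assumed firm-specific account format
    "EMAIL": re.compile(r"\b[\w.%+-]+@[\w.-]+\.[A-Za-z]{2,}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}
CLIENT_NAMES = ["Jane Exampleton"]  # custom dictionary of names to always remove

def redact_npi(text: str) -> str:
    """Replace recognizable NPI with labeled placeholders before submitting a prompt."""
    for label, pattern in NPI_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    for name in CLIENT_NAMES:
        text = text.replace(name, "[NAME REDACTED]")
    return text

print(redact_npi("Summarize the plan for Jane Exampleton, SSN 123-45-6789, account ACCT-12345678."))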
Another way to reduce exposure is through role-based access controls, which limit access to AI tools to only those personnel who need it. As discussed in the next section, restricting access to those who are qualified to use these tools can improve output quality, since more experienced users are likely to craft better prompts for the intended use cases.
Checking Accuracy Of Outputs And Avoiding Hallucinations
One area where RIAs must be especially vigilant is verifying the accuracy of machine-generated outputs. Whether drafting a client-facing communication in ChatGPT, summarizing a planning meeting with an AI notetaker, writing a blog post with help from Perplexity, or producing investment due diligence summaries highlighting research sourced from Claude AI, the risk of errors is real. Inaccurate outputs can create significant liability for RIAs and trigger violations of one or more anti-fraud provisions under the Advisers Act.
The Anti-Fraud Provisions Of The Advisers Act
Regardless of SEC registration status, RIAs are bound by obligations to ensure the accuracy of information and advice they communicate to clients. Many of these obligations are found in the anti-fraud provisions in Section 206 of the Advisers Act.
Section 206(1) prohibits intentional fraudulent conduct by investment advisers, including:
- Making material misrepresentations or omissions to clients or prospects;
- Participating in schemes to deceive clients; and
- Engaging in manipulative practices designed to mislead clients or gain an unfair advantage.
However, not all of the anti-fraud provisions require intentional misconduct to trigger a violation. For instance, Section 206(2) prohibits any "transaction, practice, or course of business which operates as a fraud or deceit upon any client or prospective client" – whether intentional or negligent. Prohibited conduct includes making false or misleading statements, even when they are made negligently rather than intentionally. This means that a violation can occur if an RIA fails to exercise the care that a reasonable RIA would under similar circumstances. While this has yet to be tested in a live case involving the use of an AI tool, it's reasonable to assume that the SEC would require RIAs to verify the accuracy of AI-generated outputs they use in their practice.
Section 206(4) applies only to SEC-registered RIAs and takes the standard one step further by prohibiting conduct that is fraudulent, deceptive, or manipulative in effect, even without intent to deceive or actual harm. In essence, this provision authorizes the SEC to create rules designed to prevent misleading practices, and violations of those rules would constitute violations of Section 206(4) itself.
Two particular rules under this section focus on false and misleading statements:
- Rule 206(4)-1 – The Marketing Rule. This rule targets "advertisements", which generally include any communication distributed to more than one person that is designed to attract clients or investors. This includes website content, social media posts, TV or radio broadcasts, podcasts, marketing webinars, brochures, pitchbooks, presentations, and other written or oral communications. A violation can occur based on negligence alone – intent is not required.
- Rule 206(4)-8 – Pooled Investment Vehicle Communications. This rule applies to investment advisers to pooled investment vehicles, such as hedge funds or private equity funds. It prohibits false or misleading statements "in connection with the offering, sale, or operation of the fund." This means that any false or misleading statements in marketing materials (such as pitch decks, fact sheets, etc.) or private fund offering documents (including any private placement memorandum) could trigger liability.
Section 207 of the Advisers Act prohibits any person from "willfully making any untrue statement of a material fact in any registration application or report filed with the SEC," including Form ADV. These filings must be accurate and complete. Misstatements – such as overstating assets under management, omitting disciplinary history, or mischaracterizing fee structures – can give rise to liability under this section.
Importantly, the key term here is "willfully," which has been interpreted to mean that the person voluntarily committed the act – not necessarily that they knew it was unlawful or acted with intent. For instance, if an RIA submits a Form ADV with inflated AUM figures, that conduct may still be deemed willful, even if the RIA didn't realize it violated any law.
Best Practices To Avoid Inaccurate AI-Generated Output
To minimize the risks associated with inaccurate AI outputs, firms can establish thoughtful, disciplined procedures – beginning with proper training for employees authorized to use AI tools.
One of the most overlooked drivers of AI reliability is the quality of the input given to the AI tool to generate a response (often referred to as the "prompt"). Prompt engineering refers to the practice of structuring inputs to improve the accuracy and relevance of outputs. Think of a prompt as a question posed to a very well-read but highly literal assistant. If the question is vague, poorly worded, or assumes too much, the answer is more likely to be off-target. However, if the prompt provides a detailed description of the objective, appropriate background facts, and helpful context, the tool is much more likely to generate accurate and relevant results.
For example, asking an AI tool, "What is a good portfolio allocation for retirement?" may result in a general and potentially inaccurate answer. On the other hand, reframing the prompt to say, "Explain the factors an RIA should consider when creating a portfolio for a client who is five years from retirement and risk-averse given that the client currently has $200,000 in annual income and a net worth of $1,000,000," significantly improves the likelihood that the response will be useful and appropriately nuanced.
Given these dynamics, firms may benefit from providing structured training before allowing personnel to use any approved AI tools. Such training can emphasize the importance of the following (a simple prompt template illustrating these practices appears after the list):
- Avoiding ambiguous terms or open-ended questions;
- Including context or constraints relevant to the task (e.g., "assume US-based regulations");
- Providing helpful examples where applicable;
- Requesting sources, citations, or rationale where appropriate; and
- Specifying the tone, purpose, or audience of the output.
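The sketch below shows one way these elements might be combined into a reusable prompt template. The field names and wording are illustrative assumptions, not a prescribed format.

# A hypothetical prompt template that bakes in the practices above:
# context, constraints, audience, and a request for sources or rationale.
PROMPT_TEMPLATE = """You are assisting a US-registered investment adviser.
Task: {task}
Client context: {context}
Constraints: assume US-based regulations; this is educational content, not individualized advice.
Audience and tone: {audience}; plain English, balanced, no promotional language.
Cite sources or explain the rationale for any factual claims."""

prompt = PROMPT_TEMPLATE.format(
    task="Explain the factors to consider when allocating a retirement portfolio.",
    context="Five years from retirement, risk-averse, $200,000 annual income, $1,000,000 net worth.",
    audience="An individual client",
)
print(prompt)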
Providing employees with real-world examples and training in a safe environment can mitigate the risk of false or misleading outputs. Even so, prompt quality alone can't guarantee that an AI tool will respond with perfect accuracy. As noted earlier, LLMs don't 'know' facts in the traditional sense. Rather, they predict what text is likely to follow based on statistical patterns in training data. Unfortunately, this can still lead to inaccurate outputs despite effective prompt engineering.
Because even well-crafted prompts can produce inaccurate responses, it's critical to implement verification steps. This is especially relevant when AI outputs are used to support client communications or investment recommendations. Verification strategies can look different depending on the type of tool utilized. Conversational tools like ChatGPT, Perplexity, and Claude may include source citations or links to the information referenced in their answers. These can be useful, but they still must be verified. Just because an AI tool cites a resource or references a study does not mean that the source is legitimate, correctly cited, or even real. AI tools sometimes fabricate sources entirely.
Steps To Verify AI-Provided Sources:
- Click through and verify that the content exists and supports the proposition made in the output;
- Evaluate the source's credibility and timeliness; and
- Use reputable secondary sources to confirm key facts (e.g., verifying employment statistics via a reputable government source like the Bureau of Labor Statistics of the U.S. Department of Labor).
If no source is provided, the user should independently corroborate any claims with reliable sources before incorporating them into client-facing materials or internal documents. Failure to catch these inaccuracies at an early stage can lead to their proliferation through firm materials, compounding the problem.
AI notetakers and other summarization tools present similar concerns. These tools aim to capture the essence of a conversation but may misattribute speakers, misinterpret tone, drop key context, or emphasize the wrong points.
Best Practices For Reviewing AI-Generated Meeting Notes:
- Always retain the full transcript and audio file of the meeting.
- Require a designated reviewer – ideally someone who attended the meeting – to scan the summary for errors or omissions.
- Correct or supplement the AI-generated notes with clarifying language as needed.
- Clearly label summaries as machine-generated and subject to verification when stored or shared internally.
Some RIAs also insert human-reviewed versions of notes into their CRM system while archiving the original transcript and AI output for compliance and quality assurance purposes.
A second layer of review may be particularly valuable for any materials incorporating AI-generated content intended for broad distribution, such as marketing materials.
Another best practice to improve reliability is to maintain a Record of Prompts and Outputs, which is a saved log of what was typed into the AI tool (the prompts) and the responses that were generated (the output). This simple practice can help firms keep a clear trail of how AI was used and how the information may have influenced decisions or communications.
Benefits Of Keeping A Record Of Prompts And Outputs:
- Enables retrospective review in the event of a complaint or regulatory inquiry
- Provides a feedback loop to improve prompt quality over time
- Permits managers to identify usage patterns that may suggest over-reliance or improper applications
- Allows for refinement of prompts and consistency across the firm
Some AI platforms offer built-in retention of prompts and outputs, particularly for enterprise accounts. If the tool does not offer this feature, browser extensions or secure note-taking apps can be used to document interactions.
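For firms documenting interactions manually, even a lightweight log can support this practice. The sketch below appends each prompt/output pair to a JSON-lines file; the file location and field names are illustrative assumptions, and many firms would route these records into their existing archiving or CRM systems instead.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_prompt_log.jsonl")  # assumed location for the firm's log

def log_interaction(user: str, tool: str, prompt: str, output: str) -> None:
    """Append one prompt/output pair, with a timestamp, to a JSON-lines log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction(
    user="jdoe",
    tool="ChatGPT",
    prompt="Draft a client update on recent market volatility.",
    output="(AI-generated draft would be stored here.)",
)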
Detecting And Mitigating Bias In AI Tools
Artificial intelligence is often described as impartial. It has no emotions, doesn't play favorites, and processes inputs with mechanical consistency. In practice, however, AI is only as neutral as the data it's trained on and the prompts it's given. For RIAs whose fiduciary duty requires them to prioritize clients' interests ahead of their own, the use of biased AI outputs is a genuine regulatory risk with potentially severe consequences.
While much of the current discussion around bias in AI revolves around societal and demographic fairness (e.g., race, gender, or culture), there's another, less-discussed form of bias that is highly relevant to RIAs: bias that introduces or conceals conflicts of interest in financial advice. For example, if an AI tool generates output that favors proprietary products, underrepresents alternatives, or frames investment options in ways that increase RIA compensation, the RIA may face regulatory liability.
This type of bias can appear subtle. It may be embedded in the language used by an LLM or introduced through the prompt used, or it may show up in how the output is structured. If left undetected, biased output can expose a firm to regulatory scrutiny and undermine client trust.
Part of the challenge of understanding how bias manifests in AI tools stems from how LLMs are trained. These models absorb patterns from vast quantities of text that were not curated with regulatory considerations in mind. Training data can reflect the perspectives, values, and marketing content of a wide range of contributors, including financial firms, product issuers, asset managers, journalists, and industry commentators. The result is output that may include untested or one-sided assumptions, such as promoting an asset class favored by large asset managers or index providers. It may overemphasize certain asset classes, underplay others, or recommend strategies based on popularity rather than client-centric needs.
Even if a model's training data were completely neutral, bias could still be introduced through how the tool is prompted by the user. An RIA who subconsciously frames a prompt in a way that assumes a particular product is preferable may elicit a response that simply echoes that assumption. Over time, this can compound and lead to even more bias. A firm may unintentionally develop internal workflows that favor a certain type of product, strategy, or vendor, not because it is best for the client, but because the tool has become a mirror for the firm's own biases. Failing to detect bias in the use of AI tools can trigger a potential breach of an RIA's fiduciary duties of loyalty and/or care owed to its clients.
Fiduciary Duties Of Care And Loyalty
The duty of loyalty requires an RIA to put the client's interests ahead of its own and to avoid or fully disclose any conflicts of interest that could influence the advice being given. When an RIA uses AI tools to support investment recommendations or financial planning, the duty of loyalty can be compromised if those tools produce biased outputs, especially if the RIA fails to detect and address that bias.
For example, if an AI tool consistently recommends investment products that pay the RIA higher fees – or produces biased results that disadvantage certain clients based on demographics, account size, or investment preferences – and the RIA either knows of the issue or fails to investigate, the firm may be breaching the duty of loyalty. In such cases, the RIA has effectively allowed a third-party tool, without proper controls and oversight, to influence advice in a way that benefits the RIA at the client's expense.
The duty of care, meanwhile, obligates advisers to provide advice that is accurate, informed, and based on reasonable investigation and diligence. Advisers are also required to understand the tools and methodologies they use to make investment recommendations or decisions.
For example, if the AI tool relies on biased training data that favors certain asset classes, demographics, or financial behaviors, it may offer skewed recommendations that are unsuitable for certain clients. Because the RIA is ultimately responsible for the advice given, blind reliance on AI outputs without meaningful oversight or validation would constitute a failure to exercise the diligence and care expected of an RIA. That failure can harm clients and expose the RIA to regulatory liability for breaching the duty of care.
In practice, the breach of an RIA's fiduciary duty of care or loyalty in the use of biased AI tools can lead to violations of the Advisers Act's anti-fraud provisions, as discussed in the previous section, even if the RIA did not act intentionally or with knowledge of the violation.
Best Practices For Managing The Risk Of AI Bias
There are several best practices RIAs can utilize to detect potential bias in the use of AI tools. The first of these is learning what types of signals to look for.
Recognizing Signals That May Indicate Bias
While none of the following signals alone confirms bias, their presence should prompt further review. Even subtle language patterns can reflect embedded assumptions that, over time, skew the quality and impartiality of advice.
- Over-representation of proprietary products. Does the tool routinely recommend products that are managed by or affiliated with the RIA, even when better alternatives exist?
- Limited comparison of options. Are suggestions framed in a way that minimizes competing solutions or ignores lower-cost approaches?
- Overconfident recommendations. Are investment ideas offered without appropriate context, such as risk disclosures or a clear client suitability framework?
- Repeating sales language. Does the language of the output mirror marketing copy more than neutral financial commentary?
- Absence of client-specific nuance. Are strategies recommended without accounting for specific client circumstances, such as time horizon, liquidity needs, or tax sensitivity?
Training Staff To Recognize Bias
Just as employees can be trained in prompt engineering to elicit more accurate answers, they should also be trained to recognize how their own framing of queries can introduce bias. For example, consider the following prompt, which would inevitably lead the tool to construct an answer favoring the firm's proprietary fund:
Explain why our proprietary balanced fund is a good fit for retirees.
A better prompt would encourage more objective outputs, and might be worded this way:
Compare the pros and cons of a proprietary balanced fund and an ETF portfolio for a risk-averse retiree.
Firms can also conduct workshops using real-world examples to help their teams see how small changes in language can lead to dramatically different outputs.
Implementing Oversight To Address Potential Conflicts
When AI-generated output recommends proprietary products or relationships that result in increased compensation to the firm, additional controls can help mitigate potential conflicts of interest. One approach is to insert automated prompts reminding users to add disclosures when necessary. Another option is to require an additional layer of compliance review to assess the output for bias and to provide appropriate conflict of interest disclosures.
In addition to these front-line controls, firms can also conduct periodic audits of AI-generated output, similar to existing reviews of financial models or CRM notes. These audits can help detect signs of embedded bias, particularly in outputs that involve products where the firm has a financial interest. Audits can be conducted at a frequency based on how heavily a tool is used and should include random sampling of relevant outputs. They should be documented and reviewed by qualified personnel. If patterns are found, the firm can take corrective steps, whether by revising prompts, selecting different tools, or adjusting internal workflows.
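As one illustration of what such an audit step might look like, the sketch below pulls a random sample from the hypothetical prompt/output log described earlier and flags entries containing terms that may warrant a conflict-of-interest review. The flag terms and sample size are illustrative only, and flagged items would still require human compliance judgment.

import json
import random
from pathlib import Path

LOG_FILE = Path("ai_prompt_log.jsonl")  # the hypothetical log illustrated earlier
SAMPLE_SIZE = 10
FLAG_TERMS = ["proprietary", "our fund", "in-house", "affiliated"]  # illustrative signals

records = [json.loads(line) for line in LOG_FILE.read_text(encoding="utf-8").splitlines()]
sample = random.sample(records, min(SAMPLE_SIZE, len(records)))

for record in sample:
    flags = [term for term in FLAG_TERMS if term in record["output"].lower()]
    status = f"FLAGGED ({', '.join(flags)})" if flags else "ok"
    print(f"{record['timestamp']}  {record['user']}  {status}")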
Evaluating Tools Through Comparison And Transparency
Comparing outputs from multiple AI tools is another effective way to identify bias. If two or more tools arrive at different recommendations from the same prompt, that divergence may signal that further analysis is needed. This is particularly important for use cases involving investment due diligence or recommendations.
Where using multiple tools is impractical, RIAs can still compare AI-generated content against trusted internal sources such as legacy research or external databases to detect any significant inconsistencies.
Finally, tool transparency is critical to mitigate the risk of bias. RIAs should avoid AI tools that lack visibility into how they operate – including information about their training data, model behavior, and use of user prompts. Such a lack of visibility makes it extremely difficult to evaluate whether an AI tool's outputs are introducing bias into firm workflows.
Retaining Records From AI Use
As RIAs embrace AI-powered tools like ChatGPT and AI notetakers, they must ask if records related to the use of such AI tools must be retained pursuant to Rule 204-2 of the Advisers Act, often referred to as the "Books and Records Retention Rule".
The Books And Records Retention Rule
If a firm relies on AI to generate client correspondence, summarize meetings, or even draft internal memos that form the basis for investment advice, those records may fall squarely within the coverage of the Books and Records Retention Rule. This rule requires RIAs to maintain a significant amount of information and documents pertaining to their business, including:
- Communications sent to clients or received from clients, including recommendations and advice
- Records of transactions and written agreements related to advisory services
- Documents supporting the basis of investment advice given to clients
- Internal communications that relate to the firm's recommendations or investment strategies
- Advertisements and other marketing materials
With some exceptions, these records must generally be maintained for five years from the end of the fiscal year in which they were last modified – with records from the first two years kept in an easily accessible place – and stored in a manner that is secure, organized, and available to the SEC upon request.
While the Books and Records Rule was written long before conversational personal assistants and AI notetakers existed, it nonetheless applies to records that fall into the above categories when generated through the use of AI tools.
Importantly, not every use of AI will trigger a recordkeeping requirement. For example, a junior employee using ChatGPT to brainstorm a blog post or rephrase internal talking points probably wouldn't be creating a record that needs to be retained. However, as soon as AI-generated content is utilized to create client- or public-facing communications, draft investment recommendations, summarize a meeting, or inform an internal analysis that leads to a decision, it likely requires retention.
Some scenarios that may trigger Rule 204-2 retention obligations include:
- Using an AI notetaker to generate summaries of client meetings, especially when those summaries are relied upon to inform recommendations or follow-up actions
- Drafting a client letter or financial plan summary using an AI tool
- Relying on AI-generated research or commentary when making investment decisions
- Using AI to prepare content distributed as part of a marketing campaign or newsletter
In these scenarios, what matters is not who or what created the content, but how it is used. The SEC has made clear that its focus is on the substance and use of records, not the method by which they were created. However, the increasing popularity of AI tools creates a real risk that records supporting advice and analyses are not retained.
For instance, if an employee fails to retain AI-generated content that underpins an investment recommendation, the RIA may face challenges in evaluating the quality of the analysis and understanding how the investment recommendation was formulated. This could result in a breach of the RIA's fiduciary duty of care as well as regulatory violations relating not only to the Books and Records Retention Rule, but also to other substantive provisions of the Advisers Act.
In an SEC exam, the inability to produce records that support recommendations, communications, or decisions can result in deficiencies or, worse, referrals to the Division of Enforcement.
Best Practices For AI Recordkeeping
RIAs adopting the use of AI tools must integrate such content into their broader recordkeeping infrastructure. For many firms, this requires rethinking storage access, documentation processes, and vendor selection.
To ensure record accessibility, RIAs should flatly prohibit firm personnel from using personal accounts when accessing AI tools for business purposes. Content generated through personal accounts may not be accessible to the RIA.
Additionally, RIAs should lean toward AI tools with robust archiving capabilities. As explained above, many AI tools now offer enterprise versions specifically designed for business users. These versions often include features that are critical for compliance, such as the ability to log, store, and export prompts and responses; maintain audit trails; and segregate user data.
When selecting an AI vendor, firms should target platforms that allow:
- Centralized control over employee access;
- Searchable, exportable logs of all user interactions; and
- Integration with existing archiving or document management systems.
These features help ensure that records are preserved in accordance with regulatory obligations. Firms should strongly consider avoiding consumer-grade tools that lack these controls.
If an AI tool doesn't support automatic archiving, firms should take steps to retain outputs manually. For example:
- Save both the prompt and response into a file or CRM system when using ChatGPT to draft client communications.
- Store transcripts and summaries of client meetings generated by AI notetakers in a secure location accessible for compliance review.
- Retain internal research or analysis generated by AI tools with other documentation supporting recommendations.
Even when the AI content is later edited or paraphrased, preserving the original output helps show the evolution of the material and protects the firm from allegations of unsubstantiated outputs.
And finally, RIAs should adopt policies and procedures specifically delineating what types of records must be retained when the firm utilizes AI tools, as well as additional controls designed to spot check the firm's books and records to ensure that required records are being appropriately retained.
The emergence of artificial intelligence tools such as ChatGPT and AI notetakers is transforming the way RIAs communicate, analyze data, and manage workflows. These technologies offer enormous potential for operational efficiency and enhanced client service support, but they also raise critical regulatory risks that must be addressed thoughtfully and proactively.
By developing internal policies, training staff, and maintaining robust oversight, RIAs can integrate AI into their practices in a way that strengthens compliance and safeguards client trust, positioning the firm to lead with innovation and integrity!