AI tools like ChatGPT and automated meeting notetakers offer meaningful time savings and productivity gains that can be game-changing for RIA marketing, client service, and investment research efforts. But the same capabilities that make AI so useful – namely, its ability to analyze and generate human-like content at scale – also introduce complex compliance and regulatory risks. In the absence of clear regulatory requirements for how to use these tools, RIAs have been largely left on their own to figure out how to address challenges associated with protecting client privacy, screening out inaccurate or biased data, and maintaining necessary books and records. However, this doesn't mean that AI tools must be avoided altogether. With appropriate due diligence and training, firms can benefit from the time-savings potential of AI while also managing associated compliance and security risks.
In this guest post, Rich Chen, founder of Brightstar Law Group, explores the key steps advisors can take to use AI productively and in alignment with regulatory requirements.
Mitigating risks starts with understanding how AI tools process, store, and secure user data. Tools that do not retain or train on user data are generally preferable, and firms can prioritize enterprise-grade solutions that offer configurable access controls and auditing features. Internal policies can further reduce risk by requiring pre-approval for AI tool use, training employees to avoid submitting sensitive client information, and using redaction tools to strip Nonpublic Personal Information (NPI) from prompts before submission.
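To make the redaction step above concrete, the following is a minimal sketch of a pre-submission NPI filter. The patterns, placeholder labels, and function name are illustrative assumptions, not a prescribed standard – a production redactor would need far broader coverage (client names, addresses, account-number formats) plus human review before any prompt leaves the firm.

```python
import re

# Illustrative patterns only: real NPI takes many more forms than these.
NPI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_npi(prompt: str) -> str:
    """Replace likely NPI with labeled placeholders before submitting
    a prompt to an external AI tool."""
    for label, pattern in NPI_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt
```

A screen like this is cheap to run on every outgoing prompt, but it is a backstop, not a substitute for training employees to keep sensitive client information out of prompts in the first place.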
Beyond these safeguards, firms can train employees on prompt engineering techniques to improve the relevance and accuracy of AI outputs. Formal review processes can also help catch hallucinations and factual errors – especially when AI-generated output is used in marketing content, investment research, or planning advice. Additionally, it's crucial to recognize and monitor for signs of bias in AI output that may unintentionally influence advice or skew the tone of client-facing content. Because AI tools are trained on large and often uncurated datasets, their output can reflect common industry norms or marketing-driven assumptions. Ongoing audits and compliance reviews – especially for investment recommendations or public-facing content – can help firms detect and address biased or misleading information before it spreads.
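One lightweight way to operationalize those review processes is a first-pass screen that routes AI-generated drafts containing promissory or absolute language to human compliance review. The sketch below is illustrative only – the flagged terms are assumptions, not an SEC list, and a keyword screen cannot catch hallucinations or subtle bias; it simply ensures a human looks at the riskiest drafts.

```python
# Illustrative terms only; a firm's compliance team would maintain its
# own list based on its policies and applicable marketing rules.
REVIEW_FLAGS = ["guaranteed", "risk-free", "no-loss", "certain to"]

def needs_compliance_review(draft: str) -> list[str]:
    """Return the flagged phrases found in an AI-generated draft,
    so a non-empty result can trigger human review before publication."""
    lower = draft.lower()
    return [term for term in REVIEW_FLAGS if term in lower]
```

Anything the screen misses still flows through the firm's normal pre-publication review; the goal is to prioritize attention, not to automate the judgment away.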
Recordkeeping is another key compliance obligation. Under the SEC's Books and Records Retention Rule, RIAs must preserve documentation that supports client advice and decision-making, including AI-generated meeting notes, marketing content, and investment analyses. To stay compliant, firms should retain both the prompts and outputs from any material AI interactions, store meeting transcripts alongside summaries, and ensure archiving systems are structured so that records can be readily retrieved during an SEC examination. Some AI tools now support integrated archiving, making this process more scalable over time.
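For firms without integrated archiving, the prompt-and-output retention described above can be as simple as an append-only log. The sketch below is a minimal illustration – the field names and file layout are assumptions, not a prescribed SEC format, and retention periods and retrieval workflows still need to follow the firm's own books-and-records procedures.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_interaction(log_path: Path, user: str, tool: str,
                        prompt: str, output: str) -> None:
    """Append one AI interaction (prompt plus output, timestamped)
    to a JSON-lines archive file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because each interaction is one self-describing line, the archive can be searched, exported, or handed to an examiner without any special tooling.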
Ultimately, while AI tools offer transformative opportunities for increasing efficiency and scale, they also require a thoughtful approach to ensuring compliance with an RIA's fiduciary and other regulatory obligations. RIAs that invest in due diligence, training, and oversight can confidently harness the power of AI to enhance client service while maintaining the high standards of trust, care, and diligence that their clients and regulators expect.