Enjoy the current installment of “weekend reading for financial planners” – this week’s edition kicks off with the news that the CFP Board has decided to issue a second draft of its proposed changes to the CFP Standards of Conduct, which is expected to be released in mid-December with a comment period running in January of 2018. Also in the news this week was a new report from PIABA that points out that 5 of the 13 “public” members of FINRA’s Board of Governors (who are supposed to represent consumers and have no ties to the industry) actually do have substantive ties to the financial services industry, which raises the significant conflict-of-interest concern that the majority of FINRA’s Board is actually composed of industry representatives (whereas it is supposed to fill 13 of its 24 board seats with non-industry public governors).
From there, we have a number of articles on mergers and acquisitions of advisory firms, including a discussion of some of the key factors that drive an advisory firm’s valuation (beyond just its size and revenue), what an “earn-out” is and why it’s often used as part of the terms of an advisory firm acquisition, and how the ongoing wave of industry consolidation is likely to progress in the coming years as a small subset of mega-firms begin to emerge.
We also have several investment-related articles, from a discussion of how the “p-hacking” phenomenon (of doing so many tests for statistical significance that, ironically, it becomes inevitable that there will be a large number of false positives) may have inappropriately spurred the “investment anomalies” research, a look at the rapid rise and fall of many ETFs (which also appears to be tied to how many ETFs are being launched on spurious anomalies research), and an explanation of how Research Affiliates formulates its own forward-looking return assumptions (important both for portfolio design, and for determining reasonable investment assumptions to plug into financial planning software).
We wrap up with three interesting articles, all focused around the theme of financial services industry change: the first raises the question of whether, as the Broker Protocol unravels, it’s time for the SEC to step in and formally codify a version of the Broker Protocol for all firms (broker-dealer and RIA) under Regulation S-P; the second looks at whether it’s time to ditch life insurance illustrations and instead require that insurance companies provide more detailed projections (which more clearly explain which costs are guaranteed, and which are uncertain assumptions that could be manipulated); and the last suggests that perhaps the best way to fix the compliance woes at broker-dealers is for independent B/Ds to become more stringent in their hiring processes in the first place, vetting prospective brokers and weeding out the bad ones, so that they don’t have to create compliance procedures that cater to the “lowest common denominator” broker (to the detriment of all the rest of the good and honest advisors at the broker-dealer). Which is especially important in today’s environment of broker-dealer consolidation, where large swaths of brokers – good and bad – may be absorbed into other broker-dealers in the coming years!
Enjoy the “light” reading!
Weekend reading for November 18th/19th:
CFP Board To Propose New Revisions To Conduct Standards With Second Comment Period (Chelsea Emery, Financial Planning) – After taking in feedback from more than 1,300 public comments, plus 8 public forums with CFP certificants, the CFP Board has announced that it will be releasing a second draft version of its proposed changes, with a second comment period to follow. The decision to issue a second draft proposal isn’t entirely surprising, as advocacy organizations like the FPA, as well as the comment letter submitted by yours truly, suggested that there were enough substantive changes needed in the first draft to merit a revised proposal with a second comment period. The second draft itself is anticipated to be released before December 25th, with the second comment period open for 30 days, running from January 2nd to February 2nd; the CFP Board has indicated that it aims to complete the process and publish the final conduct rules by the end of the 1st quarter of 2018, with the new standards to take effect at the beginning of 2019. Thus far, the CFP Board has not indicated which areas in particular will be modified, beyond noting that “the board really looked hard at the practicalities of how the proposed standards would apply to different business models”, an apparent nod to the challenges of how the CFP Board’s fiduciary requirements – including new notification and disclosure documents – might apply in the broker-dealer community in particular. Though in public comments at this week’s Schwab IMPACT conference, CFP Board CEO Blaine Aikin maintained that the second draft will have “meaningful” changes but won’t dilute the principles-based fiduciary focus of the rules.
FINRA Board Rife With Conflicts Of Interest (Ann Marsh, Financial Planning) – A new report issued on Wednesday by the Public Investors Arbitration Bar Association (PIABA) is raising concerns about the structure of FINRA’s public Board of Governors, noting that of the 24 members, 10 are supposed to be from the “industry” while 13 are designated as public members intended to represent investors (with the final seat held by FINRA’s CEO), but in practice, five of the 13 public members are actually tied directly to financial services firms as well, which means the majority of the board represents the industry instead of consumers. The PIABA report specifically calls out FINRA Board Chairman William Heyman – who is also the Chief Investment Officer of Travelers – along with Eileen Murray (who is co-CEO of hedge fund Bridgewater Associates), Shelley Lazarus (also on the board of Blackstone Group, which has multiple broker-dealer subsidiaries), Carol Davidson (also on the board of Legg Mason), and Joshua Levine (a co-founder of single family office Kita Capital Management). In addition, PIABA also notes that many of FINRA’s public board members may not be able to realistically fulfill their duties in a crisis, due to being “over-boarded” – i.e., participating on so many different boards that they couldn’t realistically put in the necessary time, as several of the public board members in question are sitting on 5+ other boards simultaneously (in addition to often having other work duties as well). Notably, though, the PIABA report praised new FINRA CEO Robert Cook, as it was his decision to release further detail about FINRA’s public governors that allowed these concerns to be surfaced in the first place.
How To Make An RIA Attractive To Buyers (Charles Paikert, Financial Planning) – While most advisory firm owners focus on sheer size and revenue to grow the value of the business (given the popular valuation rule of thumb of 2X revenue), the reality is that two firms with identical revenue (and AUM) can still have substantially different valuations based on the underlying characteristics of the business and how it operates. First and foremost, this is because buyers ultimately focus on profits and free cash flow, not just revenue – as the popular 2X revenue valuation rule is really just an easy way to translate an 8X free cash flow multiple with a ‘typical’ advisory firm profit margin of 25%. Which means the advisory firm’s margins (the percentage of gross revenue that comes out as profits at the end of the day) are really what matter most, and firms that run inefficiently and less profitably will command a lower valuation. In addition, the sustainability of the cash flow, and the firm’s ability to sustain organic growth, also matter greatly to buyers, with the firms that have the fastest (sustainable) growth commanding valuations as high as 9 to 11X free cash flow. Conversely, firms where the clients are still reliant on the founding principal – and thus are at risk of leaving if/when the founder retires – will receive lower valuations (even for an equivalent revenue and profit margin), as will firms that provide different customized portfolios to every client (because ultimately individually customized portfolios simply aren’t as scalable and efficient for the business owner).
Although ultimately, different types of buyers will seek out different types of practices, as “financial buyers” focus most on growth and cash flows (and don’t want firms where principals are retiring), while “strategic acquirers” tend to want larger firms that bring immediate scale, and “peer-to-peer” buyers may be the most flexible about buying out retiring owners but are most likely to focus on having a good cultural fit (to maximize client retention).
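The rule-of-thumb arithmetic above is easy to sanity-check. Here is a minimal sketch – the 8X free-cash-flow multiple, 25% margin, and 9-to-11X growth premium come from the article, while the 15%-margin firm is a hypothetical contrast of my own:

```python
# Why the "2X revenue" rule of thumb is really a cash-flow multiple in disguise:
# revenue multiple = free-cash-flow multiple * profit margin.

def implied_revenue_multiple(fcf_multiple: float, profit_margin: float) -> float:
    """Translate a free-cash-flow multiple into an equivalent revenue multiple."""
    return fcf_multiple * profit_margin

# The 'typical' firm from the article: 8X free cash flow at a 25% margin
# works out to exactly the popular 2X revenue rule.
assert implied_revenue_multiple(8, 0.25) == 2.0

# A hypothetical less efficient firm (15% margin) at the same 8X cash-flow
# multiple commands only ~1.2X revenue, while a fast grower valued at 11X
# free cash flow (the high end cited above) with a 25% margin fetches 2.75X.
print(round(implied_revenue_multiple(8, 0.15), 2))   # 1.2
print(implied_revenue_multiple(11, 0.25))            # 2.75
```

The point of the exercise: at identical revenue, the margin and the growth-adjusted multiple drive the valuation spread, not the top line.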
An Example Of Structuring Earn-Outs For RIAs (Mercer Capital) – One of the fundamental challenges of any business acquisition is the risk that the clients and revenue – and therefore, the profits – aren’t sustained after the purchase closes, and the risk is especially acute in the world of financial advisors, where it’s hard to know how many clients will actually stay through the transition, and RIAs have few other “tangible” assets (unlike buying a business with factories and machinery). And the gap between what the seller wants and what the buyer will pay can become even more extreme once projected growth is taken into account, as ongoing growth can rapidly increase the value (and bottom line profits) of a growing advisory firm, but buyers are often wary of paying in advance for growth that may or may not actually materialize after a change in ownership. The solution to this for many acquirers is to structure at least part of the purchase price as an “earn-out”, where the seller receives a portion of the ongoing revenue or profits after the deal closes for a period of years… which incentivizes the seller to do their best to ensure that clients really do transition, as that’s what maximizes the value for the seller. For instance, a prospective earn-out deal might stipulate that the seller has to stick around for 3 years to manage the transition, and during that period the seller will be paid 5X for any profits that occur above-and-beyond the baseline level of profits at the time the deal closes (which, notably, may be less than the 8X profits that are purchased at the time the deal closes, but at least gives the seller some substantial ongoing incentive to sustain the growth during the transition period).
Notably, the payment terms of an earn-out are often substantial, which also means it’s permissible to attach substantive non-compete/non-solicit and other terms to the agreement – as the seller is now being “paid well” in exchange for those restrictive covenants that may be appealing to the buyer.
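One way to sketch those earn-out mechanics numerically – note that the dollar amounts below, and the interpretation of “5X profits above baseline” as a multiple applied to each transition year’s excess, are hypothetical assumptions for illustration, not terms from any actual deal:

```python
# Hypothetical earn-out: during the transition period, the seller is paid a
# multiple only on profits ABOVE the baseline level at the time of closing.

def earn_out_payment(baseline_profit: float,
                     transition_year_profits: list[float],
                     multiple: float = 5.0) -> float:
    """Total earn-out paid on profit growth above the closing baseline."""
    return sum(multiple * max(profit - baseline_profit, 0)
               for profit in transition_year_profits)

# A firm closes with $500k of annual profit; over a 3-year transition the
# seller grows profits to $520k, $550k, and $600k.
payout = earn_out_payment(500_000, [520_000, 550_000, 600_000])
print(payout)  # 5 * (20k + 50k + 100k) = 850000.0
```

Note that the `max(..., 0)` clamp means the seller earns nothing extra in a year where profits fall below the baseline – which is exactly the transfer of retention risk from buyer to seller that the structure is designed to achieve.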
Advisor M&A: Who Will Be The Last Firm Standing (Mark Tibergien, Investment Advisor) – The volume of mergers and acquisitions amongst advisory firms has continued to rise steadily every year this decade, as more and more financial advisors reach an age where they’re ready to retire, and/or simply because their business has grown to the point that they don’t want to actually have to “manage” the business anymore and just want to get back to working with clients. And the rise in sellers is being met by a rise in interested buyers as well, from “roll-up” firms (that take private equity dollars to invest and buy advisory firms, aiming to eventually sell the aggregated whole for more than the sum of the parts), to “strategic consolidators” (that aren’t necessarily buying firms to re-sell the enterprise in a few years, but simply want to acquire firms to grow more quickly and achieve economies of scale). The challenge, however, is that these kinds of consolidation trends are much harder in “people-dependent” businesses like financial advisory firms, compared to more process- or brand-dependent businesses (e.g., funeral homes or real estate agencies or even medical practices). In fact, in the famous Harvard Business Review article “The Consolidation Curve” by Graeme Deans, Fritz Kroeger, and Stefan Zeisel, it was observed that an industry’s consolidation process typically takes about 25 years (and while independent financial advisors have been around that long, the consolidation wave really only began about 10 years ago). 
The first wave is the rise of professionally managed firms (which has certainly happened amongst financial advisors as a subset of firms have transitioned from lifestyle practices to standalone businesses); the second is when major buyers begin to emerge to consolidate the leaders (as the biggest competitors are often the most appealing to buy for rapid scale), ultimately capturing as much as 15% to 45% of market share; in phase 3, a handful of leaders emerge in a “Focus Phase” where they swing back to trying to expand their core business organically and outgrow (what’s left of) the competition; and then ultimately the final 4th stage is “Balance and Alliance”, where a small subset of mega-firms reigns (in traditional brokerage, this would be firms like Goldman Sachs, Merrill Lynch, JP Morgan, and Morgan Stanley), and firms have to form alliances to strategically carve up the limited growth that’s left. In this context, it would appear that the advisory industry is somewhere in stage 2 so far, as the leading acquirers try to grow to a large enough size to capture a material portion of total market share, implying that there is still a great deal of M&A activity left to come!
The Trend That Is Ruining Finance Research (Michael Edesess, Advisor Perspectives) – The foundation of the recent rise in “quantitative asset management” (i.e., “quant” strategies) is the research in recent decades on various investment “anomalies” (market phenomena that “shouldn’t” happen in efficient markets, but do), which raises the potential for alpha by exploiting those now-known anomalies, and which has spurred investment funds to hire reams of quants to try to spot (and then exploit) new market anomalies as well. The anomalies research in the investment realm was arguably formalized by the work of Fama and French, who first identified the Value and Small-Cap anomalies, though ultimately the approach is predicated on the original work of Ronald Fisher from nearly 100 years ago, who first came up with the methodology to determine whether differences are “statistically significant” and not likely to be the result of mere chance (or what we now often know as testing for a “p-value” below a certain threshold like 0.05). The problem, however, is that while this is a reasonable way to test any particular anomaly or scientific investigation, the methodology doesn’t scale when adopted widely. After all, the whole point of the statistically significant threshold of 0.05 – or 5% – is that 5% of the time, the result actually will be due to random chance and not an actual difference (or anomaly). The 5% threshold – in essence, a 5% false-positive rate – seemed reasonable enough when applied to just one study. But when 1,000+ quants do 1,000+ anomaly studies all at once, it would be expected to find 50+ false positives, or anomalies that are “statistically significant” but really are nothing more than random chance.
Notably, this challenge that testing for 0.05 p-values is problematic when applied repeatedly by lots of researchers isn’t unique to finance – the phenomenon of “p-hacking” has become a concern in a wide range of research areas, where replications of prior statistically significant studies fail to reproduce the original statistically significant results (because they were just random chance in the first place). And the situation is made worse in finance in particular, where it is common to use (least-squares) linear regressions, which end up being unduly influenced by a small subset of outlier data points (which may themselves have been just random chance). The fundamental point: beware the reliability of the next quant investment manager who says they have found a statistically significant anomaly to exploit, as if you talk to enough quant managers, it’s a virtual certainty you’ll find a few statistically significant results that were ultimately nothing more than random chance.
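The false-positive arithmetic is simple enough to simulate directly. A quick illustrative sketch – the pure-noise “returns”, the normal approximation in place of an exact t-test, and the random seed are all my own assumptions:

```python
import math
import random
import statistics

def noise_anomaly_pvalue(rng: random.Random, n: int = 100) -> float:
    """Two-sided p-value testing whether pure-noise 'returns' have a nonzero mean."""
    sample = [rng.gauss(0, 1) for _ in range(n)]
    t_stat = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    # Normal approximation to the t-distribution (close enough at n = 100)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t_stat) / math.sqrt(2))))

# Run 1,000 "anomaly studies" on data that is random noise by construction...
rng = random.Random(42)
false_positives = sum(noise_anomaly_pvalue(rng) < 0.05 for _ in range(1000))
print(false_positives)  # roughly 50 "significant" anomalies, all of them spurious
```

Since every series is noise by construction, every p < 0.05 result here is a false positive – precisely the ~5% rate that makes widespread anomaly-hunting so unreliable.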
Say Hello To FAIL, My New ETF (Allan Roth, Financial Planning) – The “typical” process for how new ETFs are launched is that an investment manager crunches the data on thousands of potential factors that may be driving performance (which is easier and easier in a world of big data and powerful computers), identifies the factors that led to superior investment results, crafts an explanation of why and how those factors led to superior performance, and then launches the fund for others to take advantage of the factor. Yet while this increasingly popular process drove a whopping 247 new ETFs (and ETNs) in 2016… it also led to 128 ETFs and ETNs closing as well. The problem? When enough prospective ETF managers test enough factors, it’s a virtual certainty that at least a few will find “anomalous” results simply due to random chance… except ultimately, it really is nothing more than random chance, and the subsequent regression to the mean causes the future performance of the actual ETF to fail to live up to the back-tested performance that identified the factor in the first place. Writ large, Morningstar has found that the majority of strategic-beta ETFs have failed to deliver in the past 1-3 years. Notably, this doesn’t necessarily mean that it’s impossible to identify factors that might really lead to better results, but at a minimum, the proper process should really be to start with a reasonable theory about why a certain strategy should outperform, and then test whether it’s actually valid (and assess the risk that it regresses to the mean in the future). Because that’s the only way to avoid quants who simply p-hack their way to a “statistically significant” factor or other anomaly (as false positives are inevitable if enough factors are tested) that won’t likely hold up in the future.
How To Compare ETFs When Cost No Longer Matters (Ryan Vlastelica, MarketWatch) – For years, the easiest way to differentiate between relatively similar ETFs was the fact that some were much less expensive than others, and choosing the cheaper of otherwise-similar ETFs was an easy win for clients. However, as the ETF wars bring costs lower and lower, there’s also less and less room for one ETF to be materially cheaper than another in the first place, which raises the question of how else to compare ETFs and which other factors are germane. The issue came to a head this month as Vanguard launched a new Corporate Bond ETF (VTC) with a 0.07% expense ratio, which is cheaper than its primary competitor, the iShares iBoxx Corporate Bond ETF (LQD) at a 0.15% expense ratio. After all, while the Vanguard ETF is barely half the price of the iShares ETF, the cost difference amounts to a savings of just $8 for every $10,000 invested, which may take a back seat to other factors. For instance, the sheer size of LQD – at nearly $40B of assets – means that it trades an average of 7 million shares daily, with a $0.01 bid/ask spread, which the newer Vanguard fund may have trouble replicating (at least in the near term), and is important because bouts of wider bid/ask spreads can more than overwhelm the cost savings of a few basis points in expense ratio (as was recently highlighted in the TD Ameritrade controversy when it replaced popular and high-volume Vanguard ETFs with newer and lower-volume SPDR Core shares). More generally, other factors like the fund’s investment methodology, tracking error, tax efficiency, and the trust of the asset manager’s brand, will also likely move to the forefront, as the cost differences become smaller and smaller and ETFs approach the zero bound on expense ratios.
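To put the dollar amounts in perspective, here is a back-of-the-envelope comparison – the two expense ratios are the article’s, but the bid/ask spread figures and the one-round-trip-per-year assumption are illustrative guesses of mine:

```python
# Total annual holding cost = expense-ratio drag + round-trip bid/ask spread cost.

def annual_holding_cost(position: float, expense_ratio: float,
                        round_trip_spread: float,
                        round_trips_per_year: float = 1.0) -> float:
    """One year of expense drag plus spread paid on round-trip trades."""
    return position * (expense_ratio + round_trip_spread * round_trips_per_year)

position = 10_000
print(round(position * (0.0015 - 0.0007), 2))  # 8.0 -> the $8 per $10,000 cited above

# Incumbent fund: 0.15% expense ratio but a penny-wide spread (~0.01% assumed)
incumbent = annual_holding_cost(position, 0.0015, 0.0001)
# Hypothetical newer fund: 0.07% expense ratio but a wider 0.10% spread
newer = annual_holding_cost(position, 0.0007, 0.0010)
print(round(incumbent, 2), round(newer, 2))  # 16.0 17.0 -> "cheaper" fund costs more
```

In other words, under these assumed spreads the nominally more expensive fund is the cheaper one to actually hold and trade – which is the article’s core point about expense ratios no longer telling the whole story.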
Pricing Stocks & Bonds For Future Return Assumptions (Jim Masturzo, Research Affiliates) – Formulating forward-looking return expectations is a necessary reality for financial advisors. Whether to determine an appropriate asset allocation based on those capital market assumptions, or simply to settle on reasonable rates of return to use when recommending appropriate amounts to save (for accumulators) or spend (for retirees), it’s necessary to formulate some view of future market returns. Notably, even Research Affiliates emphasizes that this process isn’t necessarily about trying to “foretell what the future holds”, per se, but simply to formulate a baseline of “if everything happens the way it’s normally expected, what expected returns are realistic” (recognizing that actual results will vary from that expectation because of unexpected shocks that can’t be fully anticipated). In the case of bonds, this is often quite straightforward, because the bond itself has contractual payments of interest and principal over a time horizon, which means as long as the bond itself doesn’t default, the expected return can be identified. And in fact, even when bonds are rolled, the yield on a 10-year Treasury predicts 75% of the actual total return of (rolling) 10-year bonds over a 10-year time horizon. With other investments (e.g., equities), the process is more complex, but the principle is still the same. Research Affiliates finds that 15% of 10-year returns in equities can be explained by dividend yields (analogous to the interest yield on bonds), though the rate of real (dividend) growth, and changes in P/E multiples, also account for a substantial portion of long-term equity returns (which means it’s necessary to come up with assumptions for each to project forward). For advisors with further interest, Research Affiliates also provides an interactive tool to view their current (real) return expectations and volatility based on these models.
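The equity “building block” logic above can be sketched as a simple sum. To be clear, the shape of the decomposition (yield, plus growth, plus valuation change) follows the article, but the input numbers below are hypothetical examples, not Research Affiliates’ actual assumptions:

```python
# Expected real equity return ~ dividend yield + real growth + valuation change.

def expected_equity_return(dividend_yield: float,
                           real_growth: float,
                           valuation_change_per_year: float) -> float:
    """Approximate real expected return from the three building blocks."""
    return dividend_yield + real_growth + valuation_change_per_year

# E.g., a 2% dividend yield, 1.5% real growth, and an assumed -0.5%/year
# drag as an elevated P/E multiple reverts toward its historical mean:
print(round(expected_equity_return(0.02, 0.015, -0.005), 4))  # 0.03, i.e. 3% real
```

The practical takeaway for planners: each building block needs its own explicit assumption, and the valuation-change term is the one most sensitive to starting P/E levels.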
To ‘Fix’ The Broker Protocol, Bring In The SEC (Sharron Ash, Investment News) – The foundation of the Protocol on Broker Recruiting was two-fold: to support client freedom of choice, and to reduce the volume of (bi-directional) lawsuits as brokers change from one firm to another, by making it clear what information brokers could (and could not) take with them when switching firms. Yet as one of the Broker Protocol’s original founders has noted, today’s environment is substantively different than when the Protocol started. At the time, NY Attorney General Eliot Spitzer was leveraging New York’s criminal laws to challenge questionable business practices, and so a major aspect of establishing the Broker Protocol was to reduce this legal exposure (as there was fear that getting access to another firm’s Client List could be deemed criminal “Economic Espionage” for stealing trade secrets), even as the founding wirehouses also likely hoped that the Broker Protocol would help to grease the recruiting wheels to allow them to attract more brokers from competitors than they might be losing (though obviously not all of them could win simultaneously). Instead, though, the problem of the past decade is that all the wirehouses combined have seen net outflows of brokers to the independent channels, even as the reformulation of wirehouses after the financial crisis has increasingly homogenized them and limited their own ability to differentiate from each other. And thus, it’s perhaps not entirely surprising that Morgan Stanley announced last month that it was dropping out of the Protocol, in what many believe will ultimately lead to a total unraveling of the agreement, and at a minimum will likely lead to a rise in lawsuits for brokers leaving broker-dealers (as they drop from the Protocol), and potential issues under the SEC’s Regulation S-P about prospective breaches of client privacy.
Which raises the question – is it time for the SEC to step in, and perhaps as a part of the rest of its “harmonization” efforts, codify its own regulatory version of the Broker Protocol, valid for all movements between broker-dealers and from the broker-dealer to RIA channels… especially since, when brokers cannot change firms for the benefit of their clients (or client transfers are delayed as broker-dealers try to hold onto the money), it’s ultimately the clients that may be most harmed in the process?
It’s Time For The Life Insurance Industry To Ditch Illustration Comparisons (Barry Flagg, Investment News) – In order to purchase a permanent life insurance policy, prospective buyers need some understanding of how it will likely perform in the future, and existing policyowners need a way to assess how their policies are performing and whether they are on track. Yet Flagg notes that in practice, because life insurance illustrations are one of the only points of reference that prospective buyers or existing policyowners have, there is a substantial incentive for insurance companies to game the decision-making process. As a result, hypothetical illustrations and projected policy values risk having their assumptions manipulated (by the insurer, the distributor, or the agent/broker), culminating in overly long projections with thousands of calculations and copious “fine print” footnotes (which means clients can’t likely take it all in, and then end up making under-informed decisions), which in turn leads to a risk of client disappointment or outright client complaints (if the actual results don’t match the hypothetical illustrations and comparisons). Though perhaps the biggest issue is simply that the complexity of life insurance policy illustrations puts so much burden on the client that they have to “work too hard”, which undermines trust, hinders the advisor-client relationship, and can damage relationships with key Centers of Influence as well (who learn not to trust life insurance policy illustrations). So what’s the alternative? Flagg suggests that all insurers should be required to provide their “detailed expense pages” (which provide a year-by-year schedule of all policy expenses), to truly be able to evaluate and compare policies (and if better disclosed, would be easier to compare and benchmark against peer-group alternatives to identify truly competitive and high-quality products).
IBDs Can Protect Good Advisers From Bad Recruits (Steve Youhn, Investment News) – A major pain point in many broker-dealers is the rise of FINRA fines for the inappropriate actions of problem brokers, which ultimately eats up available capital that might have otherwise been reinvested in the platform for the benefit of all the other good advisors. And at best, compliance departments institute processes and procedures that try to limit the exposure of the broker-dealer to such problem behavior… effectively creating a “lowest common denominator” approach to compliance that drags all of their good advisors down to the processes designed to prevent the one worst broker from doing something inappropriate. In other words, the few bad brokers are disproportionately gobbling up the resources for everyone else (in the form of compliance costs and regulatory fines). Which raises the question of whether independent broker-dealers should be doing more to police themselves and the brokers they recruit in the first place. Youhn suggests that broker-dealer compliance departments should instead provide clearer vetting guidelines to the business development teams up front, establishing clear parameters for which brokers simply will not be permitted (e.g., by establishing a uniform cap on the number of compliance disclosures allowable for new recruits, or by deciding which particular disclosure events are “no fly zones” that will result in an absolute and automatic decline), and always conduct full background checks to verify that all debts and liens have been disclosed (as unfortunately, the U-4 is not always up to date, especially for brokers being recruited who have an incentive to hide information until after they get on board with their new firm).
And the issue is especially important given the looming wave of broker-dealer consolidation, which means some B/Ds may soon be acquiring large chunks of (good and bad) brokers at once, and need to be mindful to separate the wheat from the chaff. Or stated more simply: perhaps broker-dealer compliance wouldn’t have to be so onerous for good advisors if the broker-dealer did a better job of vetting and weeding out the bad brokers in their midst in the first place?
I hope you enjoyed the reading! Please leave a comment below to share your thoughts, or make a suggestion of any articles you think I should highlight in a future column!
In the meantime, if you’re interested in more news and information regarding advisor technology, I’d highly recommend checking out Bill Winterberg’s “FPPad” blog on technology for advisors as well.