
10 August 2023 · 8 minute read

Proposed SEC AI regulations a prototype for Canada

Combatting bias, discrimination, and conflicts of interest in AI-enhanced investment decisions and investor interactions

In an interview with The New York Times published on August 7, 2023, United States Securities and Exchange Commission (“SEC”) Chair Gary Gensler commented on the risk AI poses to the financial system. He worried that the financial system could become reliant on a small number of foundational models, increasing the risk of financial crashes due to “herding”, a phenomenon in which investors all rely on the same or very similar information and predictive modelling. These comments are part of a push by the SEC to regulate AI technologies before these risks manifest in the system.

On April 6, 2023, the SEC’s Investor Advisory Committee (“IAC”) wrote a letter to SEC Chair Gary Gensler on the “Establishment of an Ethical Artificial Intelligence Framework for Investment Advisors”. In the letter, the IAC warned that “a significant number of investment advisory firms utilize computer code for determining appropriate asset allocation recommendations for their clients. Whether the asset allocation advice is communicated via digital engagement or via a human, it is imperative these programs are tested for bias and discrimination.” The IAC called on the SEC to establish clear ethical frameworks around the use of automated systems by businesses regulated by the SEC, including expanding guidance on risk-based reviews of the use of artificial intelligence under Rule 206(4)-7 to cover compliance with three key tenets: (1) Equity; (2) Consistent and Persistent Testing; and (3) Governance and Oversight. These tenets would form the basis of rules establishing protections against bias and discrimination within algorithms employed by regulated entities. The SEC is currently developing a framework to address these risks.

On July 26, 2023, the SEC released proposed rules to address perceived issues with conflicts of interest created by the use of AI in securities dealings. In particular, the new rules seek to eliminate, or neutralize the effect of, certain conflicts of interest associated with broker-dealers’ or investment advisers’ interactions with investors through these firms’ use of technologies that optimize for, predict, guide, forecast, or direct investment-related behaviours or outcomes.

Evaluating, testing, and documenting AI technologies

The proposed rules impose a prescriptive process for evaluating, testing, and documenting a firm’s use of certain AI technologies (“covered technologies”), defined as any “analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes.” The proposed definition is designed to capture predictive data analytics (“PDA”)-like technologies, such as AI, machine learning, or deep learning algorithms, neural networks, natural language processing (“NLP”), or large language models (including generative pre-trained transformers), as well as other technologies that make use of historical or real-time data, lookup tables, or correlation matrices, among others, and is intended to be forward looking, capturing new technologies as they develop. The rules, however, are focused more narrowly on the use of this technology to predict, guide, or forecast investment-related behaviours. This could include providing investment advice or recommendations, but it also encompasses design elements, features, or communications that nudge, prompt, cue, solicit, or influence investment-related behaviours or outcomes from investors.

The SEC proposal elaborates further that “The proposed definition would apply to the use of PDA-like technologies that analyze investors’ behaviors (e.g., spending patterns, browsing history on the firm’s website, updates on social media) to proactively provide curated research reports on particular investment products, because the use of such technology has been shown to guide or influence investment-related behaviors or outcomes. Similarly, using algorithmic-based tools, such as investment analysis tools, to provide tailored investment recommendations to investors would fall under the proposed definition of covered technology because the use of such tools is directly intended to guide investment-related behavior.

“As an additional example, a firm’s use of a conditional autoencoder model to predict stock returns would be a covered technology. Similarly, if a firm utilizes a spreadsheet that implements financial modeling tools or calculations, such as correlation matrices, algorithms, or other computational functions, to reflect historical correlations between economic business cycles and the market returns of certain asset classes in order to optimize asset allocation recommendations to investors, the model contained in that spreadsheet would be a covered technology because the use of such financial modeling tool is directly intended to guide investment-related behavior. Likewise, covered technology would include a commercial off-the-shelf NLP technology that a firm may license to draft or revise advertisements guiding or directing investors or prospective investors to use its services.”
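
To make the spreadsheet example concrete, the sketch below shows the kind of correlation-matrix model the SEC describes. It is a minimal illustration in Python; the asset classes, return figures, and tilting rule are all hypothetical. Because its output is used to guide an allocation recommendation to an investor, even a model this simple would fall within the proposed definition of covered technology.

    import numpy as np

    # Illustrative monthly return history for three hypothetical asset classes
    # (rows = months; columns = equities, bonds, commodities).
    returns = np.array([
        [ 0.021, 0.004,  0.013],
        [-0.012, 0.006, -0.008],
        [ 0.018, 0.002,  0.011],
        [-0.005, 0.007, -0.002],
        [ 0.009, 0.003,  0.006],
    ])

    # The historical correlation matrix -- the "correlation matrix ... or
    # similar method or process" contemplated by the proposed definition.
    corr = np.corrcoef(returns, rowvar=False)

    # A toy allocation rule: start from an equal-weight portfolio and tilt
    # toward the asset least correlated with the others (diversification).
    base = np.full(3, 1 / 3)
    avg_corr = (corr.sum(axis=0) - 1) / (corr.shape[0] - 1)
    tilt = (1 - avg_corr) / (1 - avg_corr).sum()
    recommendation = 0.5 * base + 0.5 * tilt  # weights shown to an investor

    print(dict(zip(["equities", "bonds", "commodities"], recommendation.round(3))))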

Investor interaction

The proposed rule covers “investor interaction” and generally defines that interaction “as engaging or communicating with an investor, including by exercising discretion with respect to an investor’s account; providing information to an investor; or soliciting an investor.” This definition is intended to be broad and would capture, for example, “a behavioral feature on an online or digital platform that is meant to prompt, or has the effect of prompting, investors’ investment-related behaviors”, “an email from a broker recommending an investment product when the broker used PDA-like technology to generate the recommendation”, and the use of covered technologies to provide “individual brokers or advisers with customized insights into an investor’s needs and interests [which] the broker or adviser may use … to supplement their existing knowledge and expertise when making a suggestion to the investor during an in-person meeting.”

The rule would require firms to take affirmative steps to eliminate conflicts of interest in the use of these tools, to ensure investor interests are prioritized over those of the investment firm. This requires audit capabilities that can identify when algorithms may place the interests of the firm above those of the investor; in other words, firms must evaluate any “reasonably foreseeable potential use” of such technologies to identify this risk. The proposed conflicts rules do not mandate a particular means by which a firm must evaluate its use or potential use of a covered technology, or identify a conflict of interest associated with that use or potential use. Instead, the firm may adopt an approach appropriate to its particular use of covered technology, provided the approach is sufficient for the firm to identify the conflicts of interest associated with how the technology has operated in the past. For example, a firm might instruct personnel with sufficient knowledge of both the applicable programming language and the firm’s regulatory obligations to review the source code of the technology, review documentation regarding how the technology works, and review the data considered by the covered technology (as well as how it is weighted). This may include developing “explainability” features within the technology where the algorithms are particularly complex and opaque (such as in deep learning models).
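
As a sketch of what such a review might surface, the example below assumes a simple linear scoring model with hypothetical feature names and weights. Reviewing a deep learning model would require more sophisticated explainability tooling (for example, feature-attribution methods), but the principle is the same: identify which inputs the technology considers and how heavily firm-favourable inputs are weighted.

    # Minimal conflicts-review sketch: inspect which inputs a simple scoring
    # model weights, and flag any that encode the firm's own economics.
    # All feature names and weights here are hypothetical.
    model_weights = {
        "expected_return": 0.45,
        "risk_score": -0.30,
        "fee_revenue_to_firm": 0.20,   # input that benefits the firm
        "investor_time_horizon": 0.05,
    }

    # Inputs a reviewer has classified as favouring the firm rather than the
    # investor; building this list is itself part of the evaluation.
    FIRM_FAVOURABLE = {"fee_revenue_to_firm", "payment_for_order_flow"}

    def flag_conflicts(weights, threshold=0.0):
        """Return firm-favourable inputs carrying non-trivial weight."""
        return {name: w for name, w in weights.items()
                if name in FIRM_FAVOURABLE and abs(w) > threshold}

    conflicts = flag_conflicts(model_weights)
    if conflicts:
        print("Potential conflict of interest; firm-favourable inputs:", conflicts)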

Regular testing and evaluation

Next, the proposed rule requires that firms regularly test each covered technology both prior to and subsequent to its implementation. Such testing may surface information that would not be apparent simply from reviewing the covered technology’s source code, its documentation, or the underlying data it uses.

The testing and evaluation must focus on whether a covered technology considers any firm-favourable information in an investor interaction, or information favourable to a firm’s associated persons. If it does, the firm should evaluate the conflict and determine whether such consideration involves a conflict of interest that places the interest of the firm or its associated persons ahead of investors’ interests and, if so, how to eliminate, or neutralize the effect of, that conflict of interest. Some factors carry a greater risk of prioritizing firm interests over those of investors than others, and this would have to be considered in the analysis.
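
One simple form this testing could take is an ablation test: re-run the technology with a suspected firm-favourable input neutralized and check whether the output changes. The sketch below is a hypothetical illustration (the scoring function, weights, and investor profile are invented), not a procedure prescribed by the proposed rule.

    # Ablation test: does neutralizing a firm-favourable input change the
    # recommendation score? If so, the technology "considers" that input and
    # the associated conflict must be evaluated. All values are hypothetical.
    def score_product(features, weights):
        return sum(weights[k] * features[k] for k in weights)

    weights = {"expected_return": 0.45, "risk_score": -0.30,
               "fee_revenue_to_firm": 0.20}
    profile = {"expected_return": 0.07, "risk_score": 0.40,
               "fee_revenue_to_firm": 0.90}

    baseline = score_product(profile, weights)

    # Zero out the firm-favourable input and re-score.
    neutralized = dict(profile, fee_revenue_to_firm=0.0)
    delta = baseline - score_product(neutralized, weights)

    if abs(delta) > 1e-9:
        print(f"Score depends on firm revenue (delta = {delta:+.3f}); "
              "evaluate whether this places the firm's interest first.")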

What to expect in Canada

This overview of the proposed conflicts rule summarizes just one set of rules addressing one set of risks created by the deployment of AI in the investment industry. There is no doubt the SEC will promulgate additional rules covering other risks, such as data-set flaws, negligent AI decisions, and bias affecting issues other than conflicts of interest. This work will undoubtedly influence regulators in other jurisdictions, including Canada.

In fact, Canadian regulators have already considered related issues (e.g., the regulation of online advisors) and are considering AI-specific issues, both in terms of firm-created risk and in terms of AI-enhanced regulation. Firms should expect more specific AI guidance to be issued at some point in the future, perhaps modelled, at least in part, on the SEC’s proposed rules. Prudent firms will assess these risks now, rather than waiting for the inevitable regulations, as these risks can create liability under current laws as well as potential future exposure under revised rules. Firms that do not have robust risk management, identification, evaluation, and testing programs in place for the use of these technologies may find themselves on the wrong end of civil suits and regulatory enforcement in the coming months and years.