Could the SEC's AI Rule “Weaken” Advisers' Fiduciary Duty?


The SEC's proposed AI rule threatens to weaken advisers' fiduciary duty, according to a top attorney for the Investment Advisers Association.

The danger of the new rule is that it proposes a “whole new framework for dealing with conflicts” related to technology tools, IAA General Counsel Gail Bernstein told WealthManagement.com during the association's annual compliance conference this week.

“What's going to be very challenging is that everyone understands what the good faith framework means, and by creating a new rule that overlays something on top of that, I think they're potentially weakening the fiduciary duty,” she said. “It's almost like you're proposing a rule for the sake of proposing a rule, as opposed to 'Is there a gap and should we fill it?'”

SEC officials say the proposed rule would limit conflicts of interest that arise when brokerage firms or asset managers use AI tools to make investment recommendations or trading decisions. SEC Chairman Gary Gensler has argued that investors urgently need the rule in a world where they can be micro-targeted with products and services.

However, the IAA argued that the proposed solution is far broader than the problem it aims to address. In an unusual move for the organization, the IAA recommended that the commission withdraw the rule entirely.

A final version of the rule is expected to be released this spring.

In a conversation with Bernstein at the conference, held during his final week as director of the SEC's Division of Investment Management, William Birdthistle said regulators shouldn't wait until a crisis hits before responding.

“If anyone here is a parent, you don't wait until the child is in harm's way. You can act in advance if you can see clearly what's coming,” Birdthistle said. “Forecasting and prediction are hard, and nobody gets it right all the time. But this is an area where I think the degree of risk is very visible.”

Bernstein countered that while generative AI was “scary” and required careful risk governance, the current proposal misses the mark.

Jennifer Klass, a partner with K&L Gates, echoed earlier concerns that the technology covered by the rule may extend well beyond AI and large language models to long-established tools. Klass described the rule's definition of covered technology as “broad enough to drive trucks through,” a definition that has been at the center of much industry criticism.

“All we really know from the definitions is that it's about 'investment-related behaviors or outcomes,' which, if you're an investment adviser, is pretty much all you care about,” she said. “The concern is that a covered technology could be almost anything.”

Bernstein said she believed the SEC recognized the definitions were too broad and hoped the commission was thinking about how to make them “more rational.” However, even if the definitions were narrowed, she said the IAA would still prefer that the SEC withdraw the rule.

“The question I asked William Birdthistle this morning was, 'What is this really about and what are you trying to do?'” she said. “It's not clear that adjusting the definition will answer that question.”

Klass questioned whether the SEC needed a new rule specifically for AI in the first place, since the existing Advisers Act rules are technology-neutral and an adviser's fiduciary duty already makes clear what conflicts are and how advisers must deal with them.

“We keep coming back to the fact that this is a framework that has worked for decades across many different new technologies, and it's not clear why there are features of AI that make the existing framework unworkable,” she said. “What is so unique about AI that you can't apply the fiduciary duty?”

As evidence, Klass cited existing regulations and guidance already governing advisers' use of AI, including their fiduciary duty, the 2017 staff guidance for robo-advisers and the marketing rule, among others.

Examiners are also scrutinizing firms' AI-related disclosure and marketing practices, as well as their compliance and conflicts policies and procedures. Speaking in her final week as deputy director of the IA/IC examination program in the SEC's Division of Examinations, Natasha Vij Greiner noted that many advisers are “getting it wrong” when it comes to AI-related disclosures. (Greiner will succeed Birdthistle as head of the Investment Management Division.)

Bernstein said that even if an SEC regulation focused narrowly on current generative AI technology, the IAA would want to see more analysis before a rule is proposed. Instead, she said the association could support guidance detailing the need for a principles-based risk governance framework.

“Our view is that if it's about conflicts, you don't need a rule,” she said. “If you think advisers need to better understand how to think about conflicts involving a particular frontier technology, consider issuing guidance.”

Birdthistle acknowledged that even if the commission withdrew or changed the rule, the underlying problem would remain. As an example, he cited the “conundrum” he faced after meeting with AI engineers about their products.

“I ask, 'How does it work?'” he said. “'Things go in, the 'box' does magic, things come out.' That is not a reassuring answer.”

But while some in the industry believed that disclosure could help address situations like this, Birdthistle had trouble imagining that disclosure alone could resolve the issue raised at that meeting.

“What are you disclosing? You can't disclose that, that the algorithm works in ways unknown to its own engineers,” he said. “That doesn't sound like meaningful disclosure.”


