Experts Warn New SEC AI Rule Contains a Fatal Flaw

Dec 18, 2025
Interview

In the rapidly evolving landscape of artificial intelligence, regulators are scrambling to keep pace. The US Securities and Exchange Commission’s new proposed rule on AI disclosure is the latest effort to bring transparency to a technology that is reshaping industries. To unpack what this means for corporate America, we sat down with Vernon Yai, a leading expert in data protection and governance. With a deep focus on risk management, he provides a critical perspective on how companies can navigate this new era of mandatory transparency, balancing regulatory demands with competitive realities.

Our conversation explored the practical challenges of this proposal, from the immense pressure it places on executive boards to formalize their AI strategies to the controversial decision to let companies define AI on their own terms. We delved into the risks of creating disclosures that either reveal too much proprietary information or devolve into meaningless boilerplate language. Furthermore, we discussed the near-impossible task of isolating AI’s true impact on a business and the looming threat of unsanctioned “shadow IT” AI tools that operate beyond the C-suite’s view.

The proposed SEC rule seems designed to push AI governance from the server room to the boardroom. For a C-suite team just starting this journey, what are the first practical steps in forming an AI governance committee, and how can they begin to define what makes an AI project “material” enough to warrant disclosure?

The first step is to accept that this can’t be just an IT or a legal problem; it has to be a cross-functional body. You need the Chief Information Security Officer, the Chief Legal Officer, and key business-line leaders at the table. It’s not about reviewing code; it’s about reviewing business impact. As for defining “materiality,” it’s a risk calculation. A good starting point, as Monica Washington Rothbaum pointed out, is to ask: does this AI system directly touch core operations that affect the company’s value? Think about hiring, customer service, or security. If an AI tool is making autonomous decisions in those areas, it’s almost certainly material. You should also look at the investment level: capital expenses and R&D expenditures are explicitly mentioned in the proposal. If you’re spending millions, that’s a clear signal to investors and regulators that you believe it’s material to your future success.
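
To make that screening concrete, here is a minimal sketch of how a governance committee might encode a first-pass materiality rubric. The record fields and the spend threshold are hypothetical illustrations, not figures from the proposal; a real committee would calibrate its own criteria.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical record for one AI deployment under review."""
    name: str
    touches_core_operations: bool    # hiring, customer service, security, etc.
    makes_autonomous_decisions: bool
    annual_spend_usd: float          # capex plus R&D attributable to the system

# Illustrative threshold only; each committee would set its own.
SPEND_THRESHOLD_USD = 1_000_000

def needs_disclosure_review(system: AISystem) -> bool:
    """Flag a system for full materiality review if it makes autonomous
    decisions in core operations, or if the investment alone is significant."""
    if system.touches_core_operations and system.makes_autonomous_decisions:
        return True
    return system.annual_spend_usd >= SPEND_THRESHOLD_USD

# Example: an autonomous resume screener is flagged despite modest spend.
screener = AISystem("resume-screener", True, True, 250_000)
print(needs_disclosure_review(screener))  # True
```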

The requirement to disclose decisions not to use AI is fascinating, as it could be interpreted as a competitive weakness. Could you walk us through how a company might frame such a disclosure to shareholders in a way that builds confidence rather than causing alarm?

This is where the narrative becomes crucial. A company must frame this not as a deficiency, but as a deliberate and responsible strategic choice. Imagine a bank considering a new generative AI for its customer service chatbots. After a review, they decide against it. The disclosure shouldn’t just say, “We are not using generative AI.” Instead, it should be framed around risk management. For instance: “After a thorough evaluation of current generative AI technologies for customer-facing roles, our AI Governance Committee concluded that the technology does not yet meet our rigorous standards for data privacy and factual accuracy. Our priority remains protecting customer data and trust. Therefore, we will continue to invest in our existing, proven platforms while actively monitoring the maturation of next-generation AI for future implementation.” This reframes the decision from “we are behind” to “we are prudent,” which is a message shareholders can appreciate.

Many experts are concerned that allowing companies to create their own definition of AI will result in “opportunistic word games” and “PR spin.” To create genuine clarity, what essential components should a company build into its definition of AI for its SEC filings?

To avoid this loophole, a robust definition needs to be grounded in operational reality, not marketing buzzwords. I would advise companies to include at least three core elements. First, specify the category of technology. Are we talking about predictive machine learning, natural language processing, or the kind of generative and agentic AI that has recently dominated the conversation? Second, describe the level of human oversight. Is it a tool that assists a human expert, or is it making fully autonomous decisions? This speaks directly to risk. Finally, anchor the definition to its business function. A definition that states, “We define AI as autonomous decision systems used in our hiring and customer service operations,” is infinitely more valuable to an investor than a vague statement about “leveraging intelligent systems.” This kind of specificity makes it much harder to redefine the term to suit the story of a given quarter, as Braden Perry warned.
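
One way to keep such a definition honest is to treat it as structured data rather than prose. This is a hypothetical sketch with invented type names, showing how the three elements could be captured so the filing language is generated from, and stays consistent with, the operational record:

```python
from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    """Level of human oversight, the second element discussed above."""
    ASSISTIVE = "assist a human expert"
    REVIEWED = "are reviewed by a human before acting"
    AUTONOMOUS = "act without human review"

@dataclass(frozen=True)
class AIDefinition:
    technology_category: str             # e.g. "predictive machine learning"
    oversight: Oversight
    business_functions: tuple[str, ...]  # where the systems actually run

    def disclosure_sentence(self) -> str:
        """Generate the filing language directly from the structured record."""
        funcs = " and ".join(self.business_functions)
        return (f"We define AI as {self.technology_category} systems that "
                f"{self.oversight.value}, used in our {funcs} operations.")

definition = AIDefinition("autonomous decision", Oversight.AUTONOMOUS,
                          ("hiring", "customer service"))
print(definition.disclosure_sentence())
```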

Braden Perry voiced strong skepticism that these filings will offer anything more than the boilerplate language we see in many cybersecurity reports. How can a company provide concrete, meaningful details about its AI strategy without handing its trade secrets over to competitors or running afoul of the SEC’s crackdown on “AI washing”?

The key is to focus on the “how,” not the “what.” You don’t need to disclose the proprietary code of your algorithm, but you should absolutely disclose the governance framework that surrounds it. This is how you provide substance without revealing the secret sauce. For instance, a company could state: “Our AI model for supply chain optimization is subject to a three-tiered review process, including a data ethics board, a legal compliance check, and continuous performance monitoring against pre-defined bias metrics. We conduct quarterly audits to validate its outputs against real-world results.” This is specific, it’s verifiable, and it directly counters “AI washing” by showing the SEC that your claims are backed by a real, structured process. It turns the disclosure from a marketing claim into a testament to your operational maturity.
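
To show what “pre-defined bias metrics” can look like in practice, here is a minimal sketch of one widely used screening heuristic, the four-fifths (disparate impact) rule from employment-selection guidance. The counts are illustrative, and a real audit program would track more than a single metric:

```python
def disparate_impact_ratio(selected_protected: int, total_protected: int,
                           selected_reference: int, total_reference: int) -> float:
    """Selection rate of the protected group divided by the reference
    group's rate; the four-fifths rule flags ratios below 0.8."""
    rate_protected = selected_protected / total_protected
    rate_reference = selected_reference / total_reference
    return rate_protected / rate_reference

# Quarterly audit check against the pre-defined threshold (illustrative data).
ratio = disparate_impact_ratio(30, 100, 45, 100)
if ratio < 0.8:
    print(f"ALERT: disparate impact ratio {ratio:.2f} is below 0.8")
else:
    print(f"OK: disparate impact ratio {ratio:.2f}")
```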

IAC member John Gulliver described isolating AI’s specific impact on hiring or customers as an “impossible task.” Compounding that, Rob Lee pointed out the challenge of unsanctioned “shadow IT” AI tools. What kind of process could a company realistically implement to even begin quantifying these impacts and getting its arms around rogue AI use?

While I sympathize with the view that it’s a “difficult guessing game,” it’s not impossible; it just requires discipline. To quantify impact, companies need to establish clear baselines before deploying an AI tool. For hiring, what is your time-to-hire or diversity metric today? Deploy the AI in a pilot program and measure the change against a control group. The data will tell the story. The “shadow IT” problem is thornier, but it goes to the core of modern data governance. You can’t track what you can’t see. Companies need to implement robust network monitoring and data loss prevention tools that can flag when employees send sensitive corporate data to unsanctioned, public AI platforms. This must be paired with a crystal-clear acceptable use policy and employee training. It’s not just a disclosure issue; it’s a fundamental security and governance challenge that the proposed rule rightly forces companies to confront.
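
As a rough illustration of that kind of monitoring, the sketch below scans a web-proxy log for traffic to unsanctioned AI services. The file path, column names, and domain blocklist are all assumptions for the example; commercial DLP platforms provide this detection natively:

```python
import csv

# Hypothetical blocklist; a real program would maintain this centrally
# and update it as new public AI services appear.
UNSANCTIONED_AI_DOMAINS = {"chat.example-ai.com", "api.public-llm.example"}

def flag_shadow_ai(proxy_log_path: str) -> list[dict]:
    """Scan a web-proxy log export (assumed here to be a CSV with 'user',
    'destination', and 'bytes_out' columns) for traffic to unsanctioned
    AI services."""
    flagged = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination"] in UNSANCTIONED_AI_DOMAINS:
                flagged.append(row)
    return flagged

# Each hit is a lead for the governance committee, not proof of wrongdoing.
for hit in flag_shadow_ai("proxy_log.csv"):
    print(f"{hit['user']} sent {hit['bytes_out']} bytes to {hit['destination']}")
```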

What is your forecast for corporate AI transparency over the next five years? Will rules like this lead to meaningful insight for investors, or will we see a wave of boilerplate filings and “get-out-of-jail-free cards” as some experts predict?

I believe we’ll see a two-phase evolution. The first one to two years will likely be dominated by cautious, boilerplate filings. Legal teams will advise clients to say as little as possible, leading to the kind of generic disclosures that critics fear. We’ll see a lot of those “get-out-of-jail-free cards” that Rob Lee mentioned. However, that phase won’t last. The SEC has already shown its teeth with “AI washing” enforcement, and that scrutiny will only intensify. As investors become more sophisticated in their understanding of AI, they will start demanding more. The market will begin to differentiate between companies that offer vague assurances and those that provide clear, governance-focused disclosures. Meaningful transparency will become a competitive advantage, forcing the laggards to catch up. The rules may start as a compliance headache, but they will ultimately catalyze a much-needed maturation in how corporations govern and communicate their use of this transformative technology.
