
In retrospect, artificial intelligence was always going to be as much a capital markets story as a technological one. Once narratives became as important as capabilities, concerns about so-called “AI washing” were inevitable. Little more than a year after the public release of ChatGPT, regulators began sounding the alarm. In March 2024, the U.S. Securities and Exchange Commission brought charges against two investment advisory firms — Delphia (USA) Inc. and Global Predictions Inc. — over statements about their use of AI in investment advisory services. Regulators alleged that the firms promoted AI-driven investing capabilities they could not substantiate, including one firm’s claim that it was “the first regulated AI financial advisor.”
The AI-washing cycle isn’t over. Of the 51 AI-related securities class actions filed in the last five years, a significant majority included allegations that companies overstated or misrepresented their artificial intelligence capabilities, according to securities litigation data compiled by the consulting firm Secretariat.
But the more notable trend today is that many disputes no longer hinge on whether AI exists at all.
Some of the earliest AI-washing cases resembled traditional fraud allegations: critics argued that the technology being marketed simply did not exist. Today’s disputes increasingly turn on a more nuanced question: Does the AI meaningfully change the economics of the business?
This distinction matters. A company may indeed deploy machine learning models or automated analytics while investors question whether those systems materially improve margins, increase revenue, or create defensible competitive advantages.
Despite the clear incentives to boast, companies must be disciplined and precise in describing AI capabilities. Claims about artificial intelligence must be technically accurate, operationally supportable, and consistent with the company’s financial results.
The consequences for not being precise can be significant. Companies that overstate their capabilities may face regulatory investigations, securities litigation, reputational damage, and valuation pressure.
Recent market episodes illustrate how quickly these narratives can collide with investor scrutiny. The data engineering firm Innodata, Inc. offers one example. The Motley Fool website recently called the company a “hidden gem in booming AI market.” But in early 2024, a short seller accused it of exaggerating the role of artificial intelligence in its business model, leading to a class action lawsuit and a 30% drop in its share price. While the company clearly operates in the AI ecosystem, it has had to defend its disclosures.
Investors themselves also face risks in a narrative-driven environment. Private equity firms, for example, are currently operating in a deal market characterized by fewer transactions and intense competition for assets. In such conditions, the pressure to deploy capital and maintain relevance with limited partners can create incentives to accept ambitious technological narratives with less rigorous diligence than would normally be applied.
Artificial intelligence claims can be particularly difficult to verify during compressed deal timelines. Evaluating the quality of machine learning models, data infrastructure, and deployment capabilities often requires specialized technical expertise. Without careful scrutiny, investors risk paying premium valuations for technological capabilities that are still experimental, limited in scope, or economically immaterial.
The current cycle of AI claims resembles the rapid rise of environmental, social, and governance (ESG) investing. That era produced a wave of ambitious corporate sustainability narratives, followed by increasing regulatory and litigation scrutiny over so-called “greenwashing.”
The lesson from ESG is instructive. Even when companies genuinely believe in the long-term potential of their strategies, vague or inflated narratives can create legal exposure. When disclosures outpace verifiable operational reality, they invite scrutiny from regulators, investors, and short sellers alike.
Artificial intelligence is now in a similar phase.
History also teaches us that periods of technological enthusiasm are often followed by tighter disclosure standards. The late-1990s dot-com boom offers a clear precedent. At the time, appending “.com” to a company’s name could result in immediate valuation spikes. Business models were sometimes loosely defined, and disclosure practices did not always keep pace with investor excitement surrounding the emerging internet economy.
Eventually, of course, the bubble burst. In the aftermath, amid the accounting scandals at Enron and WorldCom, Congress enacted the Sarbanes–Oxley Act of 2002, which dramatically strengthened corporate disclosure requirements and executive accountability. Narrative-driven valuations that once fueled investor excitement became sources of legal risk if the underlying disclosures proved inaccurate or misleading.
Yet the broader lesson of the dot-com era is not that technological enthusiasm was misplaced. Many companies born during that period ultimately became some of the most influential firms in the global economy. What changed was not the trajectory of innovation, but the standards governing how companies communicated with investors.
Artificial intelligence is likely to follow a similar trajectory. Today’s market rewards ambitious AI narratives, and the boundaries of disclosure are still evolving. But if history is any guide, greater regulatory scrutiny and more precise disclosure expectations are likely to follow. Companies need to communicate innovation with sufficient clarity and discipline to avoid turning their words into legal risk.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.