
When one of the country’s largest financial institutions announced in early January that it would stop using external proxy advisory firms and instead rely on an internal AI system to guide how it votes on shareholder matters, the move was widely framed as an investor story. But its implications extend well beyond asset managers.
For corporate boards, the shift signals something more fundamental: governance is increasingly being interpreted not just by people, but by machines. And most boards have not yet fully reckoned with what that means.
Why Proxy Advisors Became So Powerful
Proxy advisory firms did not set out to become power brokers. They emerged to solve practical problems of scale and coordination.
As institutional investors came to own shares in thousands of companies, proxy voting expanded dramatically, covering everything from director elections and executive compensation to mergers and an array of shareholder proposals. Voting responsibly across that universe required time, expertise, and infrastructure that many firms did not have.
Proxy advisors filled that gap by aggregating data, analyzing disclosures, and offering voting recommendations. Over time, a small number of firms came to dominate the market. Their influence grew not because investors were required to follow them, but because alignment was efficient, defensible, and auditable.
Just as important, proxy advisors addressed a coordination problem that had left shareholders effectively voiceless. Their intellectual roots lie with activists such as Robert Monks, who believed dispersed ownership had allowed corporate power to become insulated from challenge. The aim was not to automate voting but to help shareholders act collectively, delivering uncomfortable truths to management that might otherwise never reach the top. Over time, however, scale, standardization, and efficiency crowded out that confrontation.
What began as a method to coordinate shareholder judgment increasingly became, in practice, a substitute for it.
Why the Model Is Changing
The forces that allowed proxy advisors to scale also exposed the tension between efficiency and judgment.
Standardized policies brought consistency, but often at the expense of context. Complex governance decisions, from CEO succession timing and strategic trade-offs to board refreshment, were increasingly reduced to binary outcomes. Political and regulatory scrutiny intensified. And asset managers began asking a fundamental question: if proxy voting is a core fiduciary responsibility, why is so much judgment outsourced?
The result has been a gradual reconfiguration. Proxy advisors are moving away from one-size-fits-all recommendations. Large investors are building internal stewardship capabilities. And now, artificial intelligence has entered the picture.
What AI Changes, and What It Doesn’t
AI promises what proxy advisors once did: scale, consistency, and speed. Systems are designed to process thousands of meetings, filings, and disclosures efficiently.
But AI does not eliminate judgment. It relocates it.
Judgment now lives upstream, in model design, training data, variable weighting, and override protocols. Those choices are no less consequential than a proxy advisor’s voting policy. They are simply less visible.
Where proxy advisors once aggregated shareholder voice to challenge managerial power, AI risks making that challenge quieter, cleaner, and harder to trace.
For boards, this changes the audience for governance disclosures. It is no longer only human analysts reading between the lines. Increasingly, it is algorithms reading literally, historically, and without context, unless boards provide that context themselves.
The Governance Questions Boards Haven’t Been Asking
This shift raises a set of questions many boards have not yet fully engaged.
How are we being assessed? AI systems can draw from filings, earnings calls, websites, media coverage, and other public sources. Governance signals now accumulate continuously, not just during proxy season.
Where could we be misread? Language that works for human readers (nuance, discretion, evolving commitments) can confuse machines. Ambiguity may be interpreted as inconsistency. Silence can be read as risk.
And when something goes wrong, who is accountable? There is no universal appeals process for AI-informed proxy votes. Responsibility may ultimately rest with the asset manager, but escalation paths may be opaque, informal, or slow, particularly for routine votes.
Boards should assume that if an algorithm misinterprets their governance, there may be no analyst to call and no clear way to correct the record before a vote is cast.
Consider This Scenario
A company’s board chair shares a name with a former executive at another firm who was involved in a governance controversy several years earlier. An AI system scanning public information associates the controversy with the wrong individual, quietly elevating perceived governance risk ahead of director elections.
At the same time, the board delays CEO succession by a year to preserve stability during a major acquisition. The decision is thoughtful and intentional, but the rationale is scattered across filings, earnings calls, and investor conversations. The AI system flags the delay as a governance weakness.
Days before the annual meeting, a third-party blog posts speculative criticism of board independence. The claims are unfounded but public. The AI system ingests the content before any human review occurs.
The board never sees the errors. There is no analyst to engage, only a voting outcome to react to after the fact.
None of this requires bad actors or malicious intent. It is simply what happens when scale, automation, and ambiguity intersect.
What Boards Can, and Cannot, Do
Boards cannot control how asset managers design their AI systems. Nor should they try to optimize disclosures for algorithms.
But boards can govern differently.
Some boards are already experimenting with clearer narrative disclosures, including more explicit explanations of governance philosophy, how trade-offs are made, and how judgment is exercised. Not because algorithms “care,” but because humans still design, supervise, and sometimes override these systems.
Clarity reduces the risk of misinterpretation. Consistency lowers the cost of human review. Context makes it easier for judgment to survive automation.
This does not mean boards should explain every decision publicly or eliminate discretion. Over-disclosure carries its own risks. But it does mean being deliberate about which judgments require context to be understood, and which cannot safely be left to inference.
Boards should also rethink engagement. Conversations with investors can no longer focus solely on policies and outcomes. They should include questions about process: where human judgment enters, what triggers review, how factual disputes are handled, and how quickly errors can be corrected.
This is not about mastering AI. It is about understanding where accountability lives when governance decisions are mediated by machines.
Governance in an Algorithmic Age
In an AI-assisted voting environment, some familiar assumptions no longer hold.
Silence is rarely neutral. Ambiguity is rarely benign. And consistency across time, platforms, and disclosures will become a governance asset.
The shift matters now because proxy voting outcomes are increasingly shaped before boards realize a conversation needs to happen.
The boards that navigate this transition best will not be those optimizing for scores or checklists. They will be the boards that document judgment, explain trade-offs, and tell a coherent governance story that holds up whether it is read by a human analyst, a proxy advisor, or a machine.
That is not a technology challenge.
It is a governance one.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.











