Opinion | Can an A.I. Company Ever Be Good?

Over three decades of watching the tech industry, as big companies grew from tiny teams into global powers, I’ve observed the same pattern: Ethics don’t scale up. Tech companies like to start with a mission. Google wanted to organize the world’s information; Microsoft wanted to put a computer on every desktop; Twitter wanted to give everyone a platform to publish their thoughts. These are good ideas — the stuff of TED Talks. But users show up by the millions, with their own beliefs and ideas. As a tech founder, you end up putting enormous work into making users behave (and stopping them from breaking the law). Lawsuits pour in claiming you did wrong, some simply because you’re a convenient target.

All the while, money keeps gushing in. You start out transparent, sharing your journey, but then before an initial public offering of shares, you must honor the S.E.C.-mandated quiet period and restrict promotional communications. After that, the transparency never quite returns. The market demands a rising stock price. Your company still makes a lot of software, but a huge amount of time goes to tax strategy and compliance.

At that scale, people start to blur together, and human users can become aggregate pools of statistics and growth vectors that go up and down — a mulch into which you plant your products.

The entire culture of American technology is built around two terms: disruption and, of course, scale. But ethics are constraints on disruption and scale. Truly ethics-bound organizations — the U.S. justice system, the American Medical Association, the Catholic priesthood — have hard scaling limits. Their rules run deep, and their requirements to serve are so onerous that only a few people can do the job. Punishments for transgressors include losing their licenses, being defrocked and being disbarred. Software industry people might have good degrees and are often good people, but they are making it up as they go along. They take no oath, are inconsistently certified and can only be fired, not exiled from the trade.

OpenAI set out to be inherently good — a dot-org. But it stumbled into a seam of pure digital gold in the form of large language models. To develop that technology further, it has made a painful, awkward transition to being a dot-com. (OpenAI says the for-profit arm continues to be overseen by the original nonprofit entity.) The subsequent level of drama has been difficult to behold. A few years ago, Sam Altman, OpenAI’s chief executive, publicly called for industry regulation, and he still does, but OpenAI has also lobbied against it — for example, supporting an Illinois bill that, if it becomes law, would limit the liability of A.I. companies in mass deaths.