
Amazon’s AI gamble: Can its new Nova models compete with OpenAI, Google, and Anthropic?

After nearly two years of speculation about when—or even if—Amazon would launch its own competitive family of AI models to challenge OpenAI, Google, and Anthropic, the company finally delivered a mic drop moment at its AWS re:Invent conference. Amazon announced a new family of AI models for text, images, and video, called Nova, that it claimed have “state-of-the-art intelligence” across many tasks, are 75% cheaper than top-performing industry competitors, and are already powering 1,000 internal Amazon applications.

The question on many Amazon-watchers’ minds, however, was: Why? After all, Amazon has invested $8 billion in Anthropic, including an additional $4 billion announced just last week. As part of the deal, Anthropic has also committed to using Amazon’s AWS cloud and Amazon’s custom AI computer chip, called Trainium, to train and run its models.

Amazon wants control over its own AI destiny

But Amazon clearly has no intention of relying solely on external partners when it comes to its AI strategy. It does not want to be beholden to Anthropic or any other third party. It wants to drive the cost of its AI offerings to cloud customers way down—a low-cost strategy that Amazon’s AWS cloud computing service has long pursued. Doing so would be harder if it were using only Anthropic’s models. Finally, the company says its customers wanted capabilities, such as video, that Anthropic does not currently offer.

The Nova releases are part of a larger plan, perhaps a master plan, for Amazon to blaze its own path toward AI dominance. That plan also includes building what the company says will be the world’s largest AI supercomputer, called Project Rainier, which will harness hundreds of thousands of Amazon’s Trainium chips working in unison.

In a keynote speech yesterday at re:Invent, Amazon CEO Andy Jassy cited three lessons about how Amazon is developing its AI strategy. One, he said, is that as organizations scale their generative AI applications, “the cost of compute really matters”—so a cheaper, commoditized model becomes increasingly important.

Next, Jassy said AI success won’t come simply from a capable model, but from one that also tackles latency—the time it takes to get the information or results you need from a model—as well as user experience and infrastructure challenges. In addition, he insisted that Amazon wants to give customers, both internal and external, a diversity of models to choose from.

“We have a lot of our internal builders using [Anthropic’s] Claude,” he said. “But, they’re also using Llama models. They’re also using Mistral models, and they’re also using some of our own models, and they’re also using home-grown models themselves.” 

This was surprising at first, he said, but added that “we keep learning the same lesson over and over and over again, which is that there is never going to be one tool to rule the world.” 

This appears to be the crux of Amazon’s master AI plan: Ben Thompson, a business and tech analyst and author of Stratechery, wrote yesterday that AWS’s AI strategy, as Jassy described it, “looks a lot like the AWS strategy generally.” Just as AWS offers plenty of choice in processing or databases, AWS will offer AI model choice on its Bedrock service. And that will include Amazon’s own Nova models, which just so happen to be the likely cheapest option for third-party developers. Amazon is betting that AI will become a commodity, he said: “AWS’s bet is that AI will be important enough that it won’t, in the end, be special at all, which is very much Amazon’s sweet spot.”

Amazon needs to strike a balance between cost and performance

However, some argue that lower-cost models aren’t the key to building reliable AI applications. Instead, they maintain that performance takes precedence over cost when it comes to creating effective, high-quality solutions. It’s an open question whether Amazon’s Nova models, which it claims are “as good or better” than rival AI software across many, but not all, benchmark tests, will be seen as good enough by developers to persuade them to make the jump.

This might be the balance Amazon is aiming for, but it’s unclear whether the company has struck the right one. I spoke yesterday to Rohit Prasad, Amazon’s SVP and head scientist for AGI (artificial general intelligence), who told me that the name Nova was purposeful—signaling a new and very different generation of AI models of “exceptional quality.”

When I asked why Amazon did not turn to Anthropic to build new models for it, Prasad pointed to Amazon’s own “urgent” internal customer needs such as generating videos—something he said Anthropic does not offer, to the best of his knowledge. “We have our Prime Video team that wants to recap seasons, and they can do it with the video understanding capability of the model,” he said. “Amazon ads needed models that can generate videos.” 

Prasad would not comment on Amazon’s long-term roadmap when it comes to its AI models, but he did say that there will be “more paradigm changes to come,” including more capable models. In the meantime, he said that the Nova models are available to all internal Amazon teams, including the one working on a new generative AI version of Amazon’s Alexa digital assistant (which, as I reported back in June, has been a long, less-than-successful slog).

Amazon wants to give its customers choice among models from a variety of different vendors, he emphasized. The question is whether the same strategy that worked so well for Amazon’s AWS in the past—offering low cost, product choice, and flexibility—will pay off again in this new era of AI.

