This is a very important distinction. The age of information — that is, the period of computers, smartphones, the internet and GPS — gave us tools that amplify the power and reach of a trained operator. It vastly increased the power of any one coder, drone operator, ransomware thief, hacker, social media influencer or disinformation specialist. It made any small unit more powerful, but humans needed to have some basic knowledge to operate these digital tools. And human intent always directed them.
In the age of intelligence, artificial-intelligence agents built on large language models — like Anthropic’s Claude, Google’s Gemini and OpenAI’s ChatGPT — can be directed by humans with a single command and will then execute, and self-optimize, multistage cyberattacks on their own.
To put it differently, information-age tools vastly amplified trained operators within organizations, including terrorist organizations. Intelligence-age tools replace trained operators with vastly more intelligent, autonomous and skilled A.I. agents with more destructive reach at little cost.
These intelligence-age “capabilities that can super-empower individuals, that many thought were 18 months or two years away, are now here,” Mundie told me. “When the dual-use nature of these A.I. technologies becomes fully democratized — and that is where we are heading soon — they will present a material threat to all developed societies” by super-empowered actors “who historically never had any cards to play before at all.”
In other words, everybody with an A.I. chatbot/agent is potentially going to have cards. What could that look like? Check out a recent Times story by Gabriel J. X. Dance. It begins:
“One evening last summer, Dr. David Relman went cold at his laptop as an A.I. chatbot told him how to plan a massacre. A microbiologist and biosecurity expert at Stanford University, Dr. Relman had been hired by an artificial intelligence company to pressure-test its product before it was released to the public. That night in the scientist’s home office, the chatbot explained how to modify an infamous pathogen in a lab so that it would resist known treatments. Worse, the bot described in vivid detail how to release the superbug, identifying a security lapse in a large public transit system.”