
RunSybil, a cybersecurity startup whose AI agents automatically hack companies’ software to find security weaknesses, has secured $40 million in venture capital funding.
The round was led by Khosla Ventures, with participation from S32, the Anthology Fund from Anthropic and Menlo Ventures, Conviction and Elad Gil, along with angel investors including Nikesh Arora, Amit Agarwal, Jeff Dean, and other founders and leaders from companies including OpenAI, Palo Alto Networks, Stripe and Google.
The company did not disclose the valuation it achieved in the new funding round.
The company’s AI agent, Sybil, conducts continuous autonomous penetration tests against live applications, finding, exploiting and documenting real security vulnerabilities without humans in the loop. That’s different from other security tools currently making headlines, such as Claude Code Security, which analyzes an application’s source code for known vulnerabilities before the code is deployed.
RunSybil instead tests software that is already running, probing live systems the way a hacker would: exploring the environment, chaining vulnerabilities together and testing authentication boundaries to find paths to sensitive data.
Automating ‘ethical hacking’
Companies have long relied on a mix of penetration tests—where outside security experts, or “ethical hackers,” try to break into their systems; bug bounty programs that reward independent hackers for reporting flaws; and internal “red teams” that simulate real cyberattacks. RunSybil says its AI system can automate much of that work, continuously probing applications for vulnerabilities as new code is deployed.
RunSybil argues this kind of automation is becoming necessary as AI reshapes how companies operate. Procurement, legal, finance, engineering and operations are all being rebuilt with AI—including the growing use of AI agents. Yet security testing is still often treated as a discrete, scheduled event managed by a separate team on its own timeline. That mismatch can be especially challenging for highly regulated industries such as finance, insurance and health care, which face strict legal and audit requirements around cybersecurity.
RunSybil was co-founded in 2023 by Ari Herbert-Voss, who joined OpenAI as its first security research hire in 2019, and Vlad Ionescu, who previously led offensive security red teams at Meta. Together, they say they represent a rare intersection: people who understand how to build frontier AI systems and how to hack into complex software.
“We check every box that needs to be checked—for auditors, regulators and compliance teams,” Herbert-Voss said. But the real work, he said, is transforming where, when and how customers discover and fix security issues: “Not as a project, but as a permanent capability embedded in how they build.”
‘On the edge’ of the AI security frontier
Vinod Khosla, who made an early bet on OpenAI in 2019 and often invests in companies he considers to be on the technological frontier, told Fortune that “what it takes to add security and penetration testing to the AI world is definitely frontier—RunSybil is on the edge.” There is currently little competition in this part of the offensive security market, he said, though security incumbents such as Palo Alto Networks may eventually move into the space.
For now, “nobody’s really knowledgeable about it except individuals like [Herbert-Voss],” he said, adding that he has long been concerned about AI’s cyber capabilities falling into the hands of adversaries such as China. “We invest in founders who tackle large, unsolved problems with technically ambitious solutions,” he added. “[Herbert-Voss and Ionescu] are building exactly the kind of platform security teams will need as software complexity and AI-driven development accelerate.”
Herbert-Voss has long been steeped in both hacking and AI. Growing up in a mostly Mormon community in Utah, he said he was drawn to the online hacker scene in middle and high school but pivoted away after friends “started getting arrested.” While pursuing a Ph.D. at Harvard University studying machine learning and ways to make algorithms more efficient, he first heard about OpenAI.
He dropped out of Harvard, he said, after becoming convinced that the rapid scaling of AI models—training larger systems with more data and computing power—would unlock powerful new capabilities.
Evolving cyber capabilities with LLMs
“Once OpenAI dropped GPT-2, I said wow, this changes everything about the economics of what it would take to run a cyber campaign,” he explained. He sent a couple of hacker demos to OpenAI CEO Sam Altman and Jack Clark, then head of policy at OpenAI, who went on to co-found Anthropic. Both expressed concern about the potential misuse of LLMs and asked Herbert-Voss to come on board to do security research.
But by 2022, Herbert-Voss said, he had begun to see how quickly offensive cyber capabilities could evolve once powerful language models became widely available, including to malicious actors, and how dramatically those advances could expand cyber threats. That led to his decision to leave OpenAI and start RunSybil as a research project.
RunSybil currently works with startups including Cursor, Turbopuffer, Notion, Baseten, and Thinking Machines Lab, as well as what the company says are major financial institutions and Fortune 500 companies. (The company declined to name any of those Fortune 500 or financial customers.) Herbert-Voss said customers have already reported finding critical vulnerabilities that had gone undetected by traditional methods.