When Perplexity AI’s Aravind Srinivas announced in September that students could use the company’s $200 Comet browser for free, the pitch was clear: a study buddy that helps you “find answers faster than ever before.” But just weeks later, Srinivas is having to remind students not to let that study buddy do all the work.
The warning came after a post on X showed a developer using Comet to complete an entire Coursera assignment in seconds. In the 16-second clip, Comet breezes through what appears to be a 45-minute web design assignment with the prompt “Complete the assignment.” The user proudly tagged both Perplexity and Srinivas, writing, “Just completed my Coursera course.”
The 31-year-old CEO responded to the video with just four words: “Absolutely don’t do this.”
Srinivas’ terse public reprimand comes as AI seeps deeper into classrooms and tech firms aggressively market their products to students under the banner of “learning support.” Perplexity’s free-student offer joins a wave of similar initiatives from companies like Google, Microsoft, and Anthropic—all touting their bots as tutors, study buddies, or productivity boosters.
But educators say those tools are increasingly being used to bypass learning altogether. Many students are simply using AI to generate essays, ace quizzes, or automate full courses, undermining the very skills those platforms claim to enhance.
Comet, in particular, is well suited to doing students’ work for them, because it’s not your average chatbot. Built by Perplexity as what the company calls an “agentic” AI browser, it’s designed to do more than just spit out text: it can interpret your instructions, take actions on your behalf, click, fill forms, and navigate complex workflows. That level of autonomy enables Comet to churn through assignments in seconds, but it also introduces new risks when that autonomy is pointed at the wrong task.
Security audits from Brave and Guardio have flagged serious vulnerabilities. In some cases, Comet can execute hidden instructions embedded in webpage content—essentially allowing “prompt injection” attacks that override its intended behavior. One especially alarming case, dubbed CometJacking by researchers at LayerX, lets a crafted URL hijack the browser and cause it to exfiltrate private data like emails and calendar entries.
In audits by Guardio, Comet was tricked into making fraudulent purchases from fake sites—completing entire checkout flows without human verification. It also mishandled phishing scenarios: when presented with malicious links disguised as legitimate requests, the AI processed them as valid tasks.
At the same time, Comet’s capabilities are precisely what make it so useful in academic cheating scenarios. It’s designed to act, not just advise, which means “studying support” can shift into “doing the work for you.” That shift is evident in the Coursera video, and it reframes the debate about AI in education: it’s no longer just about content generation (essays or summaries), but about automation of the entire task.