
In the age of AI, reality can be rewritten overnight. One day you’re publicly supporting your favorite mayoral candidate; the next, an AI-generated quote states the opposite.
That is what happened to former New York City mayor Bill de Blasio. In the days leading up to the New York City mayoral race, a reporter from The Times of London emailed someone he believed to be de Blasio, asking for his thoughts on the policies of Zohran Mamdani, the Democratic candidate leading the race.
The response was unexpected: the respondent wrote, “In my view, the math doesn’t hold up under scrutiny, and the political hurdles are substantial.” After the quote was picked up by other outlets and on social media, the real de Blasio spoke out, saying the story was entirely fabricated and did not reflect his views.
The impersonator admitted to using ChatGPT to compose a response criticizing Mamdani’s tax plans, claiming they were unlikely to raise enough money to meet his goals.
The scenario, now resolved, raises another question: What do you do if you are catfished by AI, a deepfake, or someone online?
“We have a question here about how easy it might be, going forward, to fake a voice or fabricate a story and have a journalist or an editor be victimized that way—and the public be victimized,” the former New York City mayor tells Fortune.
For high-profile figures, the stakes of being cloned couldn’t be higher. I asked the real Bill de Blasio, who encountered the “surrealism” of being impersonated firsthand, how he handled the scenario, and what steps he believes are crucial going forward in the age of AI.
Respond rapidly and confirm identity
De Blasio said that because no journalist had reached out to him before the story ran, and he had no contacts at the publication, his best recourse at the time was to immediately respond to the post on X, saying it was false.
“Going online and demanding an apology and demanding it be taken down did have the effect of getting their attention,” he said.
Tools like OpenAI’s Sora and Google’s Veo 3 have made it easier to produce realistic AI-generated imagery and videos of things that never happened, including riots, crimes, and political misinformation, fueling false claims and fraud. Though Sora videos feature a moving watermark identifying them as AI creations, some experts say it can be edited out with some effort.
“All you can do is go online and deny what it is,” de Blasio said. “If someone puts up something on me robbing a store—and I have not robbed the store—rapid response, immediately say that’s a fake to the world, rather than try and get someone to address it.”
AI scams can happen in the workplace, too
Deepfakes are an obvious threat to public figures, but they can have ramifications in the workplace, too.
“In the workplace, scams don’t always look like scams,” said Steve Lenderman, Head of Fraud Prevention at HCM platform isolved.
“Fraudsters often target HR, payroll or finance employees by pretending to be executives or coworkers and using AI-generated voices or lookalike emails to request urgent payments or employee information. In fraud prevention, curiosity isn’t paranoia—it’s protection,” Lenderman tells Fortune.
Lenderman’s advice: Act fast, and document everything. Screenshots, links, and messages will be useful when you report the incident to your employer or IT team, who can contain the damage, reset passwords, lock down affected accounts, and enable multifactor authentication.

“The faster you act, the more likely you are to stop bad actors before they can cause serious harm. In these cases, transparency and speed are your best defenses,” he added.
The need for legal action
The experience of being impersonated led de Blasio to reflect on the need for stronger action around the safety risks of emerging technology. In 2023, he spoke at a Harvard conference on the lack of policy addressing AI regulation.
“The notion that somehow AI should be the exception to the rule and be the only technology that was ever not regulated is insane,” he said.
“If you portray someone committing a crime, that should be a crime—and no tech company should aid and abet the person who puts up that inappropriate and illegal content.”