
AI-driven automation of labor isn’t just coming for legitimate businesses.
Hundreds of thousands of workers—hailing from over 50 countries—are currently trapped within Southeast Asia’s sprawling scam centers, according to estimates by the United Nations.
But humanitarian experts think these workers may soon be replaced by artificial intelligence.
In some scam centers, messages initiating contact between scammers and potential victims are already being crafted and sent by AI, says Ling Li, a researcher and co-author of Scam: Inside Southeast Asia’s Cybercrime Compounds.
“Time is ticking, because large language models may eventually replace even the subsequent steps of pig butchering scams,” she adds. (“Pig butchering” refers to a common scam in which criminals build relationships with their victims before defrauding them, much as a farmer fattens a pig before slaughter.)
Yet experts fear that automation could make it more difficult to bust crime syndicates, as foreign governments may lose interest in fighting the problem once their citizens are no longer at risk from human trafficking.
Governments throughout Asia and beyond have pressured Southeast Asian countries like Thailand, Cambodia and Myanmar to crack down on job scams, human trafficking and scam centers. This pressure often comes after a high-profile incident, such as when Chinese actor Wang Xing was kidnapped in Thailand in January, or when a Korean tourist was found murdered near a Cambodian scam compound.
This outrage over the scam industry has prompted countries including the U.S., U.K. and South Korea to call for action against the criminal syndicates. Mounting international pressure has pushed Cambodia and Myanmar to crack down on the gangs, leading to thousands of arrests.
Governments and NGOs may withdraw from the fight against scam centers if their citizens are less at risk from human trafficking, Li says. A shrinking human workforce would also make it more difficult for law enforcement agencies to find informants who can divulge inside information.
Yet Stephanie Baroud, a criminal intelligence analyst at Interpol, isn’t so sure that AI will lead to a drop in human trafficking. Instead, she expects criminal groups to put their well-established trafficking networks to other uses. “We cannot really say that AI will end trafficking. It will simply reshape what we are seeing,” Baroud says.
Tech being weaponized
Scam syndicates are turning to other private-sector products, like stablecoins and fintech apps, to facilitate crime, says Jacob Sims, an expert on transnational crime and human rights in Southeast Asia.
Traditional financial institutions like banks have a clear interest in eradicating scam activity from their platforms. “Every time someone gets scammed, that’s money leaving their platform and customers lose trust in them—so it’s a lose-lose for the banks,” Sims says.
Cryptocurrency exchanges, which are trying to clean up their reputations and legitimize themselves as responsible financial actors, also want nothing to do with scammers, he adds.
Social media and messaging apps, however, are a different story. Criminal activity drives an enormous amount of traffic on these platforms, Sims says, adding that a large number of both trafficking and scam victims have been recruited on Facebook.
“When it comes to fact-checking or content moderation, we’re seeing a big rollback in terms of the strictness of platform policies and guidelines,” says Hammerli Sriyai, a visiting fellow at the ISEAS-Yusof Ishak Institute in Singapore. She cites WhatsApp, which relies on users to report false information or content that is against community guidelines. “They don’t do their own sampling or vetting, but are shifting the responsibility to users.”
“We aggressively fight fraud and scams because people on our platforms don’t want this content and we don’t want it either,” said a Meta spokesperson in response to a request for comment. “As scam activity becomes more persistent and sophisticated, so do our efforts.”
The spokesperson added that since the start of 2025, Meta has detected and disrupted close to 8 million accounts on Facebook and Instagram associated with criminal scam centers. From January to June, the company banned over 6.8 million WhatsApp accounts linked to scam centers.
If social media platforms wanted to tackle fraud effectively, they would need tactics that also generate false positives, an outcome they prefer to avoid. “Tech firms don’t want to be more aggressive than they need to be [with regards to cracking down], as this may prevent some users from accessing the platform,” Sims says.
Scam centers are also exploiting internet service providers. An October investigation by AFP found that over 2,000 Starlink devices, terminals for the satellite internet service operated by Elon Musk’s SpaceX, were being used by scam centers in Myanmar.
This highlights how easily legitimate technology can be exploited by scam operations, underscoring the need for clearer licensing, proper user verification and cooperation with regulators, says Joanne Lin, a coordinator from the ISEAS-Yusof Ishak Institute.
SpaceX swiftly disabled the devices once the evidence was uncovered.
Sriyai notes it may be difficult to stop tech from being co-opted by criminals.
“Many commercial businesses don’t know that their products are being used by scam operations,” she says. “But their response is what matters. In other words, how would this business deal with their bad clients? I think that’s more important.”