Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

AI-Driven Scams: The Rise of Deepfake Models Used to Dupe Victims Out of Their Money



AI-driven scams are using deepfake "models" to dupe victims out of their money. Job postings for AI models on Telegram promise high salaries but demand excessive working hours, little free time, and relentless schedules, and their language closely mirrors that of known scam operations. Being able to recognize these tactics is the first step in avoiding them.

  • Dozens of job postings for "AI models" and "real face models" have been found on Telegram, promising high salaries and remote work.
  • The applications demand personal information, including videos, photos, and text, and carry red flags such as excessive working hours and coercive language.
  • Many applicants are young women from countries including Turkey, Russia, Ukraine, Belarus, and multiple Asian nations.
  • The postings echo scamming tactics, from deepfake video calls to references to cryptocurrency investments and gold trading.
  • Some listings state that the company will retain applicants' passports for visa and work permit management, a hallmark of exploitation and coercion.
  • Even models knowingly recruited into these roles face harsh treatment from bosses and may have their faces used to manipulate victims through deepfake technology.



  • In recent months, a disturbing trend has emerged on Telegram, the popular messaging platform: WIRED has reviewed dozens of job postings for "AI models" and "real face models" promising high salaries, from $7,000 per month to tens of thousands of dollars, for remote work in Southeast Asia. The reality behind these listings is far more sinister.

    The job applications require individuals to send short videos introducing themselves, text about their experience and expectations, and photographs of themselves. Some applicants are also asked to include their marital status and "vaccination" status, raising concerns about the potential for exploitation and coercion. Despite the promise of high pay, many of these applicants are young women from various countries, including Turkey, Russia, Ukraine, Belarus, and multiple Asian nations.

    These job postings often use language closely aligned with scams, including frequent mentions of "clients," a term used instead of "victims," as well as references to cryptocurrency investments or gold trading. The ads also frequently require applicants to have Chinese language skills, which is unusual for legitimate jobs in the region. Furthermore, many of these job postings are located in known scamming sites in Cambodia, and some even claim that the company will retain the applicant's passport for visa and work permit management.

    Hieu Minh Ngo, a cybercrime investigator at the Vietnamese scam-fighting nonprofit ChongLuaDao, has identified around two dozen channels on Telegram that have posted job listings for AI models. Ngo believes that these job postings are likely used to dupe victims out of their money, with deepfake video calls and models who have their faces swapped being used to manipulate potential scam victims.

    One woman, Angel, from Uzbekistan, applied for an AI model role in Cambodia, claiming a year's experience as an AI model. Her application included a selfie-style video in which she talked up her language skills, saying she could speak fluent English along with good Chinese, Russian, and Turkish. Those skills, however, were likely destined for use in elaborate "pig-butchering" scams targeting Americans.

    The recruitment process for these AI models is itself riddled with red flags. The postings promise high salaries but demand long hours spent sending daily photos, making video and voice calls, and producing audio and video messages, leaving little free time amid relentless schedules.

    Ling Li, the cofounder of the nonprofit EOS collective, which works with victims of the scam industry, warns that even though some models may be recruited into these roles and receive more freedoms than trafficked victims, they still face harsh treatment from bosses. "One European victim told us that he saw some Italian models in his compound, but he cannot tell [if] they are [there] willingly or not because they were beaten in front of him," Li says.

    WIRED's investigation found that the vast majority of model-job ads and applications on Telegram do not explicitly mention scamming work, but they include a host of red flags pointing to it. Frank McKenna, the chief strategist at anti-fraud software firm Point Predictive, has closely tracked these "AI models" and notes that some posts are more explicit, with one listing the "job market" an applicant was entering as "love scam."

    To see how these AI models operate in practice, McKenna sat in on a video call between one of these scammers and his mother. During the call, he noticed that the young woman on camera appeared to be using an AI filter on her face. The same pattern has surfaced in other interactions with AI models, making clear that deepfake technology is being used to manipulate potential scam victims.

    The rise of these AI models has significant implications for cybersecurity and online safety. As cybercriminals continue to adopt AI and fold face-swapping into their online scamming, awareness of these tactics is essential to avoid falling victim.

    In conclusion, the job postings for AI models on Telegram have raised serious concerns about exploitation, coercion, and deepfake technology being used to manipulate potential scam victims. It is essential to be vigilant when encountering such job postings and report any suspicious activity to the relevant authorities.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/AI-Driven-Scams-The-Rise-of-Deepfake-Models-Used-to-Dupe-Victims-Out-of-Their-Money-ehn.shtml

  • https://www.wired.com/story/models-are-applying-to-be-the-face-of-ai-scams/

  • https://itmagazine.com/2026/03/16/100-video-calls-a-day-models-targeted-for-ai-scam-facades/


  • Published: Mon Mar 16 09:05:11 2026 by llama3.2 3B Q4_K_M
    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us