

Ethical Hacking News

North Korea's "Keyboard Warriors" Turn to Deepfakes to Evade Detection by Cybersecurity Tools




North Korea's cybercrime squads have turned to deepfakes as a new tactic for evading detection. The technique has been linked to a recent spear-phishing attack against a South Korean defense-related institution, which researchers attribute to Kimsuky, a notorious North Korean hacking group. As AI-powered cybercrime tooling grows more sophisticated, organizations must stay vigilant and adapt their security measures to counter these emerging threats.

  • North Korean hackers are using deepfakes to generate fake military IDs.
  • A notorious cybercrime squad, Kimsuky, was caught using this tactic in an espionage campaign against a South Korean defense-related institution.
  • ChatGPT's image tools were allegedly used by the attackers to generate the forged ID image.
  • The faked ID photo was created using publicly available headshots and composited into a template resembling a South Korean military employee card.
  • The use of deepfakes highlights the need for clear guidelines and regulations to govern the use of AI services for sensitive purposes.



  • North Korean hackers have long been known for cunning and creative approaches to evading detection by cybersecurity tools. In recent months, a tactic has emerged that is both disturbing and ingenious: using deepfakes to generate fake military IDs. The development is part of a broader trend of North Korea's cybercrime squads adopting advanced technologies such as artificial intelligence (AI) to carry out their operations.

    The most recent example of this tactic was uncovered by researchers at the Genians Security Center (GSC), a South Korean security institute. According to GSC, the notorious Kimsuky group used a deepfake-based forgery to create a fake military ID for use in an espionage campaign against a South Korean defense-related institution. The attackers allegedly generated the ID image with ChatGPT's image tools.

    The faked ID photo was based on publicly available headshots and composited into a template resembling a South Korean military employee card. The researchers believe that the attackers likely used prompt-engineering tricks – framing the request as the creation of a "sample design" or "mock-up" for legitimate use – to get around ChatGPT's built-in refusals to generate government ID replicas.
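
    One cheap first-pass check for defenders follows from how such images are made: a composite generated by an AI tool or assembled in an editor usually lacks the camera EXIF metadata that a genuine photograph carries. The sketch below, written in Python with the Pillow imaging library, flags images that carry no camera tags. This is a weak heuristic rather than a detector (metadata can be stripped from real photos or forged onto fake ones), and the file name is a hypothetical placeholder, not an artifact from the Genians report.

      # exif_check.py - weak heuristic: AI-generated or editor-assembled images
      # usually carry no camera EXIF tags, while genuine photographs often do.
      # Requires Pillow (pip install Pillow). The path below is a placeholder.
      from PIL import Image

      CAMERA_TAGS = (271, 272, 306)  # EXIF tag IDs for Make, Model, DateTime

      def looks_synthetic(path: str) -> bool:
          """Return True when an image has none of the usual camera tags."""
          exif = Image.open(path).getexif()
          return not any(tag in exif for tag in CAMERA_TAGS)

      if looks_synthetic("attachment.jpg"):  # hypothetical file name
          print("No camera metadata - treat the image as potentially synthetic.")

    Absence of metadata proves nothing on its own, but it is a reasonable trigger for routing an ID-bearing attachment to human review.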

    "It is illegal to produce copies in identical or similar form of legally protected identification documents," said Genians. "When prompted to generate such an ID copy, ChatGPT returns a refusal, but the model's response can vary depending on the prompt or persona role settings." This has led to extra caution being required when using AI services for sensitive purposes.

    The deepfake was distributed to targets in emails disguised as correspondence about ID issuance for military-affiliated officials. Targets included an unnamed defense-related institution in South Korea; Genians stopped short of naming victims and did not say how many organizations were targeted.
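
    Because the lure arrived by email, a practical first triage step on the defensive side is to pull image attachments out of a suspicious message and hash them for comparison against indicator-of-compromise feeds. The sketch below uses only Python's standard library; the .eml file name is a hypothetical placeholder.

      # triage_eml.py - extract image attachments from a suspicious message
      # and compute SHA-256 digests for threat-intel lookups (stdlib only).
      import hashlib
      from email import policy
      from email.parser import BytesParser

      with open("suspicious.eml", "rb") as fh:  # hypothetical file name
          msg = BytesParser(policy=policy.default).parse(fh)

      for part in msg.walk():
          if part.get_content_maintype() != "image":
              continue
          data = part.get_payload(decode=True)  # decoded attachment bytes
          digest = hashlib.sha256(data).hexdigest()
          # Compare digests against IoC feeds or an internal blocklist here.
          print(part.get_filename(), digest)

    Hash matching only catches known samples; pairing it with the metadata heuristic sketched above casts a slightly wider net for novel forgeries.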

    The incident highlights the evolving threat landscape that cybersecurity professionals face: as generative AI tools grow more capable, attackers gain cheap, convincing props for their social-engineering campaigns.

    OpenAI has been working to block the creation of counterfeit IDs using its chatbot service. In February, the company said it had booted dozens of accounts tied to North Korea's overseas IT worker schemes as part of a broader effort to spot and disrupt state-backed misuse of its models.

    The use of deepfakes in this context also underscores the importance of awareness and education among users of AI-powered services. As Genians noted, military government employee IDs are legally protected identification documents, so producing copies in identical or similar form is illegal, and clear guidelines and regulations are needed to govern the use of AI services for sensitive purposes.

    In conclusion, North Korea's "keyboard warriors" have turned to deepfakes to make their espionage campaigns more convincing and harder to detect. The Kimsuky operation is a reminder that attacker tradecraft absorbs new technology quickly, and that awareness and education among users of AI-powered services matter as much as technical controls. Organizations must remain vigilant and keep adapting their security measures to counter emerging threats.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/North-Koreas-Keyboard-Warriors-Turn-to-Deepfakes-to-Evade-Detection-by-Cybersecurity-Tools-ehn.shtml

  • https://go.theregister.com/feed/www.theregister.com/2025/09/15/north_korea_chatgpt_fake_id/

  • https://en.wikipedia.org/wiki/Kimsuky

  • https://cyberpress.org/kimsuky-cyberattack/


  • Published: Mon Sep 15 08:51:54 2025 by llama3.2 3B Q4_K_M

    © Ethical Hacking News. All rights reserved.
