

Ethical Hacking News

Data Loss Prevention for Generative AI: A New Frontier in Network Security



New research from Fidelis Network Detection and Response (NDR) highlights the growing threat of generative AI data breaches. As AI-powered platforms become increasingly prevalent in organizations, traditional DLP solutions often fail to address emerging threats. A new network-based data loss prevention solution is required to tackle these challenges effectively. Learn more about how Fidelis NDR can help you manage GenAI usage and protect your organization's sensitive information.

  • Fidelis Network Detection and Response (NDR) has developed a new solution to tackle the growing threat of generative AI data breaches.
  • Traditional DLP solutions often fail to address the emerging security challenges posed by AI-powered platforms like ChatGPT, Gemini, Copilot, and Claude.
  • Fidelis NDR introduces a network-based DLP solution that brings AI activity under control, focusing on visibility across the entire traffic path.
  • Data loss prevention for generative AI requires a shift in focus from endpoint and siloed channel monitoring to network-based monitoring.
  • Organizations must develop a comprehensive strategy to monitor and manage GenAI usage effectively.
  • Fidelis NDR offers two approaches: URL-based indicators with real-time alerts, and metadata-only monitoring for audit and low-noise environments.
  • The solution provides real-time notifications, comprehensive forensic analysis, and integration with incident response playbooks and SIEM/SOC tools.
  • It also reduces false positives, operational fatigue, and enables long-term trend analysis and audit or compliance reporting.



    Fidelis Network Detection and Response (NDR) has recently developed a new solution to tackle the growing threat of generative AI data breaches. As AI-powered platforms like ChatGPT, Gemini, Copilot, and Claude become increasingly prevalent in organizations, they also introduce new security challenges that traditional DLP solutions often fail to address. Sensitive information may be shared through chat prompts, files uploaded for AI-driven summarization, or browser plugins that bypass familiar security controls.

    Standard DLP products are not equipped to handle these emerging threats, as they typically focus on endpoint and siloed channel monitoring. In contrast, Fidelis NDR introduces a network-based data loss prevention solution that brings AI activity under control. This approach allows teams to monitor, enforce policies, and audit GenAI use as part of a broader data loss prevention strategy.

    Data loss prevention for generative AI requires a shift in focus from endpoint and siloed channel monitoring to visibility across the entire traffic path. Unlike earlier tools that rely on scanning emails or storage shares, NDR technologies like Fidelis identify threats as they traverse the network, analyzing traffic patterns even if the content is encrypted.
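
    As a concrete illustration of that metadata-level visibility, the sketch below classifies a network session as GenAI traffic using only the TLS SNI hostname, which remains readable even when the payload itself is encrypted. The Session fields and the GENAI_DOMAINS list are assumptions made for this example, not Fidelis data structures.

```python
# Minimal sketch (not the Fidelis implementation): classify sessions as GenAI
# traffic from connection metadata alone, e.g. the TLS SNI hostname, which
# stays visible even though the payload is encrypted.
from dataclasses import dataclass

# Hypothetical list of GenAI endpoints an organization might track.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
    "claude.ai",
}

@dataclass
class Session:
    src_ip: str
    dst_ip: str
    sni: str          # hostname from the TLS ClientHello
    dst_port: int

def is_genai_session(session: Session) -> bool:
    """Flag a session whose SNI matches a known GenAI platform."""
    host = session.sni.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS)

if __name__ == "__main__":
    s = Session("10.0.0.12", "104.18.32.47", "chatgpt.com", 443)
    print(is_genai_session(s))  # True
```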

    The critical concern is not just who created the data but when and how it leaves the organization's control, whether through direct uploads, conversational queries, or integrated AI features in business systems. Organizations must develop a comprehensive strategy to monitor and manage GenAI usage effectively.

    Fidelis NDR offers two complementary approaches for monitoring generative AI usage: URL-based indicators with real-time alerts, and metadata-only monitoring for audit and low-noise environments.

    The first approach involves defining indicators for specific GenAI platforms, such as ChatGPT. These rules can be applied to multiple services and tailored to relevant departments or user groups. Monitoring can run across web, email, and other sensors. When a user accesses a GenAI endpoint, Fidelis NDR generates an alert. If a DLP policy is triggered, the platform records a full packet capture for subsequent analysis. Web and mail sensors can automate actions, such as redirecting user traffic or isolating suspicious messages.
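
    A rough sketch of how URL-based indicators scoped to particular user groups might be expressed is shown below. The Indicator fields, rule names, and evaluate helper are hypothetical and do not reflect the actual Fidelis rule syntax; they simply illustrate the matching logic.

```python
# Illustrative sketch (assumed names, not the Fidelis rule syntax): URL-based
# indicators scoped to user groups, producing an alert record when a monitored
# GenAI endpoint is accessed.
from dataclasses import dataclass
import fnmatch

@dataclass
class Indicator:
    name: str
    url_patterns: list   # glob-style patterns for GenAI endpoints
    user_groups: set     # departments or groups the rule applies to
    capture_pcap: bool   # record full packets when a DLP policy fires

RULES = [
    Indicator("chatgpt-uploads",
              ["https://chatgpt.com/*", "https://chat.openai.com/*"],
              {"finance", "engineering"},
              capture_pcap=True),
]

def evaluate(url: str, user_group: str):
    """Return an alert dict for the first matching indicator, else None."""
    for rule in RULES:
        if user_group in rule.user_groups and any(
                fnmatch.fnmatch(url, p) for p in rule.url_patterns):
            return {
                "rule": rule.name,
                "url": url,
                "group": user_group,
                "action": "pcap" if rule.capture_pcap else "alert-only",
            }
    return None

print(evaluate("https://chatgpt.com/c/abc123", "finance"))
```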

    The advantages of this approach include real-time notifications that enable prompt security response, comprehensive forensic analysis as needed, and integration with incident response playbooks and SIEM or SOC tools.
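
    For the SIEM hand-off specifically, one generic pattern is to serialize alerts as CEF events and forward them over syslog, as sketched below. The vendor string, field names, and collector address are placeholders for this example rather than the Fidelis integration itself.

```python
# Hedged sketch of a SIEM hand-off (generic CEF-over-syslog, not a Fidelis API):
# serialize a GenAI DLP alert as a CEF event and send it via UDP syslog.
import socket
from datetime import datetime, timezone

def to_cef(alert: dict) -> str:
    """Format an alert as a minimal CEF record."""
    ext = f"src={alert['src_ip']} request={alert['url']} suser={alert['user']}"
    return (f"CEF:0|ExampleVendor|GenAI-DLP|1.0|{alert['rule']}|"
            f"GenAI endpoint accessed|5|{ext}")

def send_syslog(message: str, host: str = "127.0.0.1", port: int = 514):
    """Send one syslog datagram; 127.0.0.1 stands in for the SIEM collector."""
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    payload = f"<134>{ts} genai-dlp {message}"  # facility local0, severity info
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode(), (host, port))

alert = {"rule": "chatgpt-uploads", "src_ip": "10.0.0.12",
         "url": "https://chatgpt.com/c/abc123", "user": "jdoe"}
send_syslog(to_cef(alert))
```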

    The trade-off is that indicator rules must be kept up to date as AI endpoints and plugins change, and high GenAI usage may require alert tuning to avoid overwhelming analysts.

    In contrast, the second approach involves recording activity as metadata, creating a searchable audit trail with minimal disruption. Alerts are suppressed, and all relevant session metadata is retained. Sessions log source and destination IP, protocol, ports, device, and timestamps. Security teams can review all GenAI interactions historically by host, group, or time frame.
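
    A minimal sketch of such a metadata-only audit trail is shown below, using an in-memory SQLite table as a stand-in for the platform's metadata store; the schema and sample sessions are illustrative assumptions, not the Fidelis data model.

```python
# Minimal sketch of a metadata-only audit trail (illustrative schema): retain
# session records without content or alerts, then query them historically by
# host and time window.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE genai_sessions (
    ts TEXT, src_ip TEXT, dst_ip TEXT, protocol TEXT,
    dst_port INTEGER, device TEXT, service TEXT)""")

conn.executemany(
    "INSERT INTO genai_sessions VALUES (?,?,?,?,?,?,?)",
    [("2025-08-28T09:15:00Z", "10.0.0.12", "104.18.32.47", "tls", 443,
      "laptop-042", "chatgpt.com"),
     ("2025-08-28T14:02:00Z", "10.0.0.31", "142.250.74.78", "tls", 443,
      "laptop-107", "gemini.google.com")])

# Review all GenAI interactions for one host over a given time frame.
rows = conn.execute(
    """SELECT ts, service, dst_ip FROM genai_sessions
       WHERE src_ip = ? AND ts BETWEEN ? AND ?
       ORDER BY ts""",
    ("10.0.0.12", "2025-08-28T00:00:00Z", "2025-08-29T00:00:00Z")).fetchall()
print(rows)
```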

    The benefits of this approach include reduced false positives and operational fatigue for SOC teams, as well as enabling long-term trend analysis and audit or compliance reporting.

    Overall, data loss prevention for generative AI requires a comprehensive strategy that includes network-based monitoring, real-time alerts, and metadata-only recording. By adopting a solution like Fidelis NDR, organizations can effectively manage the risks associated with GenAI usage and maintain the security and integrity of their sensitive information.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/Data-Loss-Prevention-for-Generative-AI-A-New-Frontier-in-Network-Security-ehn.shtml

  • https://thehackernews.com/2025/08/can-your-security-stack-see-chatgpt-why.html

  • https://www.forcepoint.com/blog/insights/chatgpt-security-risks-best-practices


  • Published: Fri Aug 29 08:00:12 2025 by llama3.2 3B Q4_K_M

    © Ethical Hacking News. All rights reserved.
