Ethical Hacking News
HackerOne has clarified its stance on generative AI after researchers raised concerns about their submissions being used to train the platform's models. The company says it does not use researcher submissions to train its AI agents and emphasizes the integrity of its data usage practices. As the use of AI in security continues to grow, transparency and accountability are crucial components of this emerging landscape.
HackerOne has updated its artificial intelligence (AI) policy in response to researcher concerns about how vulnerabilities discovered by security professionals are used. The company's AI-powered Pentest as a Service (PTaaS) offering, Agentic, launched last month and prompted questions about data usage: researchers worried that their submissions might be used to train the AI models, sparking debate over the ethics involved. HackerOne CEO Kara Sprague published a statement clarifying that researcher submissions are not used to train AI models and that customer data is protected. The episode highlights the importance of transparency about data usage in AI development and the need for organizations to communicate their policies clearly.
HackerOne, a prominent bug bounty platform, has recently made headlines for an update to its artificial intelligence (AI) policy. The news follows researcher concerns that their submissions were being used to train the company's models, sparking debate over the ethics of how vulnerabilities discovered by security researchers are used.
HackerOne launched its AI-powered Pentest as a Service (PTaaS) offering, known as Agentic, last month, touting its ability to deliver continuous security validation through autonomous agent execution combined with elite human expertise. The agents are trained and refined using proprietary exploit intelligence informed by years of testing real enterprise systems.
However, this development raised questions among researchers about the origin of the data used to train the agents. Some expressed their concerns publicly, including @YShahinzadeh, a former H1 hunter, who asked whether HackerOne had used his reports to train its AI agents. Another researcher, @AegisTrail, struck a cautionary note, warning that when white hats (security researchers) feel the legal system is rigged against them, some may turn to the "dark side" of malicious activity.
To address these concerns directly and unambiguously, HackerOne CEO Kara Sprague published a statement on LinkedIn. She asserted that HackerOne does not train generative AI models internally or through third-party providers using researcher submissions or customer confidential data. Furthermore, she stated that researcher submissions are not used to "train, fine-tune, or otherwise improve generative AI models." Additionally, third-party model providers are prohibited from retaining or utilizing researcher or customer data for their own model training purposes.
Hai, HackerOne's agentic AI system, is designed to accelerate outcomes such as validated reports, confirmed fixes, and paid rewards while preserving the integrity and confidentiality of researcher contributions. Sprague further assured researchers that Hai is meant to complement their work rather than replace it, suggesting a collaborative approach to enhancing security.
This clarification from HackerOne has sparked discussions around the ethics of using AI in security research and the importance of transparency regarding data usage. The company's stance underscores its commitment to maintaining high standards while embracing emerging technologies like AI to bolster security measures.
The incident highlights the complexities of balancing innovation with accountability, particularly when dealing with sensitive areas like AI development and data usage. As concerns about AI continue to grow, it is essential for organizations like HackerOne to clearly communicate their policies and practices to ensure trust among researchers and customers alike.
In conclusion, HackerOne's recent update to its AI policy marks a significant step in addressing researcher concerns about how vulnerabilities discovered by security professionals are used. The company's clarification provides assurance that its AI models are not trained on researcher submissions or confidential customer data, supporting a collaborative approach to improving security.
Related Information:
https://www.ethicalhackingnews.com/articles/HackerOne-Clarifies-AI-Policy-Amidst-Researcher-Concerns-Over-Exploiting-Vulnerabilities-ehn.shtml
https://go.theregister.com/feed/www.theregister.com/2026/02/18/hackerone_ai_policy/
Published: Thu Feb 19 04:16:12 2026 by llama3.2 3B Q4_K_M