Ethical Hacking News

Microsoft's AI Security Head Accidentally Reveals Walmart's Private AI Plans Amid Pro-Palestine Protest Disruptions


Microsoft's head of AI security accidentally revealed confidential information about Walmart's private AI plans during a presentation on best security practices for AI at the company's annual Build conference. The incident occurred amid disruptions by protesters denouncing Microsoft's ties with Israel, and it highlights the challenges faced by tech workers who engage in activism against their employers.

  • Microsoft's head of AI security, Neta Haiby, accidentally revealed confidential information about Walmart's private AI plans during a presentation.
  • Protests organized by "No Azure for Apartheid" disrupted the Microsoft Build conference, denouncing the company's ties with Israel and the use of its AI in military infrastructure.
  • A former Microsoft software engineer, Hossam Nasr, interrupted the session, accusing Microsoft of whitewashing its role in supporting genocide in Palestine.
  • The incident has sparked renewed attention on the role of technology companies in supporting military conflicts and the need for greater transparency and accountability in AI use.
  • Microsoft has faced criticism for its $133 million contract with Israel and its use of AI in military infrastructure, with some calling for greater regulation and oversight of AI use in military contexts.



    Microsoft's head of AI security, Neta Haiby, accidentally revealed confidential information about Walmart's private AI plans during a presentation on best security practices for AI at the company's annual Build conference. The incident occurred amid disruptions by protesters denouncing Microsoft's ties with Israel.

    The protests, organized by the group No Azure for Apartheid, aimed to draw attention to Microsoft's $133 million contract with Israel and the use of its AI in Israeli military infrastructure. The group also disrupted Microsoft's 50th anniversary celebrations and interrupted the company's head of CoreAI, Jay Parikh, during his keynote.

    Microsoft had tried to head off the protests by announcing that internal and external reviews found no evidence that its products have harmed people in Gaza. The situation nonetheless escalated when Hossam Nasr, a former Microsoft software engineer and one of the protest organizers, interrupted the session, addressing co-presenter Sarah Bird, Microsoft's head of responsible AI: "Sarah, you are whitewashing the crimes of Microsoft in Palestine. How dare you talk about responsible AI when Microsoft is fueling the genocide in Palestine!" Other protesters voiced support for the outburst before being removed from the conference.

    In the aftermath of the protests, Haiby accidentally switched to Microsoft Teams while sharing her screen, revealing messages about Walmart's plan to expand its use of AI. In one message, a Microsoft cloud solution architect wrote, "Walmart is ready to rock and roll with Entra Web and AI Gateway." Another quoted a Walmart AI engineer saying, "Microsoft is WAY ahead of Google with AI security. We are excited to go down this path with you!"

    The incident has renewed attention on the role of technology companies in supporting military conflicts and on the need for greater transparency and accountability in their use of AI. It has also highlighted the challenges faced by tech workers who engage in activism against their employers: both Nasr and another protester had previously been fired over earlier demonstrations.

    No Azure for Apartheid released a statement asserting that Microsoft "provides the technological backbone of Israel's genocidal war machine." In its rebuttal of what it calls Microsoft's "blatant lie," the group cites a +972 Magazine report that found the company has a footprint in all of Israel's major military infrastructure, as well as AP News reporting that Azure is used to compile information gathered through mass surveillance of Palestinians.

    In response to the incident, some have raised concerns about the potential risks of AI being used by companies with questionable human rights records. Others have highlighted the need for greater regulation and oversight of the use of AI in military contexts.

    As the situation unfolds, it remains to be seen how Microsoft will respond to the exposure of Walmart's private AI plans and to the scrutiny of its own role in supporting Israel's military efforts. What is clear is that the incident has drawn attention to the complex and often fraught relationship between technology companies, AI, and human rights.



    Related Information:
  • https://www.ethicalhackingnews.com/articles/Microsofts-AI-Security-Head-Accidentally-Reveals-Walmarts-Private-AI-Plans-Amid-Pro-Palestine-Protest-Disruptions-ehn.shtml

  • https://gizmodo.com/microsofts-head-of-ai-security-accidentally-reveals-walmarts-private-ai-plans-after-pro-palestine-protest-2000605389


  • Published: Wed May 21 12:37:41 2025 by llama3.2 3B Q4_K_M