Ethical Hacking News
A high-severity security flaw in LMDeploy was exploited by attackers less than 13 hours after its public disclosure. The vulnerability, tracked as CVE-2026-33626 (CVSS score: 7.5), is a Server-Side Request Forgery (SSRF) flaw that could be exploited to access sensitive data. Learn more about this developing threat and how it can impact your organization's security posture.
The cybersecurity landscape has been hit with a new vulnerability that showcases the growing threat of shadow AI attacks. According to recent reports, a high-severity security flaw in LMDeploy, an open-source toolkit for compressing, deploying, and serving large language models (LLMs), has come under active exploitation in the wild less than 13 hours after its public disclosure.
The vulnerability, tracked as CVE-2026-33626 (CVSS score: 7.5), is a Server-Side Request Forgery (SSRF) flaw that could be exploited to access sensitive data. It was discovered by Orca Security researcher Igor Stepansky and published in an advisory last week.
The shortcoming affects all versions of the toolkit (0.12.0 and prior) with vision language support. Successful exploitation of the vulnerability could permit an attacker to steal cloud credentials, reach internal services that aren't exposed to the internet, port scan internal networks, and create lateral movement opportunities.
Cloud security firm Sysdig reported detecting the first LMDeploy exploitation attempt against its honeypot systems within 12 hours and 31 minutes of the vulnerability being published on GitHub. The exploitation attempt originated from the IP address 103.116.72[.]119.
The actions undertaken by the adversary, detected on Apr 22, 2026, at 03:35 a.m. UTC, unfolded over 10 distinct requests across three phases, with the requests switching between vision language models (VLMs) such as internlm-xcomposer2 and OpenGVLab/InternVL2-8B to likely avoid raising any suspicion.
The attacker targeted AWS Instance Metadata Service (IMDS) and Redis instances on the server. They also tested egress with an out-of-band (OOB) DNS callback to requestrepo[.]com to confirm the SSRF vulnerability can reach arbitrary external hosts, followed by enumerating the API surface.
Furthermore, the adversary port scanned the loopback interface ("127.0.0[.]1") to identify listening services and assess potential weaknesses.
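To illustrate (this is not a reproduction of the actual exploit), the probe classes described above can be roughly distinguished by their destination alone. A defender reviewing a model server's egress logs could bucket fetched URLs with a check like the following sketch; the function name and categories are illustrative, not from Sysdig's tooling:

```python
import ipaddress
from urllib.parse import urlparse

IMDS_IP = "169.254.169.254"  # AWS Instance Metadata Service endpoint

def classify_ssrf_target(url: str) -> str:
    """Roughly bucket a fetched URL into the probe classes described in
    the attack: cloud metadata, internal service/port scan, or external
    out-of-band (OOB) callback."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than literal IP: could be a legitimate fetch
        # or an OOB DNS callback domain; needs DNS-level inspection.
        return "external (possible OOB DNS callback)"
    if str(addr) == IMDS_IP:
        return "cloud metadata (IMDS)"
    if addr.is_loopback or addr.is_private:
        return "internal service / port scan"
    return "external (possible OOB DNS callback)"

print(classify_ssrf_target("http://169.254.169.254/latest/meta-data/"))
print(classify_ssrf_target("http://127.0.0.1:6379/"))  # Redis default port
```

In practice this kind of classification is only a triage aid; a real detection would correlate destinations with the requesting process and request timing, as Sysdig did with its honeypot telemetry.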
LMDeploy's vision-language module was found to be vulnerable due to a lack of validation for internal/private IP addresses. This allowed attackers to access cloud metadata services, internal networks, and sensitive resources. The bug highlights the need for improved security measures in AI-infrastructure tools.
The rapid exploitation of this vulnerability serves as a stark reminder of the dangers of shadow AI attacks. As threat actors continue to monitor new vulnerability disclosures and exploit them before downstream users can apply fixes, it is imperative that organizations prioritize cybersecurity and implement robust defenses against such threats.
In conclusion, the LMDeploy flaw, exploited within 13 hours of disclosure, underscores the importance of maintaining a vigilant security posture in today's rapidly evolving threat landscape. As AI technology advances, so too must our efforts to safeguard against its potential misuse.
Related Information:
https://www.ethicalhackingnews.com/articles/The-Shadow-AI-Threat-LMDeploy-Flaw-Exposed-Within-13-Hours-of-Disclosure-ehn.shtml
https://thehackernews.com/2026/04/lmdeploy-cve-2026-33626-flaw-exploited.html
https://cyberpress.org/new-lmdeploy-vulnerability/
https://nvd.nist.gov/vuln/detail/CVE-2026-33626
https://www.cvedetails.com/cve/CVE-2026-33626/
Published: Fri Apr 24 03:45:12 2026 by llama3.2 3B Q4_K_M