AI-generated passwords are woefully insecure, according to a recent study by AI security company Irregular. The researchers found that even seemingly complex, secure passwords produced by prominent LLMs such as OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude exhibited common patterns that hackers could crack with ease. The findings highlight the need for users to review any password generated with these models and to take a more active role in securing their digital identities.
In an era where technology is advancing at a breakneck pace, it is becoming increasingly evident that the next major vulnerability in our online security lies not in the networks we use to connect to the internet, nor in the devices we rely on to access information, but in the tools we now lean on to do the work for us - including generating complex passwords. Researchers from AI security company Irregular recently documented a disturbing truth about large language models (LLMs) used to generate strong passwords: by and large, they are woefully bad at it.
Irregular conducted an extensive study of three prominent LLM families: OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude. The objective was to assess each model's ability to generate secure, complex passwords mixing special characters, numbers, and letters of both cases. The results exposed a disturbing reality: even seemingly robust passwords generated by these AI tools exhibited common patterns that would make them easy prey for hackers.
Irregular ran Anthropic's Claude Opus 4.6 model, prompting it fifty times to produce a fresh password on each run. The fifty outputs contained only thirty unique strings; the other twenty were duplicates, eighteen of them the very same string repeated. A considerable portion also started and ended with similar characters. This lack of randomness not only compromised the security of the generated passwords but also exposed a fundamental weakness of LLMs themselves: they are built to produce predictable, plausible text rather than anything approaching true randomness.
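Irregular has not published its analysis scripts, but the duplicate count is straightforward to reproduce on any batch of collected outputs. Here is a minimal sketch, assuming fifty generated passwords saved one per line in a hypothetical llm_passwords.txt file:

```python
from collections import Counter
from pathlib import Path

# Hypothetical input: one LLM-generated password per line, collected
# from fifty identical "generate a strong password" prompts.
passwords = Path("llm_passwords.txt").read_text().splitlines()

counts = Counter(passwords)
unique = len(counts)
print(f"{len(passwords)} outputs, {unique} unique, "
      f"{len(passwords) - unique} duplicated")

# Shared openings and endings hint at structural bias in the model's sampling.
print("common 3-char prefixes:", Counter(p[:3] for p in passwords).most_common(3))
print("common 3-char suffixes:", Counter(p[-3:] for p in passwords).most_common(3))
```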
The researchers extended the test to the other providers' flagship models, OpenAI's GPT-5.2 and Google's Gemini 3 Pro. The results were consistent with the first model examined: all three tools produced passwords built around common patterns. Gemini 3 Pro even let users choose between highly complex alphanumeric combinations and a 'randomized' option, yet the randomized outputs quickly proved even more predictable than the defaults.
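One simple way to surface such patterning is to check whether any single character dominates a given position across a batch of outputs; a uniform generator should show no such spikes. A rough sketch (the 10% threshold is an arbitrary choice for illustration, not a figure from the study):

```python
from collections import Counter

def positional_bias(passwords: list[str], threshold: float = 0.10) -> None:
    """Report positions where one character dominates across outputs.

    A uniform generator over a ~70-symbol alphabet should give each
    character only a 1-2% share at every position; large spikes
    indicate the kind of patterning Irregular describes.
    """
    shortest = min(len(p) for p in passwords)
    for i in range(shortest):
        char, n = Counter(p[i] for p in passwords).most_common(1)[0]
        share = n / len(passwords)
        if share >= threshold:
            print(f"position {i}: {char!r} appears in {share:.0%} of outputs")
```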
The team used statistical methods to estimate the entropy (a measure of randomness) of the generated passwords. By their calculations, LLM-generated 16-character passwords carried far less entropy than truly random ones: an estimated 27 bits, versus roughly 98 bits for a password drawn uniformly from a comparable character set. At that level, hackers could crack an AI-generated password within a few hours even on decades-old hardware.
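The arithmetic is straightforward. A 16-character password drawn uniformly from a roughly 70-symbol alphabet carries about 16 × log2(70) ≈ 98 bits, matching the study's benchmark, while 27 bits leaves only about 134 million possibilities. A back-of-the-envelope estimate (the alphabet size and guess rates are illustrative assumptions, not figures from the study):

```python
import math

ALPHABET_SIZE = 70   # assumption: letters, digits, and common symbols
LENGTH = 16

full_entropy = LENGTH * math.log2(ALPHABET_SIZE)  # ~98 bits, the study's benchmark
llm_entropy = 27                                  # Irregular's estimate

def expected_crack_seconds(bits: float, guesses_per_second: float) -> float:
    """Expected time to find a password: half the keyspace on average."""
    return 2 ** (bits - 1) / guesses_per_second

for rate in (1e4, 1e10):  # slow guessing on old hardware vs. offline GPU hashing
    print(f"at {rate:.0e} guesses/s: "
          f"LLM ~{expected_crack_seconds(llm_entropy, rate):,.0f} s, "
          f"random ~{expected_crack_seconds(full_entropy, rate):.1e} s")
```

Even at a sluggish ten thousand guesses per second, 27 bits falls in under two hours, whereas the 98-bit password would take longer than the age of the universe at GPU speeds.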
The implications extend far beyond password security. That LLMs fall so readily into predictable patterns is a reminder for developers to scrutinize any code these tools generate, and it underscores that while technology can help us, it remains our responsibility to ensure its use does not undermine our own safety and privacy.
Finally, in light of Dario Amodei's earlier prediction that AI will soon be writing most code, the study carries a critical warning about the security of these tools. Irregular's conclusion is that no amount of prompting or temperature adjustment changes the fundamental nature of LLM output: these models are designed to generate plausible text, not secure randomness.
The need for users and developers alike to be vigilant about AI-generated passwords has never been more apparent. We can no longer afford to rely on these models for something as crucial as securing our digital identity. As we move forward into a future where technology is increasingly intertwined with human capabilities, understanding the vulnerabilities within these tools is paramount.
As such, Irregular's study serves as a vital reminder - AI security is not merely an afterthought but rather a cornerstone of any well-designed system that aims to safeguard our online presence. For now, the burden of generating secure passwords falls squarely on our shoulders as individuals and developers, necessitating our full attention and vigilance in the face of this new challenge.
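In practice, that means drawing passwords from the operating system's cryptographically secure random number generator rather than from a chat prompt. A minimal sketch in Python using the standard library's secrets module (the 94-symbol alphabet is a common convention, not something prescribed by the study):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character uniformly from the OS CSPRNG, not an LLM."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# 94 symbols over 16 characters yields ~105 bits of entropy,
# comfortably above the ~98-bit benchmark cited in the study.
print(generate_password())
```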
Ultimately, it is time for us all to rethink how we approach password generation and accept that true randomness is simply not something AI models can reliably provide. Only through a concerted effort across the tech industry will we be able to guard against the threat posed by these seemingly robust yet fundamentally flawed tools.