AI ‘Hallucinations’ Can Become an Enterprise Security Nightmare

Researchers at an Israeli security firm on Tuesday revealed how hackers could turn a generative AI’s “hallucinations” into a nightmare for an organization’s software supply chain.

In a blog post on the Vulcan Cyber website, researchers Bar Lanyado, Ortal Keizman, and Yair Divinsky illustrated how one could exploit false information generated by ChatGPT about open-source software packages to deliver malicious code into a development environment.

They explained that they’ve seen ChatGPT generate URLs, references, and even code libraries and functions that do not actually exist.

If ChatGPT is fabricating code libraries or packages, attackers could use these hallucinations to spread malicious packages without using suspicious and already detectable techniques like typosquatting or masquerading, they noted.

If an attacker publishes a real package under the name of one of the "fake" packages recommended by ChatGPT, the researchers continued, they might be able to get a victim to download and use it.

The likelihood of that scenario occurring is increasing, they maintained, as more and more developers migrate from traditional online search domains for code solutions, like Stack Overflow, to AI solutions, like ChatGPT.

Already Generating Malicious Packages

“The authors are predicting that as generative AI becomes more popular, it will start receiving developer questions that once would go to Stack Overflow,” explained Daniel Kennedy, research director for information security and networking at 451 Research, which is part of S&P Global Market Intelligence, a global market research company.

“The answers to those questions generated by the AI may not be correct or may refer to packages that no longer or never existed,” he told TechNewsWorld. “A bad actor observing that can create a code package in that name to include malicious code and have it continually recommended to developers by the generative AI tool.”

“The researchers at Vulcan took this a step further by prioritizing the most frequently asked questions on Stack Overflow as the ones they would put to the AI, and see where packages that don’t exist were recommended,” he added.

According to the researchers, they queried Stack Overflow to get the most common questions asked about more than 40 subjects and used the first 100 questions for each subject.

Then, they asked ChatGPT, through its API, all the questions they had collected. They used the API to replicate an attacker’s approach to getting as many non-existent package recommendations as possible in the shortest time.

In each answer, they looked for a pattern in the package installation command and extracted the recommended package. They then checked to see if the recommended package existed. If it didn’t, they tried to publish it themselves.
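The general shape of that pipeline is straightforward to picture. The sketch below is a minimal illustration of the idea for the Python/PyPI case, not the Vulcan team's actual code: the model choice, prompt, client library, and regular expression are assumptions made for the example, and PyPI's public JSON API is used to test whether a suggested package has ever been published.

```python
# Illustrative sketch: ask a chat model a developer question, pull out any
# "pip install <package>" suggestions, and check whether each package exists on PyPI.
import re

import requests
from openai import OpenAI  # assumes the OpenAI Python client and an OPENAI_API_KEY in the environment

client = OpenAI()


def suggested_packages(question: str) -> set[str]:
    """Ask the model a question and extract package names from pip install commands in its answer."""
    answer = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption for illustration
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    return set(re.findall(r"pip install\s+([A-Za-z0-9_.\-]+)", answer))


def exists_on_pypi(name: str) -> bool:
    """PyPI's JSON API returns 404 for packages that have never been published."""
    return requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10).status_code == 200


for pkg in suggested_packages("How do I upload a file to S3 in Python?"):
    if not exists_on_pypi(pkg):
        print(f"{pkg}: recommended by the model but not on PyPI -- a squatting candidate")
```

A defender could run the same loop over popular developer questions to find hallucinated names worth registering defensively before an attacker does.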

Kludging Software

Malicious packages generated with code from ChatGPT have already been spotted in the PyPI and npm package registries, noted Henrik Plate, a security researcher at Endor Labs, a dependency management company in Palo Alto, Calif.

“Large language models can also support attackers in the creation of malware variants that implement the same logic but have different form and structure, for example, by distributing malicious code across different functions, changing identifiers, generating fake comments and dead code or comparable techniques,” he told TechNewsWorld.

The problem with software today is that it is not independently written, observed Ira Winkler, chief information security officer at CYE, a global cybersecurity optimization platform maker.

“It is basically kludged together from lots of software that already exists,” he told TechNewsWorld. “This is very efficient, so a developer does not have to write a common function from scratch.”

However, that can result in developers importing code without properly vetting it.

“Users of ChatGPT are receiving instructions to install open-source software packages that can install a malicious package while thinking it is legitimate,” said Jossef Harush, head of software supply chain security at Checkmarx, an application security company in Tel Aviv, Israel.

“Generally speaking,” he told TechNewsWorld, “the culture of copy-paste-execute is dangerous. Doing so blindly from sources like ChatGPT may lead to supply chain attacks, as the Vulcan research team demonstrated.”

Know Your Code Sources

Melissa Bischoping, director of endpoint security research at Tanium, a provider of converged endpoint management in Kirkland, Wash., also cautioned about loose use of third-party code.

“You should never download and execute code you don’t understand and haven’t tested by just grabbing it from a random source — such as open source GitHub repos or now ChatGPT recommendations,” she told TechNewsWorld.

“Any code you intend to run should be evaluated for security, and you should have private copies of it,” she advised. “Do not import directly from public repositories, such as those used in the Vulcan attack.”
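One way to follow that advice in a Python environment is to install only from an internal mirror of reviewed packages and to pin exact, hash-verified versions. The commands below are a generic illustration with a placeholder index URL, not a workflow Bischoping prescribed.

```
# Install only from an internal index of vetted packages (placeholder URL),
# and refuse anything whose pinned hash has not been reviewed.
pip install --index-url https://pypi.internal.example.com/simple \
    --require-hashes -r requirements.txt
```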

She added that attacking a supply chain through shared or imported third-party libraries isn’t novel.

“Use of this strategy will continue,” she warned, “and the best defense is to employ secure coding practices and thoroughly test and review code — especially code developed by a third party — intended for use in production environments.”

“Don’t blindly trust every library or package you find on the internet or in a chat with an AI,” she cautioned.

Know the provenance of your code, added Dan Lorenc, CEO and co-founder of Chainguard, a maker of software supply chain security solutions in Seattle.

“Developer authenticity, verified through signed commits and packages, and getting open source artifacts from a source or vendor you can trust are the only real long-term prevention mechanisms on these Sybil-style attacks on open source,” he told TechNewsWorld.
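Git's built-in signature checks are one small, concrete piece of that kind of provenance verification. The commands below are a generic illustration, not Chainguard's own tooling, and assume contributors sign their commits.

```
# Check that the latest commit carries a valid signature
git verify-commit HEAD

# Show signature status inline while reviewing recent history
git log --show-signature -3
```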

Early Innings

Authenticating code, though, isn’t always easy, noted Bud Broomhead, CEO of Viakoo, a developer of cyber and physical security software solutions in Mountain View, Calif.

“In many types of digital assets — and in IoT/OT devices in particular — firmware still lacks digital signing or other forms of establishing trust, which makes exploits possible,” he told TechNewsWorld.

“We are in the early innings of generative AI being used for both cyber offense and defense. Credit to Vulcan and other organizations that are detecting and alerting on new threats in time for the language learning models to be tuned towards preventing this form of exploit,” he added.

“Remember,” he continued, “it was only a few months ago that I could ask ChatGPT to create a new piece of malware, and it would. Now it takes very specific and directed guidance for it to create it inadvertently. And hopefully, even that approach will soon be prevented by the AI engines.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
