ChatGPT security flaw could open the gate for devastating cyberattack, expert warns


  • A ChatGPT API endpoint accepts an unlimited number of URLs per request, even duplicates, expert warns
  • If the service tries to fetch them all, it generates a huge volume of HTTP requests
  • Researchers urge OpenAI to put safeguards in place

Experts have warned there is a way to make OpenAI’s ChatGPT service engage in Distributed Denial of Service (DDoS) attacks on threat actors’ behalf.

A report from cybersecurity researcher Benjamin Flesch notes that the problem lies in how ChatGPT’s API handles HTTP POST requests to a specific endpoint. That endpoint lets the user supply a list of links through the “urls” parameter – with no limit on how many.

So, in theory, a threat actor could include thousands of hyperlinks in a single request – all pointing to the same server or address. OpenAI’s servers would then generate a huge volume of HTTP requests to the victim’s website, resulting in a denial of service.
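The amplification is easy to picture: one small POST body can reference the same target thousands of times. A minimal sketch in Python – the “urls” parameter name comes from Flesch’s report, while the target address and the request size are illustrative placeholders, not details from the report:

```python
import json

# One small POST body can name the same target thousands of times.
# "urls" is the parameter Flesch describes; the address is a placeholder.
target = "https://victim.example/"
payload = {"urls": [target] * 5000}  # 5,000 duplicate links in one request

body = json.dumps(payload)
print(len(payload["urls"]))  # fetches the API would attempt: 5000
print(len(body) < 200_000)   # yet the request body itself stays small: True
```

The asymmetry is the whole attack: a few hundred kilobytes uploaded by the attacker turn into thousands of outbound requests from OpenAI’s infrastructure.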

Abusing AI

The solution, according to Flesch, is relatively simple: OpenAI needs to enforce a strict limit on the number of URLs a person can submit, deduplicate the URLs within each request, and add rate-limiting measures to prevent abuse.
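Server-side, the first two safeguards amount to a few lines of validation. A hypothetical sketch – the cap value and function name are illustrative, not OpenAI’s actual code:

```python
MAX_URLS = 20  # illustrative cap, not an actual OpenAI limit


def validate_urls(urls):
    """Apply two of the safeguards Flesch proposes: deduplicate, then cap.

    Returns the cleaned list, or raises ValueError for an abusive request.
    """
    deduped = list(dict.fromkeys(urls))  # drop duplicates, keep order
    if len(deduped) > MAX_URLS:
        raise ValueError(f"too many URLs: {len(deduped)} > {MAX_URLS}")
    return deduped


# A request repeating one victim URL thousands of times collapses to one fetch.
cleaned = validate_urls(["https://victim.example/"] * 5000)
print(cleaned)  # ['https://victim.example/']
```

The third safeguard, rate limiting, would sit in front of this check – for example, a per-client token bucket keyed on API credentials – so that even well-formed requests cannot be fired fast enough to flood a target.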

This is not the first time people have found ways to abuse generative AI (GenAI) tools, and it most likely won’t be the last.

So far, though, miscreants have focused only on abusing the tools themselves, not the underlying infrastructure. Security researchers have seen ChatGPT and similar tools tricked into writing malware code, generating convincing phishing emails, or explaining how to build an explosive device.

OpenAI, as well as the developers of other tools, have been working hard to add defense mechanisms, safeguards, and blocks to prevent the misuse of their GenAI solutions. By and large, they have succeeded, since the tools no longer respond favorably to certain requests. However, this has spawned an entirely new sport called “GenAI jailbreaking”, in which hackers compete to bypass the ethical, safety, and usage restrictions imposed on generative AI systems.

Via SiliconANGLE
