Hacker Called ‘Amadon’ Tricks ChatGPT Into Giving Detailed Instructions for Making Homemade Explosives, Says Report

A hacker called 'Amadon' used OpenAI's ChatGPT to get step-by-step instructions on how to make a homemade fertilizer bomb, bypassing the chatbot's safety guidelines.


San Francisco, September 16: OpenAI's ChatGPT has been used for various productive and educational purposes; however, criminals have found ways around the popular AI chatbot's safeguards to use it for illegal activities. A recent report said that a hacker used ChatGPT to learn how to make homemade bombs. The incident, which sounds like a movie plot, occurred because the hacker was able to bypass ChatGPT's safety measures.

According to a report by TechCrunch, a hacker bypassed the chatbot's safety guardrails to obtain step-by-step instructions for making homemade explosives. Usually, when a user asks ChatGPT to help make homemade fertilizer bombs like the one used in the 1995 Oklahoma City terrorist bombing, the chatbot refuses, replying, "I can't assist with that."

ChatGPT is designed to refuse to provide instructions on creating dangerous or illegal items like fertilizer bombs, in line with its safety guidelines and ethical responsibilities, blocking users from accessing information that would likely put others in harm's way. However, a hacker named "Amadon" managed to bypass these safeguards and received detailed instructions on making fertilizer bombs.

Amadon, who obtained the instructions, described his technique as "social engineering" to break all the guardrails around ChatGPT's output. The hacker tricked the chatbot by asking it to "play a game," then used a series of connected prompts to build a science-fiction fantasy world where ChatGPT's safety guidelines would not apply.

Tricking OpenAI's chatbot into escaping its pre-programmed guidelines and restrictions in this way is known as "jailbreaking." Malicious threat actors could use such information to harm others, and users are advised not to attempt to extract illegal or harmful information in this manner.

The report highlighted that ChatGPT went on to explain to Amadon how explosive materials could be used to create mines, traps, and IEDs (improvised explosive devices). The hacker then kept asking for more details on creating "minefields" and "Claymore-style explosives."

" (The above story first appeared on LatestLY on Sep 16, 2024 10:58 AM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com ).
