How a hacker tricked ChatGPT into giving a step-by-step guide for making homemade bombs
A hacker who goes by Amadon says he bypassed ChatGPT’s safety guardrails by engaging it in a science-fiction game scenario, a jailbreak that led the chatbot to produce bomb-making instructions. The technique, a form of social engineering, highlights the risks of AI misuse. OpenAI acknowledged the report but indicated that model safety issues of this kind are not a good fit for the company’s bug bounty program, as they are not discrete bugs that can be directly fixed.