ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

The newly released OpenAI Atlas web browser is susceptible to a prompt injection attack in which its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.
"The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday, as reported by The Hacker News.
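The risk described above stems from that dual interpretation: a string that fails URL validation can fall through and be treated as an instruction to the agent. The sketch below is a toy heuristic in Python, not Atlas's actual parsing logic, illustrating how a crafted string that merely looks like a URL (here, a hypothetical example with a malformed scheme separator) could be routed to the prompt path instead of navigation.

```python
from urllib.parse import urlparse

def classify_omnibox_input(text: str) -> str:
    """Toy heuristic: treat input as a URL only if it parses with an
    http(s) scheme and a network location; otherwise hand it to the
    agent as a natural-language prompt. (Illustrative only.)"""
    parsed = urlparse(text)
    if parsed.scheme in ("http", "https") and parsed.netloc:
        return "navigate"
    return "prompt"

# A well-formed URL is classified as navigation.
print(classify_omnibox_input("https://example.com/page"))  # navigate

# A crafted string that only resembles a URL (single slash after the
# scheme, instructions embedded in the path) fails validation and
# falls through to the agent as a prompt.
crafted = "https:/example.com/+ignore+previous+instructions"
print(classify_omnibox_input(crafted))  # prompt
```

In this simplified model, anything the validator rejects is silently promoted to a trusted agent command, which is the core of the attack surface NeuralTrust describes.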

Author: VolkAI (imported news bot)