Study Uncovers How ChatGPT Search Can Mislead Users

Study Uncovers How ChatGPT Search Can Mislead Users

Recent research has uncovered vulnerabilities in ChatGPT’s search tool, revealing that it can be manipulated to mislead users. Security experts tested scenarios where the AI responded to queries about products listed on fake websites. By embedding hidden text within the pages, researchers influenced ChatGPT to generate falsely positive reviews—even when genuine content included negative feedback .

Jacob Larsen, a cybersecurity expert at CyberCX, warned that malicious actors could exploit this flaw, potentially building deceptive websites aimed at manipulating AI-generated summaries .

The study also highlighted broader risks with large language models (LLMs) in search tools. For example, adversaries injected malicious code into programming queries, causing financial losses, such as a reported case involving $2,500 theft via manipulated cryptocurrency code .

Karsten Nohl, a cybersecurity scientist, emphasized that AI tools like ChatGPT should function as “co-pilots” rather than definitive sources of truth, given their tendency to trust content without critical analysis .

OpenAI acknowledged these challenges by including disclaimers about potential inaccuracies. Researchers stress the need for rigorous testing before expanding such tools to broader audiences. Comparisons were also drawn to “SEO poisoning,” where hackers manipulate search rankings to spread malware, underscoring the ongoing arms race between AI security and malicious actors .