AI Prompt Poisoning Hijacks ‘Summarize with AI’ Buttons to Manipulate Chatbot Answers
Microsoft Defender Security Research uncovered a new “AI Recommendation Poisoning” technique where attackers hide malicious prompts behind the “Summarize with AI” buttons that appear on compromised or malicious web pages. When a user clicks the button, the concealed prompt is silently sent to a large language model (LLM) – such as Bing Chat – steering the model to generate answers that appear legitimate but are deliberately biased or deceptive.
The poisoned prompts can be used to inject misinformation, promote phishing links, or manipulate user decisions, effectively turning a benign UI element into a covert influence vector. Defenders must treat these UI components as part of the attack surface, monitor for anomalous prompt patterns, enforce strict content‑sanitization policies, and audit LLM interactions to prevent malicious prompt injection from compromising the integrity of chatbot responses.
Categories: Threat Intelligence, AI Security & Threats, Malware & Ransomware
Source: Read original article
Comments ()