AI Prompt Poisoning Hijacks ‘Summarize with AI’ Buttons on Webpages
Microsoft Defender Security Research uncovered a new “AI Recommendation Poisoning” attack that leverages hidden prompts attached to “Summarize with AI” buttons on compromised websites. When a user clicks the button, the malicious script injects a specially crafted prompt into the large‑language‑model (LLM) request, steering the chatbot to produce answers that appear legitimate but contain false or biased information favorable to the attacker.
The manipulated responses can be used to spread misinformation, facilitate phishing, or coerce users into disclosing credentials by presenting convincing but deceptive guidance. Defenders must treat these UI elements as attack surfaces: enforce strict content‑security policies, sanitize any third‑party scripts that can modify LLM prompts, and monitor for abnormal LLM output patterns that indicate prompt tampering. Updating web filters, conducting regular site integrity scans, and educating users about the risk of unchecked “AI‑summarize” features are essential steps to mitigate this emerging threat.
Source: Read original article
Member discussion