AI Prompt Poisoning Hijacks ‘Summarize with AI’ Buttons to Manipulate Chatbot Answers
Microsoft Defender Security Research uncovered a new “AI Recommendation Poisoning” technique where attackers hide malicious prompts behind the “Summarize with AI” buttons that appear on compromised or malicious web pages. When a user clicks the button, the concealed prompt is silently sent to a large language model (LLM) – such as