Grok AI Still Spits Out Sexual Content, Moderation Fails Again

Malwarebytes has confirmed that the Grok AI image‑generation model continues to produce sexualized images despite earlier assurances that the issue had been fixed. Multiple users reported that the model repeatedly outputs explicit material, indicating that the built‑in moderation filters are either ineffective or easily bypassed.

For defenders, this flaw presents several risks. The presence of illicit content can expose organizations to legal liability and brand damage, and threat actors can exploit the model to craft convincing phishing or social-engineering assets that incorporate explicit imagery. Monitoring AI tool usage, enforcing strict content filters independent of the vendor's own moderation, and staying current on vendor patches are essential steps to mitigate these emerging threats.
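As an illustration of the "enforce strict content filters" recommendation, here is a minimal sketch of a server-side moderation gate that independently re-checks model output before it reaches users. The `is_explicit` function is a hypothetical placeholder, not a real API; in production it would call a dedicated moderation classifier rather than a keyword blocklist:

```python
def is_explicit(text: str) -> bool:
    # Hypothetical placeholder for a real content classifier;
    # in practice, call a dedicated moderation model or service here.
    blocklist = {"explicit", "nsfw"}
    return any(term in text.lower() for term in blocklist)

def moderate_output(model_output: str) -> str:
    # Do not rely on the generator's built-in filters alone:
    # apply an independent, organization-controlled check on every response.
    if is_explicit(model_output):
        return "[blocked: content policy violation]"
    return model_output
```

The key design point is defense in depth: because vendor-side moderation has proven unreliable here, the organization's own gate sits between the model and the end user.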

Categories: AI Security & Threats, Malware & Ransomware, Compliance & Regulation
