Cloudflare AI Security for Apps Slashes Token Costs by 98% for Defenders

Cloudflare has moved its AI Security for Apps service into general availability, offering a large‑language‑model (LLM) engine that inspects inbound and outbound application traffic. The service uses contextual LLM analysis to detect malicious payloads, API abuse, and anomalous request patterns, while cutting the number of AI tokens required per inspection by roughly 98%.

For security teams, the dramatic token reduction translates into lower operational costs and the ability to scale LLM‑driven protection across more services without exhausting budgets. The built‑in LLM intelligence also provides richer, real‑time insights into attack vectors that traditional rule‑based WAFs may miss, improving detection accuracy and response speed.
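To make the cost claim concrete, here is a back‑of‑the‑envelope sketch of what a ~98% token reduction means for inspection spend. The request volume, tokens‑per‑inspection, and per‑token price below are hypothetical illustration values, not Cloudflare figures.

```python
# Illustrative only: hypothetical volume, token counts, and pricing.
def monthly_inspection_cost(requests, tokens_per_request, price_per_million_tokens):
    """Dollar cost of LLM-inspecting `requests` requests in a month."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1_000_000 * price_per_million_tokens

REQUESTS = 100_000_000   # hypothetical monthly request volume
BASELINE_TOKENS = 2_000  # hypothetical tokens per full-payload inspection
PRICE = 0.50             # hypothetical dollars per million tokens

baseline = monthly_inspection_cost(REQUESTS, BASELINE_TOKENS, PRICE)
# A 98% token reduction means each inspection uses only 2% of the tokens,
# so the cost scales down by the same factor.
reduced = monthly_inspection_cost(REQUESTS, BASELINE_TOKENS * 0.02, PRICE)

print(f"baseline: ${baseline:,.0f}/mo, reduced: ${reduced:,.0f}/mo")
# → baseline: $100,000/mo, reduced: $2,000/mo
```

Under these assumed numbers, the same inspection coverage drops from six figures to four per month, which is why the reduction matters more than raw detection quality alone for teams deciding how broadly to deploy LLM‑driven inspection.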

The net effect for defenders: deeper, AI‑enhanced traffic inspection at a fraction of the usual expense, making it practical to extend advanced threat detection across more of the attack surface. That opens the door to a more proactive defense posture while keeping cloud spend under control.

Categories: AI Security & Threats, Cloud & SaaS Security
