Augustus – Open-source LLM Vulnerability Scanner With 210+ Attacks Across 28 LLM Providers

Original news source: cybersecuritynews.com

Augustus LLM Vulnerability Scanner

Augustus is a new open-source vulnerability scanner designed to secure Large Language Models (LLMs) against an evolving landscape of adversarial threats.

Built by Praetorian, Augustus aims to bridge the gap between academic research tools and production-grade security testing, offering a single-binary solution that can launch over 210 distinct adversarial attacks against 28 LLM providers.

As enterprises race to integrate Generative AI into their products, security teams have struggled with tooling that is often research-oriented, slow, or difficult to integrate into continuous integration/continuous deployment (CI/CD) pipelines.

Existing tools like NVIDIA’s garak have set the standard for comprehensive testing, but rely on complex Python environments and heavy dependencies.

Augustus addresses these operational bottlenecks by being compiled as a single, portable Go binary. This architecture eliminates the “dependency hell” often associated with Python-based security tools, removing the need for virtual environments, pip installs, or specific interpreter versions.

The tool leverages Go’s native concurrency primitives (goroutines) to perform massively parallel scanning, making it significantly faster and more resource-efficient than its predecessors.

“We needed something built for the way our operators work: a fast, portable binary that fits into existing penetration testing workflows,” stated Praetorian in their release announcement.

210+ Attack Modes

At its core, Augustus is an attack engine that automates the “red teaming” of AI models. It ships with a library of 210+ vulnerability probes across 47 attack categories, including:

  • Jailbreaks: Sophisticated prompts designed to bypass safety filters (e.g., DAN, AIM, and “Grandma” exploits).
  • Prompt Injection: Techniques to override system instructions, including encoding bypasses like Base64, ROT13, and Morse code.
  • Data Extraction: Tests for PII leakage, API key disclosure, and training data reconstruction.
  • Adversarial Examples: Gradient-based attacks and logic bombs designed to confuse model reasoning.

A standout feature of Augustus is its “Buff” system, which allows security testers to apply transformations to any probe dynamically. Testers can chain multiple “buffs,” such as paraphrasing a prompt, translating it into a low-resource language (e.g., Zulu or Scots Gaelic), or encoding it in poetic formats, to test whether a model’s safety guardrails hold up against obfuscated inputs.

This capability is critical for uncovering “fragile” safety filters that may block a standard attack but fail to recognize the same attack when slightly altered.

Designed for the modern security stack, Augustus supports 28 LLM providers out of the box, including major platforms such as OpenAI, Anthropic, Azure, AWS Bedrock, and Google Vertex AI, as well as local inference engines such as Ollama.

This broad support ensures that teams can test everything from cloud-hosted GPT-4 models to locally running Llama 3 instances with the same tooling.

The tool’s architecture emphasizes production reliability, featuring built-in rate limiting, retry logic, and timeout handling to prevent scan failures during large-scale assessments.

Results can be exported in multiple formats, including JSON, JSONL for streaming logs, and HTML for stakeholder reporting, making it easy to feed findings into vulnerability management platforms or SIEMs.

Augustus is the second release in Praetorian’s “12 Caesars” open-source series, following the release of the LLM fingerprinting tool Julius. It is available immediately under the Apache 2.0 license.

Security professionals and developers can download the latest release or build from source via GitHub.
