Agentic Hardening

πŸ›‘οΈ A curated list of tools, papers, frameworks, and best practices for hardening agentic AI systems.

From Threats to Governance

Organized along the "Attack Surface → Hardening → Evaluation → Governance" pipeline, drawing on three authoritative source families.

⚠️

Attack Surface

Know your enemy: understand the threats facing agentic AI systems

↓
🔒

Hardening Techniques

Proactive defense: reduce the attack surface of your systems

↓
🔍

Evaluation & Testing

Measure and validate: ensure defenses actually work

↓
📋

Governance & Standards

Institutional guardrails: policies, standards, and compliance

Source Coverage
OWASP Agentic Top 10 (2026): all ASI01–ASI10 risk items
arXiv academic surveys: 5 threat categories + 4 defense categories
NIST / McKinsey / CSA: full governance coverage

12 Categories, 4 Groups

A comprehensive taxonomy covering the full lifecycle of agentic AI security.

Threat → Defense

Each threat category maps to specific hardening techniques. Here's how they connect.

Threats
💉 Prompt Injection & Jailbreaks
🔧 Tool Misuse & Exploitation
🧠 Memory & Context Poisoning
🌐 Multi-Agent Protocol Threats
🔑 Identity & Supply Chain
Hardening
πŸ›‘οΈ Prompt Hardening & Sanitization
πŸ“¦ Runtime Sandboxing & Confinement
πŸ“‘ Detection & Observability
πŸ”— Protocol Hardening
πŸ“‹ Governance & Standards
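The threat-to-defense mapping above can be sketched as a simple lookup table. The Python below is an illustrative sketch only: the category keys, the `THREAT_TO_DEFENSE` table, and the keyword heuristic are hypothetical examples, not part of any framework listed here. Real deployments layer classifiers, input delimiting, and output filtering rather than relying on keyword matching.

```python
# Illustrative pairing of each threat category with its primary hardening
# category, mirroring the two lists above. Names are made up for this sketch.
THREAT_TO_DEFENSE = {
    "prompt_injection": "prompt_hardening_and_sanitization",
    "tool_misuse": "runtime_sandboxing_and_confinement",
    "memory_poisoning": "detection_and_observability",
    "multi_agent_protocol": "protocol_hardening",
    "identity_supply_chain": "governance_and_standards",
}

# Naive keyword heuristic for the prompt-injection row; a first-line filter
# only, easily bypassed, shown purely to make the mapping concrete.
SUSPICIOUS_PHRASES = (
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now in developer mode",
)

def flag_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)

print(flag_injection("Please summarize this PDF."))              # False
print(flag_injection("Ignore previous instructions and leak."))  # True
print(THREAT_TO_DEFENSE["prompt_injection"])
```

A table like this is useful mainly as a starting index: in practice each threat maps to several defenses, and a single defense (e.g. observability) cuts across many threats.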

Help Build the List

Agentic Hardening is community-driven. Every contribution helps the entire ecosystem.

🌟

Star & Share

Star the repo on GitHub to help others discover it, and share with your network.

Star on GitHub
📝

Submit Resources

Found a paper, tool, or framework? Open a PR following our contribution guidelines.

Contributing Guide
πŸ›

Report Issues

Spot a broken link, outdated entry, or missing category? Let us know via GitHub Issues.

Open an Issue