AI Governance

Guardrails, content safety, policy, and AI governance controls.

9 tools in the current dataset.

Cloud vendor tools

Source-backed side-by-side comparison for the three cloud vendor offerings in this category.

Comparison table for Azure AI Content Safety, AWS Bedrock Guardrails, Google Model Armor
| Attribute | Azure AI Content Safety | AWS Bedrock Guardrails | Google Model Armor |
| --- | --- | --- | --- |
| Description | Content moderation with configurable severity scoring | Configurable safeguards for filtering, PII, hallucinations | Model-agnostic safety screening for prompts and responses |
| Risks covered | Toxicity, prompt injection, hallucination, copyright | Content, prompt attacks, PII, hallucination, malicious code | Prompt injection, PII, toxicity, malicious URLs |
| Pricing | Pay-per-text-unit; free tier | $0.15/1K text units (85% price cut) | Pay-as-you-go |
| Unique capability | On-premises containers + on-device deployment | Automated Reasoning (formal logic, provable) | Apigee + GKE + Security Command Center |
| Integration | REST API, Python/C#/Java SDKs | AWS SDK, Bedrock API, AgentCore | REST API, Vertex AI, Apigee, GKE |
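All three services implement the same basic pattern: screen the prompt before it reaches the model, screen the response before it reaches the user, and block on a policy hit. A minimal, vendor-neutral sketch of that flow (the risk categories, patterns, and severity values here are illustrative assumptions, not any vendor's actual API):

```python
from dataclasses import dataclass

@dataclass
class ScreenResult:
    blocked: bool
    reasons: list

def screen(text: str, policy: dict, threshold: int = 4) -> ScreenResult:
    """Score text against each risk category; block if any score >= threshold."""
    reasons = []
    for category, patterns in policy.items():
        # Severity of the worst matching pattern in this category (0 = no match).
        severity = max((sev for pat, sev in patterns if pat in text.lower()), default=0)
        if severity >= threshold:
            reasons.append(category)
    return ScreenResult(blocked=bool(reasons), reasons=reasons)

# Toy policy: substring patterns with hand-assigned severities (assumption).
POLICY = {
    "prompt_injection": [("ignore previous instructions", 6)],
    "toxicity": [("you are worthless", 4)],
}

def guarded_call(prompt: str, model) -> str:
    """Input rail -> model call -> output rail, the shape all three vendors share."""
    pre = screen(prompt, POLICY)
    if pre.blocked:
        return f"Blocked input: {', '.join(pre.reasons)}"
    response = model(prompt)
    post = screen(response, POLICY)
    if post.blocked:
        return f"Blocked output: {', '.join(post.reasons)}"
    return response
```

In the hosted services the `screen` step is an API call (e.g. Bedrock's ApplyGuardrail, Azure's text analysis endpoint) rather than local pattern matching, but the control flow around the model is the same.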

AWS Bedrock Guardrails

AWS

Vendor

Configurable safety layer for filtering, PII handling, hallucination reduction, and policy controls.

  • Automated Reasoning
  • 85% price cut noted in guide
AWS
Proprietary
No version listed · Docs

Azure AI Content Safety

Microsoft

Vendor

Content moderation and safety service with configurable severity scoring and on-prem options.

  • On-premises containers
  • Configurable severity scoring
Azure
Proprietary
No version listed · Docs
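Azure's "configurable severity scoring" means the service returns a per-category severity (by default on a 0/2/4/6 scale across categories such as Hate, SelfHarm, Sexual, and Violence) and the caller picks the cutoff. A hedged sketch of a decision layer over such a response (the response dict is a simplified stand-in, not the actual SDK model):

```python
# Simplified stand-in for an analyze-text response: category -> severity.
# Azure's default scale returns 0, 2, 4, or 6 per category.
SAMPLE_RESPONSE = {"Hate": 0, "SelfHarm": 0, "Sexual": 2, "Violence": 4}

def decide(categories: dict, thresholds: dict, default: int = 4) -> list:
    """Return categories whose severity meets or exceeds the configured cutoff."""
    return [c for c, sev in categories.items() if sev >= thresholds.get(c, default)]

# Example: a stricter cutoff for Violence than the default used elsewhere.
flagged = decide(SAMPLE_RESPONSE, {"Violence": 2})
```

The same pattern works whether the scores come from the REST API or one of the on-premises containers the card mentions.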

Google Model Armor

Google

Vendor

Model-agnostic prompt and response safety screening layer across Google enterprise infrastructure.

  • Model-agnostic screening
  • Google security integration
GCP
Proprietary
No version listed · Docs

Open source and third-party tools

6 non-vendor tools in this category.


Arthur GenAI Engine

Arthur

Commercial

Agent governance and discovery platform following Arthur AI's January 2026 GenAI Engine rename.

  • Enterprise governance focus
  • Commercial support
Proprietary
No version listed · Docs

Guardrails AI

Guardrails AI

Open Source

Validation-focused guardrails framework with a large validator ecosystem and freemium model.

  • MIT licensed
  • 50+ validators
6,675 stars · MIT
0.10.0 · Docs

Lakera Guard

Check Point

Commercial

Commercial LLM security layer being folded into Check Point's broader cloud security offerings following its acquisition.

  • Commercial support
  • Security platform alignment
Proprietary
No version listed · Docs

LLM Guard

Palo Alto Networks

Open Source

Open source safety filtering toolkit whose development has slowed following acquisition activity.

  • MIT licensed
  • Free OSS availability
2,823 stars · MIT
Maintenance mode — Development slowed significantly post acquisition. Last meaningful commit noted as December 2025 in the guide.
0.3.16 · Docs

NeMo Guardrails

NVIDIA

Open Source

Open source guardrails framework for LLM and agent safety with a dedicated Colang DSL.

  • Apache 2.0 licensed
  • Active project status
5,977 stars · Apache 2.0
0.21.0 · Docs
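The "dedicated Colang DSL" in the NeMo Guardrails card is the project's language for declaring conversational rails as user/bot message forms plus flows connecting them. A minimal illustrative rail in Colang 1.0 syntax (the canonical form and flow names are made up for this sketch):

```colang
define user ask about competitors
  "what do you think of vendor X"
  "compare yourself to other products"

define bot decline competitor talk
  "I can't discuss other vendors' products."

define flow competitor guardrail
  user ask about competitors
  bot decline competitor talk
```

The framework matches incoming messages against the example utterances to infer the canonical form, then executes the matching flow instead of passing the turn straight to the model.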

Rebuff

Palo Alto Networks

Open Source

Prompt injection defense project archived in May 2025 and not recommended for new projects.

  • Apache 2.0 licensed
  • Simple OSS availability
1,459 stars · Apache 2.0
Archived — May 16, 2025. Not recommended for new projects.
0.1.1 · Docs

Important notes

Warning

Arthur GenAI Engine: Rebranded from Arthur AI to Arthur GenAI Engine and pivoted toward agent discovery and governance.

Warning

LLM Guard: Development slowed significantly post acquisition. Last meaningful commit noted as December 2025 in the guide.

Warning

NeMo Guardrails: Repository moved from NVIDIA/NeMo-Guardrails to NVIDIA-NeMo/Guardrails.

Warning

Rebuff: Archived May 16, 2025. Not recommended for new projects.

Recent updates

2026-03-12
NeMo Guardrails

NeMo Guardrails' March 12 release introduced IORails for parallel input/output safety execution, plus OpenAI-compatible server support and standalone async rail validation.

Source
2025-09-16
Lakera Guard

Check Point announced an agreement to acquire Lakera, positioning the startup's AI-native protection stack inside a broader end-to-end enterprise AI security portfolio.

Source