AI Readiness Scan
The AI Readiness Scan detector evaluates your application's posture for safe and responsible AI integration. As AI-powered features become standard in software products, new categories of risk emerge — from data quality and model governance to prompt injection and regulatory compliance.
The AI Readiness Scan helps you understand these risks before they become incidents.
What it checks
AI Security
| Check | Description |
|---|---|
| Prompt injection exposure | Detects endpoints that may be vulnerable to prompt injection attacks |
| Model output validation | Checks whether AI outputs are validated before being used in critical paths |
| Sensitive data in prompts | Detects patterns where PII or secrets may be passed to AI models |
| API key exposure | Detects exposed AI provider keys (OpenAI, Anthropic, Cohere, etc.) |
AI Governance
| Check | Description |
|---|---|
| Human-in-the-loop controls | Evaluates whether high-stakes AI decisions have human review mechanisms |
| Model versioning | Checks whether model versions are pinned and tracked |
| Audit logging | Evaluates whether AI interactions are logged for audit and debugging |
| Fallback mechanisms | Checks whether AI failures degrade gracefully |
Data Quality & Privacy
| Check | Description |
|---|---|
| Training data documentation | Checks for documentation of training data sources and lineage |
| PII handling | Evaluates whether PII is properly handled in AI pipelines |
| Data retention policies | Reviews data lifecycle practices in AI contexts |
| Consent signals | Checks for user consent mechanisms for AI-based personalization |
Regulatory Readiness
| Check | Description |
|---|---|
| EU AI Act alignment | Identifies high-risk AI use cases per EU AI Act categories |
| GDPR / data protection | Reviews AI data processing against GDPR Article 22 (automated decisions) |
| Transparency disclosures | Checks for user-facing AI disclosures and explanations |
| Model card documentation | Checks for model cards on internally deployed models |
Scan targets
The AI Readiness Scan works on both URL and repository targets:
- URL: Probes running application for AI-related security issues
- Repository: Analyzes code for governance, data handling, and security practices
Configuration
{
"detectors": ["ai_readiness"],
"detector_options": {
"ai_readiness": {
"check_security": true,
"check_governance": true,
"check_data_quality": true,
"check_regulatory": true,
"regulatory_frameworks": ["eu_ai_act", "gdpr"],
"ai_providers": ["openai", "anthropic", "google"]
}
}
}
| Option | Type | Default | Description |
|---|---|---|---|
check_security | boolean | true | Run AI security checks |
check_governance | boolean | true | Run AI governance checks |
check_data_quality | boolean | true | Run data quality and privacy checks |
check_regulatory | boolean | true | Run regulatory readiness checks |
regulatory_frameworks | string[] | All | Frameworks to check against |
ai_providers | string[] | Auto-detect | AI providers in use (for key detection) |
Findings
Example finding
{
"id": "finding_ai01",
"detector": "ai_readiness",
"severity": "high",
"rule_id": "ai-sec-prompt-injection",
"title": "Potential prompt injection vulnerability",
"description": "Endpoint POST /api/chat passes raw user input directly into an AI model prompt without sanitization or guardrails.",
"location": {
"type": "code",
"file": "src/api/chat.ts",
"line": 47
},
"remediation": "Sanitize user input before including it in prompts. Implement input validation and consider using a prompt injection detection library. Never trust user-controlled content as instructions to your AI model.",
"references": [
"https://owasp.org/www-project-top-10-for-large-language-model-applications/",
"https://github.com/OWASP/www-project-llm-ai-security"
]
}
Regulatory finding example
{
"id": "finding_ai02",
"detector": "ai_readiness",
"severity": "medium",
"rule_id": "eu-ai-act-transparency",
"title": "Missing AI transparency disclosure",
"description": "Application uses AI-generated content in user-facing output but no disclosure to users was detected.",
"location": {
"type": "url",
"url": "https://myapp.example.com/recommendations"
},
"remediation": "Add a clear disclosure to users that recommendations are AI-generated, per EU AI Act Article 50 transparency requirements.",
"references": [
"https://artificialintelligenceact.eu/article/50/"
]
}
Risk context
AI risk is an emerging field. The AI Readiness Scan provides:
- Current best practice checks based on OWASP LLM Top 10, NIST AI RMF, and EU AI Act
- Informational findings for areas that require human judgment
- Roadmap checks that will become stricter as regulatory guidance matures
info
The AI Readiness Scan is particularly valuable if you are:
- Integrating LLMs (GPT-4, Claude, Gemini) into your product
- Building AI agents or autonomous workflows
- Processing user data through AI models
- Subject to EU AI Act requirements
OWASP LLM Top 10 coverage
| OWASP LLM | Check |
|---|---|
| LLM01 Prompt Injection | ✅ |
| LLM02 Insecure Output Handling | ✅ |
| LLM03 Training Data Poisoning | 🔜 |
| LLM04 Model Denial of Service | ✅ |
| LLM05 Supply Chain Vulnerabilities | ✅ |
| LLM06 Sensitive Information Disclosure | ✅ |
| LLM07 Insecure Plugin Design | ✅ |
| LLM08 Excessive Agency | ✅ |
| LLM09 Overreliance | ✅ |
| LLM10 Model Theft | 🔜 |