AI Readiness Scan

The AI Readiness Scan detector evaluates your application's posture for safe and responsible AI integration. As AI-powered features become standard in software products, new categories of risk emerge — from data quality and model governance to prompt injection and regulatory compliance.

The AI Readiness Scan helps you understand these risks before they become incidents.

What it checks

AI Security

| Check | Description |
| --- | --- |
| Prompt injection exposure | Detects endpoints that may be vulnerable to prompt injection attacks |
| Model output validation | Checks whether AI outputs are validated before being used in critical paths |
| Sensitive data in prompts | Detects patterns where PII or secrets may be passed to AI models |
| API key exposure | Detects exposed AI provider keys (OpenAI, Anthropic, Cohere, etc.) |
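
As an illustration of the API key exposure check, the sketch below shows the kind of pattern matching such a detector might perform. The two regexes reflect the publicly documented `sk-` and `sk-ant-` key prefixes; everything else (function name, pattern list) is a simplified assumption, not the scanner's actual ruleset.

```typescript
// Illustrative only: simplified patterns resembling what an API key
// exposure check might scan for. Real detectors use broader rulesets
// and entropy analysis in addition to prefixes.
const KEY_PATTERNS: Record<string, RegExp> = {
  // Anthropic keys start with "sk-ant-"; list it before the generic
  // OpenAI "sk-" pattern, which would otherwise also match them.
  anthropic: /\bsk-ant-[A-Za-z0-9_-]{20,}\b/,
  openai: /\bsk-[A-Za-z0-9_-]{20,}\b/,
};

// Returns the names of providers whose key patterns match the input.
function findExposedKeys(source: string): string[] {
  return Object.entries(KEY_PATTERNS)
    .filter(([, pattern]) => pattern.test(source))
    .map(([provider]) => provider);
}
```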

AI Governance

| Check | Description |
| --- | --- |
| Human-in-the-loop controls | Evaluates whether high-stakes AI decisions have human review mechanisms |
| Model versioning | Checks whether model versions are pinned and tracked |
| Audit logging | Evaluates whether AI interactions are logged for audit and debugging |
| Fallback mechanisms | Checks whether AI failures degrade gracefully |
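
The fallback mechanisms check looks for the pattern sketched below: wrapping an AI call so that errors and timeouts degrade to a non-AI default instead of breaking the request path. The helper name and timeout value are hypothetical, not part of the scanner.

```typescript
// Sketch of a graceful-degradation wrapper around an AI call. If the
// call rejects or exceeds the timeout, the caller gets a safe default.
async function withFallback<T>(
  aiCall: () => Promise<T>,
  fallback: T,
  timeoutMs = 5000,
): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("AI call timed out")), timeoutMs),
  );
  try {
    return await Promise.race([aiCall(), timeout]);
  } catch {
    // Degrade gracefully: a real system would also log the failure.
    return fallback;
  }
}
```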

Data Quality & Privacy

| Check | Description |
| --- | --- |
| Training data documentation | Checks for documentation of training data sources and lineage |
| PII handling | Evaluates whether PII is properly handled in AI pipelines |
| Data retention policies | Reviews data lifecycle practices in AI contexts |
| Consent signals | Checks for user consent mechanisms for AI-based personalization |
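
For the PII handling check, the kind of pipeline practice being evaluated looks roughly like this: redacting obvious identifiers before user text is forwarded to a model. Real pipelines use dedicated PII detection services; the two regexes here are simplified assumptions for illustration.

```typescript
// Illustrative PII redaction before text reaches an AI model.
// These patterns catch only the most obvious cases (emails, US SSNs).
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const US_SSN = /\b\d{3}-\d{2}-\d{4}\b/g;

function redactPii(text: string): string {
  return text.replace(EMAIL, "[EMAIL]").replace(US_SSN, "[SSN]");
}
```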

Regulatory Readiness

| Check | Description |
| --- | --- |
| EU AI Act alignment | Identifies high-risk AI use cases per EU AI Act categories |
| GDPR / data protection | Reviews AI data processing against GDPR Article 22 (automated decisions) |
| Transparency disclosures | Checks for user-facing AI disclosures and explanations |
| Model card documentation | Checks for model cards on internally deployed models |

Scan targets

The AI Readiness Scan works on both URL and repository targets:

  • URL: Probes the running application for AI-related security issues
  • Repository: Analyzes code for governance, data handling, and security practices

Configuration

```json
{
  "detectors": ["ai_readiness"],
  "detector_options": {
    "ai_readiness": {
      "check_security": true,
      "check_governance": true,
      "check_data_quality": true,
      "check_regulatory": true,
      "regulatory_frameworks": ["eu_ai_act", "gdpr"],
      "ai_providers": ["openai", "anthropic", "google"]
    }
  }
}
```
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| check_security | boolean | true | Run AI security checks |
| check_governance | boolean | true | Run AI governance checks |
| check_data_quality | boolean | true | Run data quality and privacy checks |
| check_regulatory | boolean | true | Run regulatory readiness checks |
| regulatory_frameworks | string[] | All | Frameworks to check against |
| ai_providers | string[] | Auto-detect | AI providers in use (for key detection) |
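
For example, to narrow a scan to the regulatory checks alone and limit them to GDPR (using the option names above):

```json
{
  "detectors": ["ai_readiness"],
  "detector_options": {
    "ai_readiness": {
      "check_security": false,
      "check_governance": false,
      "check_data_quality": false,
      "check_regulatory": true,
      "regulatory_frameworks": ["gdpr"]
    }
  }
}
```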

Findings

Example finding

```json
{
  "id": "finding_ai01",
  "detector": "ai_readiness",
  "severity": "high",
  "rule_id": "ai-sec-prompt-injection",
  "title": "Potential prompt injection vulnerability",
  "description": "Endpoint POST /api/chat passes raw user input directly into an AI model prompt without sanitization or guardrails.",
  "location": {
    "type": "code",
    "file": "src/api/chat.ts",
    "line": 47
  },
  "remediation": "Sanitize user input before including it in prompts. Implement input validation and consider using a prompt injection detection library. Never trust user-controlled content as instructions to your AI model.",
  "references": [
    "https://owasp.org/www-project-top-10-for-large-language-model-applications/",
    "https://github.com/OWASP/www-project-llm-ai-security"
  ]
}
```

Regulatory finding example

```json
{
  "id": "finding_ai02",
  "detector": "ai_readiness",
  "severity": "medium",
  "rule_id": "eu-ai-act-transparency",
  "title": "Missing AI transparency disclosure",
  "description": "Application uses AI-generated content in user-facing output but no disclosure to users was detected.",
  "location": {
    "type": "url",
    "url": "https://myapp.example.com/recommendations"
  },
  "remediation": "Add a clear disclosure to users that recommendations are AI-generated, per EU AI Act Article 50 transparency requirements.",
  "references": [
    "https://artificialintelligenceact.eu/article/50/"
  ]
}
```

Risk context

AI risk is an emerging field. The AI Readiness Scan provides:

  • Current best practice checks based on OWASP LLM Top 10, NIST AI RMF, and EU AI Act
  • Informational findings for areas that require human judgment
  • Roadmap checks that will become stricter as regulatory guidance matures

The AI Readiness Scan is particularly valuable if you are:

  • Integrating LLMs (GPT-4, Claude, Gemini) into your product
  • Building AI agents or autonomous workflows
  • Processing user data through AI models
  • Subject to EU AI Act requirements

OWASP LLM Top 10 coverage

| OWASP LLM | Check |
| --- | --- |
| LLM01 Prompt Injection | ✅ |
| LLM02 Insecure Output Handling | ✅ |
| LLM03 Training Data Poisoning | 🔜 |
| LLM04 Model Denial of Service | ✅ |
| LLM05 Supply Chain Vulnerabilities | ✅ |
| LLM06 Sensitive Information Disclosure | ✅ |
| LLM07 Insecure Plugin Design | ✅ |
| LLM08 Excessive Agency | ✅ |
| LLM09 Overreliance | ✅ |
| LLM10 Model Theft | 🔜 |