# AIRE: Critical Risk Detected — MR !32 Blocked
## Critical Risk Summary
**MR:** !32 — Feature [ Base App ] changes
**Author:** dayita.jerald
**Composite Risk Score:** 80.25/100
**Gate Decision:** BLOCKED
### Score Breakdown
| Agent | Score | Severity |
|-------------|-------|----------|
| Security | 82/100 | critical |
| Prompt | 92/100 | critical |
| Data Trust | 80/100 | critical |
| AI Governor | 66.6/100 | critical |
### Critical Findings
#### Security
- Hardcoded API Key in Git History: A hardcoded API key has been committed to git history across 5 files. This key is now permanently exposed in version control and must be treated as compromised.
- Unauthenticated API Endpoints: The API endpoints /diagnose, /chat, and /session/* have no authentication or authorization enforcement, exposing sensitive medical operations to the public.
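As a starting point for the authentication finding, the check below is a minimal sketch of a bearer-token guard that could sit in front of routes like `/diagnose` or `/chat`. The framework, function name, and token source are assumptions for illustration; they are not taken from the MR itself.

```python
import hmac
from typing import Optional

def is_authorized(auth_header: Optional[str], expected_token: str) -> bool:
    """Return True only if the Authorization header carries the expected bearer token.

    Hypothetical helper: the real service should wire an equivalent check into
    its routing layer so every sensitive endpoint is covered, not just some.
    """
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    supplied = auth_header[len("Bearer "):]
    # hmac.compare_digest performs a constant-time comparison, avoiding
    # timing side channels on the token check.
    return hmac.compare_digest(supplied, expected_token)
```

The exposed key in git history should also be rotated, since scrubbing commits alone does not un-compromise it.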
#### Prompt
- Unsafe System Prompt — Hallucination & Medical Diagnosis Instructions: The system prompt explicitly instructs the model to hallucinate responses when uncertain, make definitive medical diagnoses, and promise guaranteed recovery to users.
#### Data Trust
- Unaudited Dataset with Unconfirmed PHI: The dataset used by this application has not been audited for Protected Health Information (PHI). Compliance assessments for GDPR, HIPAA, and CCPA are all pending.
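Pending a formal audit, a crude automated pre-screen can at least flag records that obviously contain PHI-like patterns. The sketch below is illustrative only: the pattern set (US SSNs and phone numbers) is an assumption and is nowhere near the coverage a HIPAA/GDPR/CCPA review requires.

```python
import re
from typing import List

# Hypothetical pre-audit patterns; a real PHI review covers far more
# (names, addresses, MRNs, dates of birth, free-text clinical notes, ...).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_phi(record: str) -> List[str]:
    """Return the names of PHI-like patterns found in a text record."""
    return [name for name, pattern in PHI_PATTERNS.items()
            if pattern.search(record)]
```

A positive hit means the record needs human review; an empty result does not mean the record is clean.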
#### AI Governor
- EU AI Act High Risk Non-Compliance: This application is classified as High Risk under the EU AI Act (Annex III) and currently lacks FDA clearance, CE MDR certification, and MHRA approval. The system prompt directly violates the project's own internal AI governance policy.
### Required Actions
- [ ] Address all critical findings listed above
- [ ] Re-run the AIRE Report Generator after fixes
- [ ] Re-trigger the AIRE Risk Analyser to lift the block