Practitioner-led advisory board defining the gold standard for securing enterprise AI tooling.
Enterprises are deploying AI faster than security standards are evolving. AISECA is closing that gap with a practical, vendor-agnostic framework built by practitioners.
AISECA develops the AI Security Maturity Framework -- a three-tier model mapping practical security controls to NIST GenAI Risk Domains:
| Tier | Name | Focus |
|---|---|---|
| 01 | Define & Constrain | Foundational governance -- asset inventory, acceptable use policies, access controls, risk assessments |
| 02 | Enforce & Monitor | Operational enforcement -- real-time monitoring, compliance checks, prompt injection defence, DLP |
| 03 | Validate & Adapt | Continuous validation -- red-teaming, adversarial testing, feedback loops, adaptive refinement |
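As one concrete illustration of a Tier 2 ("Enforce & Monitor") control, a deployment might place a lightweight deny-list check in front of every LLM call. The sketch below is hypothetical and not drawn from the AISECA controls catalogue; the patterns and function name are illustrative assumptions, and a real prompt-injection defence would layer far more than pattern matching.

```python
import re

# Illustrative only: naive deny-list patterns of the kind a Tier 2
# enforcement gateway might check before forwarding a prompt to a model.
# These patterns are hypothetical examples, not AISECA-specified controls.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

def flag_prompt(text: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

In practice a control like this would emit a monitoring event rather than silently block, feeding the compliance checks and DLP telemetry that Tier 2 describes.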
| Repository | Description |
|---|---|
| framework | AI Security Maturity Framework specification |
| charter | Board charter, governance model, and membership guidelines |
| controls-catalog | Detailed catalogue of security controls across all three tiers |
| Period | Milestone | Status |
|---|---|---|
| Jan -- Feb 2026 | Board Charter & Scope | Complete |
| Feb -- Mar 2026 | Framework Design (Draft 1) | In Progress |
| Apr -- May 2026 | Validation & Refinement (Draft 2) | Upcoming |
| May -- Jun 2026 | Sign Off & Public Release (MVP 1.0) | Upcoming |
We are building this in the open. If you work in AI security, governance, or enterprise risk -- we want your input.
- Website: aiseca.org
- Join the Alliance: Apply on our website
- Discussions: GitHub Discussions