EU AI Act · NIST AI RMF · ISO 42001

Big 4 auditors charge $500K+
for a probabilistic opinion.
We deliver mathematical proof for $100K.

Formal Proof Certificates for AI Compliance — Not Benchmarks. Not Audits. Proofs.
EU AI Act conformity, NIST AI RMF evidence, and ISO 42001 continuous monitoring — all backed by machine-checkable Lean 4 theorems. When regulators ask for evidence, you hand them a theorem, not a PDF.

Contact for Enterprise → Try 3 Calls Free

Enterprise Pricing

Mathematical certainty for 1/5 the cost
of a probabilistic audit

Three tiers. All include signed, verifiable proof certificates — not expert opinions.

One-Time
$5K
one-time assessment
Single-System Formal Assessment
  • ✓ Full formal proof certificate
  • ✓ EU AI Act conformity report
  • ✓ NIST AI RMF evidence package
  • ✓ Machine-checkable theorem output
  • ✓ PDF summary for legal/audit use
Get Started →
Annual
$25K/yr
single-model · annual renewal
Single-Model Certification
  • ✓ Everything in one-time assessment
  • ✓ Monthly drift monitoring certificates
  • ✓ Annual re-certification included
  • ✓ ISO 42001 AIMS alignment
  • ✓ Audit vault — 12-month record
  • ✓ Email support within 48h
Contact for Pricing →
Best Value
Enterprise Annual
$100K/yr
full suite · continuous · unlimited systems
Full AI Act Conformity Suite
  • ✓ Continuous monitoring — all AI systems
  • ✓ Quarterly formal compliance reports
  • ✓ EU AI Act · NIST AI RMF · ISO 42001
  • ✓ Formal proof certificates on every check
  • ✓ Real-time drift alerts
  • ✓ Audit vault — unlimited retention
  • ✓ Co-authored regulatory submissions
  • ✓ Dedicated support — 4h response SLA
Mathematical certainty for 1/5 the cost of a Big 4 probabilistic audit.
Contact for Enterprise →
Also available pay-per-call: $0.04/check (EU AI Act, NIST RMF) · $0.01/check (DriftGuard) · 3 free calls to evaluate, no API key needed.
Get API key →

What's Included

Four products. One proof chain.

Each product ships a signed, verifiable proof certificate — not a score, not a rating, a mathematical statement that the bound holds.

📜 EU AI Act Conformity Certificate
Machine-checkable conformity for high-risk AI systems. Annex IV technical documentation auto-generated. Maps to specific Act articles.
  • ✓ Article 9 risk management evidence
  • ✓ Article 13 transparency documentation
  • ✓ Article 72 post-market monitoring
  • ✓ 3 checks free · no API key required
$0.04/check  ·  $998/mo continuous
🧭 NIST AI RMF Evidence API
Evidence mapped to all four NIST AI RMF core functions with formal bounds. Each call returns a signed evidence package.
  • ✓ GOVERN · MAP · MEASURE · MANAGE
  • ✓ Formal metric bounds with theorems
  • ✓ Trustworthy AI attribute mapping
  • ✓ 3 calls free · no API key required
$0.04/call  ·  All four RMF functions
📈 Continuous Compliance Monitoring
Weekly formal certificates delivered automatically. ISO 42001 AIMS alignment included. Audit vault with 12-month searchable record.
  • ✓ Weekly proof certificates by email
  • ✓ ISO 42001 AIMS alignment
  • ✓ Drift alerts on compliance degradation
  • ✓ First week free to evaluate
$198/mo  ·  $998/mo enterprise
🚨 DriftGuard Compliance
Model drift detection with formal PSI bounds, certified by the DRG-100-DriftBound theorem. Supports the post-market monitoring that EU AI Act Article 72 mandates for high-risk systems.
  • ✓ PSI drift score — formally bounded
  • ✓ PSI ≤ 0.20 · 95% confidence theorem
  • ✓ Downloadable compliance certificate
  • ✓ 3 checks free · no API key required
$0.01/check  ·  $198/mo continuous

API Usage Guide

Every endpoint. Full request/response.

All compliance endpoints return machine-checkable proof certificates — not scores or opinions. Try 3 calls free on any endpoint before purchasing an API key.

Compliance Check

EU AI Act · NIST RMF Full Check

Run a multi-framework compliance assessment in a single call. Returns an Annex IV documentation package, NIST AI RMF evidence set, and ISO 42001 gap analysis — all with signed theorems.

  • ✓ EU AI Act Articles 9, 13, 72 evidence
  • ✓ NIST GOVERN / MAP / MEASURE / MANAGE
  • ✓ ISO 42001 AIMS gap analysis
  • ✓ $0.04/check · 3 free · no API key needed
Get API Key → View Status
POST /v1/compliance/check
// Full multi-framework compliance assessment
{
  "system_id": "model-prod-v2",
  "system_type": "high_risk",
  "frameworks": ["eu_ai_act", "nist_rmf", "iso_42001"],
  "deployment_context": "credit_scoring"
}

// Response: signed proof package
{
  "certificate_id": "cert-a4f91c",
  "eu_ai_act": { "conformant": true, "articles": [9,13,72] },
  "nist_rmf": { "score": 0.87, "functions": "all" },
  "theorem": "CMP-100-ConformityBound",
  "expires": "2026-07-12"
}
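The full-check call above can be scripted with nothing beyond the standard library. A minimal client sketch; the host name and the Bearer authorization scheme are assumptions here, not documented values:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # hypothetical host -- substitute the real endpoint
API_KEY = "YOUR_API_KEY"             # obtained via "Get API key"

def build_check_payload(system_id: str, system_type: str,
                        frameworks: list[str], deployment_context: str) -> dict:
    """Assemble the POST /v1/compliance/check body shown above."""
    allowed = {"eu_ai_act", "nist_rmf", "iso_42001"}
    unknown = set(frameworks) - allowed
    if unknown:
        raise ValueError(f"unsupported frameworks: {sorted(unknown)}")
    return {
        "system_id": system_id,
        "system_type": system_type,
        "frameworks": frameworks,
        "deployment_context": deployment_context,
    }

def run_check(payload: dict) -> dict:
    """POST the payload and return the signed proof package as a dict."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/compliance/check",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_KEY}"},  # auth scheme assumed
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The builder validates framework names client-side, so a typo fails fast instead of burning a metered call.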
POST /v1/compliance/eu-ai-act
// EU AI Act-specific conformity certificate
{
  "system_id": "loan-decision-v3",
  "risk_category": "high_risk",
  "intended_purpose": "creditworthiness_assessment"
}

// Response: article-mapped evidence
{
  "conformant": true,
  "annex_iv_doc": "<generated technical documentation>",
  "article_evidence": {
    "art_9": "risk_management_passed",
    "art_13": "transparency_documented",
    "art_72": "monitoring_active"
  },
  "certificate_url": "https://..."
}
EU AI Act Certificate

Article-by-article conformity mapping

Dedicated endpoint for EU AI Act compliance. Auto-generates Annex IV technical documentation and maps evidence to specific articles (9, 13, 72). Designed to serve as pre-audit evidence for notified bodies.

  • ✓ Annex IV technical documentation — auto-generated
  • ✓ Article 9 risk management system evidence
  • ✓ Article 13 transparency obligations fulfilled
  • ✓ Article 72 post-market monitoring proof
  • ✓ $0.04/check · 3 free · no API key needed
Get API Key →
Fairness Proof

Formal bias bounds for high-risk AI systems

Certify that your model's disparate impact ratio stays within formally bounded limits. Addresses EU AI Act obligations for high-risk systems in credit, hiring, and benefits contexts. Returns machine-checkable fairness certificates.

  • ✓ Disparate impact ratio formally bounded
  • ✓ Protected group analysis (gender, race, age)
  • ✓ Fairness theorem: FNS-100-FairnessBound
  • ✓ Acceptable threshold: disparate impact ratio ≥ 0.80 (four-fifths rule)
  • ✓ $0.04/check · 3 free · no API key needed
Get API Key →
POST /v1/compliance/fairness
// Prove fairness bounds across groups
{
  "model_id": "loan-classifier-v3",
  "dataset_id": "eval-2026-q1",
  "protected_attrs": ["gender", "age_group"],
  "metric": "disparate_impact"
}

// Response: formal fairness certificate
{
  "fair": true,
  "disparate_impact": 0.91,
  "worst_group_delta": 0.04,
  "theorem": "FNS-100-FairnessBound",
  "certificate_id": "fns-7c3a1f"
}
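The disparate impact figure in the response is easy to sanity-check locally. A sketch of the standard metric (the ratio of the lowest group selection rate to the highest), not the service's internal implementation:

```python
def disparate_impact(selection_rates: dict[str, float]) -> tuple[float, bool]:
    """Ratio of the lowest group selection rate to the highest.

    The four-fifths rule treats a ratio of at least 0.80 as acceptable.
    """
    rates = selection_rates.values()
    ratio = min(rates) / max(rates)
    return ratio, ratio >= 0.80

# illustrative approval rates per protected group (not real data)
ratio, fair = disparate_impact({"group_a": 0.62, "group_b": 0.56})
# ratio ≈ 0.90, fair == True
```

This also explains why the sample response with disparate_impact 0.91 is certified fair: 0.91 clears the 0.80 floor.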
POST /v1/compliance/explain
// Generate explainability certificate
{
  "model_id": "credit-scorer-v2",
  "decision_id": "dec-994f",
  "subject_context": "loan_denial"
}

// Response: formally bounded explanation
{
  "explanation": "Decision driven by debt_ratio (0.61) and employment_months (8)",
  "top_features": [
    { "name": "debt_ratio", "weight": 0.61 },
    { "name": "employment_months", "weight": 0.28 }
  ],
  "theorem": "XPL-100-ExplainBound",
  "human_readable": true
}
Explainability Certificate

Machine-verifiable explanations for every decision

EU AI Act Article 13 mandates transparency for high-risk AI decisions. Our explainability endpoint produces a formally bounded feature attribution backed by the XPL-100-ExplainBound theorem — regulators and data subjects can verify the explanation independently.

  • ✓ Feature attribution with formal bounds
  • ✓ Human-readable + machine-checkable output
  • ✓ GDPR Article 22 right-to-explanation ready
  • ✓ $0.04/call · 3 free · no API key needed
Get API Key →
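A client can render the response's top_features array into its own human-readable line. An illustrative formatter (it prints attribution weights; the sample response's string appears to interleave a raw feature value instead):

```python
def render_explanation(top_features: list[dict], decision: str) -> str:
    """Turn ranked feature-attribution weights into a one-line explanation."""
    ranked = sorted(top_features, key=lambda f: f["weight"], reverse=True)
    parts = [f"{f['name']} ({f['weight']})" for f in ranked]
    return f"{decision} driven by " + " and ".join(parts)

line = render_explanation(
    [{"name": "debt_ratio", "weight": 0.61},
     {"name": "employment_months", "weight": 0.28}],
    "Decision",
)
# → "Decision driven by debt_ratio (0.61) and employment_months (0.28)"
```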
Data Lineage Proof

Tamper-evident audit chain for training data

Prove the provenance and integrity of every dataset used to train or evaluate your model. Each lineage proof is hash-chained — any tampering breaks the chain. Supports the NIST AI RMF MAP function and ISO 42001 Clause 8.

  • ✓ SHA-256 hash chain across all data stages
  • ✓ Source, transform, and validation stages
  • ✓ LIN-100-ChainIntegrity theorem
  • ✓ Chain-of-custody certificate for regulators
  • ✓ $0.04/call · 3 free · no API key needed
Get API Key →
POST /v1/compliance/lineage
// Prove data provenance chain
{
  "model_id": "fraud-detector-v1",
  "dataset_stages": [
    { "stage": "source", "hash": "sha256:a3f9..." },
    { "stage": "cleaned", "hash": "sha256:b72c..." },
    { "stage": "validated", "hash": "sha256:cc1d..." }
  ]
}

// Response: integrity certificate
{
  "chain_intact": true,
  "stages_verified": 3,
  "theorem": "LIN-100-ChainIntegrity",
  "lineage_cert": "cert-lin-9e4f"
}
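The tamper-evidence property is simple to demonstrate. This sketch assumes one plausible chaining construction (SHA-256 of the previous link concatenated with the next stage hash); the service's actual scheme is not documented here:

```python
import hashlib

def chain_digest(stage_hashes: list[str]) -> str:
    """Fold stage hashes into one tamper-evident digest.

    Assumed construction: link_i = SHA-256(link_{i-1} || hash_i).
    Editing any stage changes every subsequent link, so the final
    digest no longer matches the recorded one.
    """
    link = b""
    for h in stage_hashes:
        link = hashlib.sha256(link + h.encode()).digest()
    return link.hex()

# illustrative (truncated) stage hashes, as in the request above
original = chain_digest(["sha256:a3f9", "sha256:b72c", "sha256:cc1d"])
tampered = chain_digest(["sha256:XXXX", "sha256:b72c", "sha256:cc1d"])
assert original != tampered  # any edited stage breaks the chain
```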
Oversight Log — OVS-100

Log human oversight events with formal attestation

Record every human-in-the-loop review with a timestamp and cryptographic signature. Supports EU AI Act Article 14 (human oversight) and the NIST AI RMF GOVERN function.

// POST /v1/compliance/oversight
{ "system_id": "model-prod",
  "reviewer_id": "human-rev-01",
  "decision": "approved",
  "notes": "Output reviewed and validated" }

// Response
{ "log_id": "ovs-3a1f",
  "timestamp": "2026-04-12T14:00:00Z",
  "theorem": "OVS-100-OversightAttestation" }
$0.02/event  ·  3 free  ·  Get Key →
Oversight History — OVS-101

Query the full human oversight audit log

Retrieve paginated oversight history for any AI system. Returns a signed chronological log for audit submissions. Filterable by date range, reviewer, and decision type.

// GET /v1/compliance/oversight/history?system_id=...
// Optional: ?from=2026-01-01&to=2026-04-12&reviewer=human-rev-01

// Response
{ "total": 47,
  "events": [
    { "log_id": "ovs-3a1f",
      "decision": "approved",
      "timestamp": "2026-04-12T14:00:00Z" }
  ],
  "audit_chain_valid": true }
$0.02/query  ·  3 free  ·  Get Key →
Incident Report — INC-100

Formally record serious incidents under EU AI Act

EU AI Act Article 73 requires providers of high-risk AI systems to notify authorities of serious incidents. Log and timestamp each incident with a signed formal record and severity classification.

// POST /v1/compliance/incident
{ "system_id": "model-prod",
  "severity": "serious",
  "description": "Biased output on protected group",
  "affected_users": 12 }

// Response
{ "incident_id": "inc-8b4c",
  "timestamp": "2026-04-12T14:22:00Z",
  "notification_due": "2026-04-27T14:22:00Z",
  "theorem": "INC-100-IncidentBound" }
$0.02/report  ·  3 free  ·  Get Key →
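The notification_due in the sample response falls 15 days after the report timestamp, matching the default Article 73 reporting window. A sketch of that deadline computation:

```python
from datetime import datetime, timedelta

def notification_due(reported_at: str, days: int = 15) -> str:
    """Default Article 73 window: 15 days from the incident report."""
    ts = datetime.fromisoformat(reported_at.replace("Z", "+00:00"))
    due = ts + timedelta(days=days)
    return due.isoformat().replace("+00:00", "Z")

notification_due("2026-04-12T14:22:00Z")
# → "2026-04-27T14:22:00Z"
```

Article 73 sets shorter windows for some incident classes; the `days` parameter is left adjustable for that reason.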
Incident Registry — INC-101

List and export your incident history

Query all recorded incidents for a system. Exports a signed incident registry suitable for regulatory submission. Filterable by severity, date, and resolution status.

// GET /v1/compliance/incidents?system_id=model-prod
// Optional: ?severity=serious&from=2026-01-01

// Response
{ "total": 3,
  "incidents": [
    { "incident_id": "inc-8b4c",
      "severity": "serious",
      "resolved": false }
  ],
  "registry_signed": true }
$0.02/query  ·  3 free  ·  Get Key →
Transparency Report

Auto-generated compliance transparency report

Generate a complete transparency report for any AI system, ready for board, regulator, or public disclosure. Covers decision distribution, fairness metrics, drift history, and oversight summary — all with signed theorem citations.

  • ✓ Decision distribution and outcome metrics
  • ✓ Fairness section with group-level breakdown
  • ✓ Drift summary + DriftGuard certificate link
  • ✓ PDF-ready + machine-readable JSON
  • ✓ $0.08/report · 3 free · no API key needed
Get API Key →
POST /v1/compliance/transparency
// Generate transparency report
{
  "system_id": "loan-scorer-v3",
  "period": "2026-Q1",
  "include": ["fairness", "drift", "oversight"]
}

// Response: full signed report
{
  "report_id": "trp-d1c7a9",
  "period": "2026-Q1",
  "total_decisions": 142830,
  "fairness_score": 0.89,
  "drift_certificates": 12,
  "oversight_events": 47,
  "theorem": "TRP-100-TransparencyBound",
  "pdf_url": "https://..."
}
POST /v1/drift/check
// Check for model drift with PSI bounds
{
  "model_id": "model-prod-v2",
  "reference_window": "2026-01",
  "current_window": "2026-04"
}

// Response: PSI drift with formal bound
{
  "drift_score": 0.08,
  "within_bound": true,
  "psi_bound": 0.20,
  "confidence": 0.95,
  "theorem": "DRG-100-DriftBound",
  "certificate_id": "drg-5a2b"
}

// GET /v1/drift/certificate?id=drg-5a2b
// Returns: downloadable signed PDF certificate
DriftGuard Monitor

Continuous drift detection with downloadable certificate

EU AI Act Article 72 mandates post-market monitoring. DriftGuard tracks Population Stability Index (PSI) across rolling windows and issues a signed certificate proving your model's behavior remains within the bound stated by the DRG-100-DriftBound theorem. Download the certificate to include in regulatory submissions.

  • ✓ PSI drift score — formally bounded ≤ 0.20
  • ✓ 95% confidence theorem: DRG-100-DriftBound
  • ✓ Downloadable signed PDF certificate (/v1/drift/certificate)
  • ✓ Real-time alerts on bound violation
  • ✓ $0.01/check · 3 free · no API key needed
Get API Key → View Endpoint
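PSI itself is straightforward to reproduce from binned score distributions. A reference sketch of the metric DriftGuard bounds (the binning strategy and the zero-bin smoothing constant are assumptions):

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index over matching bins.

    PSI = sum((a_i - e_i) * ln(a_i / e_i)); eps guards empty bins.
    A common rule of thumb: PSI <= 0.20 indicates no significant drift.
    """
    assert len(expected) == len(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

# reference window vs. a slightly shifted current window
ref = [0.25, 0.25, 0.25, 0.25]
cur = [0.30, 0.25, 0.25, 0.20]
# psi(ref, cur) ≈ 0.02, well inside the 0.20 bound
```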

Framework Coverage

One API. Three frameworks.

All products map to EU AI Act, NIST AI RMF, and ISO 42001 simultaneously — no separate integrations.

🇪🇺
EU AI Act
Articles 9, 13, 72 · High-risk system conformity · Annex IV technical documentation · Notified body evidence
🇺🇸
NIST AI RMF
Govern / Map / Measure / Manage · Trustworthy AI attributes · Organizational accountability mapping
🌐
ISO 42001
AIMS documentation · Continual improvement evidence · Clause 6 risk assessment · Clause 9 performance evaluation

When regulators ask for proof

Hand them a theorem.
Not a PDF.

Custom deployment, white-glove integration, dedicated rate limits, and co-authored compliance documentation available for enterprise customers. 24h response guaranteed.

Contact for Enterprise → Try 3 Calls Free

atomadic@proton.me  ·  Response within 24h  ·  Custom SLAs available