Prometheno  ·  For clinical leadership
01 — Premise
For clinical leadership

Governance, built in.

Audit logs assembled by hand. Cohort approval cycles restarted from zero. Consent revocations that never propagate. Prometheno’s HAVEN consent registry and hash-chained audit log end all three — automatically, on every researcher query. When the IRB asks, the chain is ready.

02 — What hurts

Three pains every research-active service knows.

01

Audit assembled by hand

Every IRB request, every regulator inquiry kicks off a multi-week PDF compilation job. Audit logs sit scattered across the EHR, email, and manual spreadsheets. The chain isn’t broken — it doesn’t exist as a chain.

02

Every cohort restarts the cycle

IRB amendment. DUA renewal. IT access provisioning. Each new study runs the full bureaucratic gauntlet. The infrastructure that should compose, doesn’t.

03

Consent revocations that don’t propagate

Patient changes their mind. The downstream cohort doesn’t know. The published study includes records the patient already withdrew. There is no system-wide enforcement — only a paper trail and good intentions.

03 — What I built

HAVEN consent + audit, built into the Prometheno platform. VERITAS and the published benchmark sit on top — for when AI agents enter the workflow.

Open protocol · the foundation

HAVEN consent + audit

Built into the Prometheno backend, automatic on every query
Programmable, immediately revocable consent. An append-only, hash-chained, Ed25519-signed audit trail. Every researcher access is consent-scoped at query time and recorded in a tamper-evident log. The chain you hand to the IRB exists by default.
Spec: v2.0 draft, CC BY 4.0
Citation: DOI 10.5281/zenodo.18701303
Tests: 27 passing in Prometheno backend
Solves: Audit assembly, consent revocation propagation, IRB-readable provenance
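The hash-chain idea behind that audit trail can be sketched in a few lines. This is an illustrative model, not the HAVEN API: assume each entry’s hash covers the previous entry’s hash plus a canonical JSON encoding of the entry itself (the Ed25519 signing step is omitted here).

```python
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash: str, entry: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) so the same entry
    # always hashes identically.
    payload = prev_hash + json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, entry: dict) -> None:
    # Append-only: each record commits to everything before it.
    prev = chain[-1]["this_hash"] if chain else GENESIS
    chain.append({**entry, "prev_hash": prev,
                  "this_hash": entry_hash(prev, entry)})

def verify(chain: list) -> bool:
    # Recompute every link; any edit to an earlier entry breaks the chain.
    prev = GENESIS
    for rec in chain:
        body = {k: v for k, v in rec.items()
                if k not in ("prev_hash", "this_hash")}
        if rec["prev_hash"] != prev or rec["this_hash"] != entry_hash(prev, body):
            return False
        prev = rec["this_hash"]
    return True

chain = []
append(chain, {"action": "cohort_build", "scope": "clinical_data"})
append(chain, {"action": "record_read", "scope": "clinical_data"})
assert verify(chain)
chain[0]["scope"] = "billing"   # tamper with an earlier entry
assert not verify(chain)       # verification detects it
```

Because every `this_hash` feeds the next entry, rewriting history means recomputing every hash that follows — which the signatures in the real trail would then expose.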
Reference platform

Prometheno

Where HAVEN runs · where Forge runs
PostgreSQL + OMOP CDM 6.0 backbone. HAVEN reference implementation built in. Forge (the researcher cohort builder) sits on top, every query passing through HAVEN consent + audit. One chain, end to end.
Backend: v0.8.0
HAVEN tests: 27 passing
Vocab: 270K Athena concepts
Solves: Single platform for cohort building + governance, no manual coordination across systems
Bonus · for AI agent governance

VERITAS

Deterministic runtime for AI in regulated environments
When AI agents enter your clinical workflow, VERITAS wraps them: policy-checked, capability-gated, output-verified, hash-chain audited. 7 healthcare reference scenarios shipped (drug interaction, prior auth, sepsis drift detection, etc.). Optional. Use it only if you’re deploying AI agents.
Tests: 131 passing
Stack: Rust 1.85+, Apache 2.0
Scenarios: 7 healthcare reference
Solves: AI agent governance, policy enforcement, model drift detection
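Capability gating, the core of that wrapper, is simple to picture. A hypothetical Python sketch (not the VERITAS Rust API; class and field names are invented): every tool call an agent proposes is checked against an explicit allowlist before execution, and both allows and denials are logged.

```python
class PolicyError(Exception):
    pass

class GatedRuntime:
    """Execute agent tool calls only when a capability grants them."""

    def __init__(self, capabilities: set):
        self.capabilities = capabilities
        self.audit = []  # every decision is recorded, allow or deny

    def call(self, tool: str, fn, *args):
        allowed = tool in self.capabilities
        self.audit.append({"tool": tool, "allowed": allowed})
        if not allowed:
            raise PolicyError(f"capability not granted: {tool}")
        return fn(*args)

rt = GatedRuntime({"drug_interaction_check"})

# Granted capability: the call runs and is audited.
result = rt.call("drug_interaction_check",
                 lambda a, b: "interaction: monitor INR",
                 "warfarin", "aspirin")

# Not granted: the call is refused before it executes, and still audited.
try:
    rt.call("write_ehr_order", lambda: None)
except PolicyError:
    pass

assert [e["allowed"] for e in rt.audit] == [True, False]
```

The point is structural: the agent never holds the tools directly, so a policy denial is a runtime guarantee, not a prompt instruction.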
Bonus · published evidence

VeritasBench

700-scenario benchmark · the first published AI agent governance measurement
Real GPT-4o-mini API calls across 5 governance dimensions. A bare LLM scores 81% on policy but 0% on traceability and 0% on controllability — measured, validated by multi-model consensus, and published with a DOI. Run it against your own AI system to measure your gap.
Scenarios: 700, 11 types
Citation: DOI 10.5281/zenodo.19403623
License: Apache 2.0
Solves: Quantifying the AI governance gap before you deploy
04 — Demo

How HAVEN runs underneath every query.

A researcher requests a cohort through Forge. HAVEN scope-checks consent, audits every access, and stays out of the researcher’s way. The chain you hand to IRB is built automatically — not compiled after the fact.

Step 01

Request

Researcher queries Forge for a diabetes cohort.

POST /api/cohorts/build
{
  "criteria": {
    "condition": "type 2 diabetes",
    "age_range": [40, 65],
    "med": "metformin"
  },
  "purpose": "outcome_study"
}
Step 02

Consent check

HAVEN registry filters to consented patients only.

HAVEN.verify_consent(
  patients=[...],
  scope="clinical_data",
  purpose="outcome_study"
)

→ 1,847 / 2,300 patients
   pass scope check
   (453 excluded: scope mismatch
    or revoked)
Step 03

Audit entry

Hash-chained record per access. Tamper-evident.

AUDIT entry #5247
  action: cohort_build
  scope: clinical_data
  purpose: outcome_study
  prev_hash: 0x57238bf3...
  this_hash: 0x9799d065...
  signed: ed25519:fa9c...
Step 04

IRB-ready

Replay the chain on demand. No compilation needed.

$ haven audit export
  --scope cohort_5247
  --format irb-pdf

→ chain verified ✓
→ 1,847 access entries
→ Merkle inclusion proofs
→ output: cohort_5247.pdf
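The Merkle inclusion proofs in that export are worth a sketch of their own (illustrative, not the HAVEN wire format): the tree root commits to every audit entry, and a logarithmic-size sibling path proves one entry is included without replaying the whole chain.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Root over all leaves, plus the sibling path for leaves[index]."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node on odd levels
            level.append(level[-1])
        sibling = index ^ 1               # sibling of current node
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify_inclusion(leaf, proof, root):
    node = h(leaf)
    for sibling, leaf_was_left in proof:
        node = h(node + sibling) if leaf_was_left else h(sibling + node)
    return node == root

entries = [b"entry-5244", b"entry-5245", b"entry-5246", b"entry-5247"]
root, proof = merkle_root_and_proof(entries, 3)
assert verify_inclusion(b"entry-5247", proof, root)
assert not verify_inclusion(b"entry-9999", proof, root)
```

An auditor holding only the root can check any single entry with a handful of hashes, which is what lets the IRB export verify 1,847 entries without shipping the raw log.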
— Bonus · if you deploy AI agents

AI agents introduce a different gap. Measured.

VeritasBench tested 5 approaches across 700 scenarios. A bare LLM scores 81% on policy but 0% on traceability and 0% on controllability. VERITAS closes that gap. Use it only when AI agents enter your workflow.

VeritasBench benchmark chart — bare LLM scores 0% on traceability and controllability while VERITAS-style architecture scores 92% and 90%
Source: VeritasBench v1 — 700 scenarios, GPT-4o-mini, multi-model consensus validated. Bonus tool, not required for HAVEN-native governance.
— About these examples

Code shown is illustrative of the API shape. Hash values are produced by the actual HAVEN reference implementation; specific values vary per scenario. ClinicClaw’s 91% in the chart reflects rules designed with knowledge of the scenario types.

05 — Where this scales

When this works at scale.

Audit assembly stops being a multi-week job. IRB cycles compress because the chain already exists. Consent revocations propagate system-wide, not by hand. Your service generates research output without generating compliance debt.

We sell enterprise subscriptions — per service line, per institution. HAVEN protocol stays open. Prometheno is the platform that runs it. VERITAS sits on top for the AI agent tier when you’re ready.