How it works

Three disciplines. Three surfaces. One evidence chain.

AI Assurance is not a consulting service: it is an architecture. Three engineering disciplines turn regulatory conformity into something executable, and three product surfaces deliver it to the right team at the right moment.

01 · The three disciplines

How we work under the hood

Three ways of working that move regulatory conformity out of a parallel spreadsheet and into the system that produces the outcomes. All three apply at all times — across the SDK, the gateway and the platform.

  1. GovOps — Governance integrated into every release

    Governance checks run automatically on every model release. A system that fails the regulatory threshold is not deployed — the block is technical and traceable, not a note in a meeting minute. It is the same pattern by which DevOps moved operations into the delivery chain and DevSecOps did the same with security: the discipline shifts from a parallel committee into the natural product cycle.

    Concrete example

    A model is not deployed if its fairness metric falls below the policy threshold. The delivery pipeline detects it and stops it automatically — same flow as a failing test, no parallel committee required.

  2. Conformity as Code — Versioned, reviewable controls

    Regulatory duties are expressed as versioned controls with measurable thresholds — the same concept that NIST OSCAL and ISO 42001 Annex A formalise. Each EU AI Act article (Arts. 9 to 15) becomes a concrete control the compliance team can review and approve before every release. Controls are comparable across overlapping frameworks, avoiding duplicate work between EU AI Act, ISO 42001, DORA or MDR.

    Concrete example

    EU AI Act Art. 10 is captured as a measurable threshold (the 80% rule across protected cohorts) in a versioned policy file the compliance team can review and sign off before every release.

  3. Evidence Engineering — Auditable proof as a natural by-product of the work

    Regulatory evidence is generated automatically on every release: data lineage, model inventory, performance metrics and execution traces. Each artefact is linked to the policy that governed it. The evidence portfolio that used to take weeks to compile by hand stays up to date on every commit and exports to open formats for auditors and regulators.

    Concrete example

    A single pipeline run emits the full artefact set (lineage, inventory, metrics, traces), each entry already linked to the policy in force, so the auditor-facing portfolio is current on every commit with no manual compilation pass.
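The gate pattern the three disciplines share can be sketched in a few lines. This is a minimal illustration, not the product's API: `CONTROLS`, `evaluate_release`, the metric names and the thresholds are hypothetical stand-ins for a real versioned policy file.

```python
# Minimal release-gate sketch. All names and thresholds are illustrative.
CONTROLS = [
    # (metric_key, operator, threshold, severity)
    ("disparate_impact", "gt", 0.8, "block"),
    ("accuracy_score", "gt", 0.7, "warn"),
]

def evaluate_release(metrics: dict) -> tuple[bool, list[str]]:
    """Return (deployable, findings) for one candidate release."""
    deployable, findings = True, []
    for metric_key, operator, threshold, severity in CONTROLS:
        value = metrics[metric_key]
        passed = value > threshold if operator == "gt" else value < threshold
        if not passed:
            findings.append(f"{metric_key}={value} fails {operator} {threshold} ({severity})")
            if severity == "block":
                deployable = False  # same flow as a failing test: the pipeline stops
    return deployable, findings
```

A model whose fairness metric falls below the policy threshold never reaches deployment; the finding is technical and traceable, like any other failing check.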

Cross-coverage

One control. N frameworks at once.

Regulatory overlap shows up as a matrix: every versioned control connects to several frameworks through its specific clause. Hover a row to see which frameworks it covers; hover a column to see which controls apply to it. Click any cell to open the official text.

| Control | Coverage | AI Act | ISO 42001 | DORA | MDR | GDPR | NIS2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Data fairness | 3/6 | Art. 10 | A.7.4 | — | — | Art. 5.1.a | — |
| Data lineage | 6/6 | Art. 12 | A.7.5 | Art. 9-11 | Annex II | Art. 30 | Art. 21 |
| Human oversight | 3/6 | Art. 14 | A.9 | — | — | Art. 22.3 | — |
| Risk management | 6/6 | Art. 9 | A.5 | Art. 6 | Annex I | Art. 35 | Art. 21.2 |
| Technical documentation | 4/6 | Art. 11 + Annex IV | A.6.2.4 | Art. 5 | Annex II | — | — |
| Pre-deployment gate | 4/6 | Art. 17 + 43 | A.6.2.5 | Art. 24-26 | Annex VII | — | — |

A cited clause marks direct coverage of the framework; "—" marks no direct anchor.
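The matrix behaves like data: each control maps to the clauses that anchor it, and the coverage figure next to each control can be derived instead of maintained by hand. The sketch below mirrors three rows of the matrix; the dictionary layout is an assumption for illustration, not the platform's schema.

```python
# Three rows of the cross-coverage matrix as plain data.
FRAMEWORKS = ["AI Act", "ISO 42001", "DORA", "MDR", "GDPR", "NIS2"]

COVERAGE = {
    "Data fairness":   {"AI Act": "Art. 10", "ISO 42001": "A.7.4", "GDPR": "Art. 5.1.a"},
    "Data lineage":    {"AI Act": "Art. 12", "ISO 42001": "A.7.5", "DORA": "Art. 9-11",
                        "MDR": "Annex II", "GDPR": "Art. 30", "NIS2": "Art. 21"},
    "Human oversight": {"AI Act": "Art. 14", "ISO 42001": "A.9", "GDPR": "Art. 22.3"},
}

def coverage_ratio(control: str) -> str:
    """The 'n/6' figure shown next to each control in the matrix."""
    return f"{len(COVERAGE[control])}/{len(FRAMEWORKS)}"
```

One control, several framework anchors: updating the mapping updates the coverage figure everywhere it is shown.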

02 · The three surfaces

One platform, three touchpoints.

The disciplines materialise in three surfaces that cover the full lifecycle: from the model's first change to inference in production and the handoff to the compliance team.

  • SDK

    AI engineering teams

    Conformity by Design in code.

    An open-source library that plugs into your training, evaluation and release flow. It captures data lineage, fairness and performance metrics, and inventories models without dictating architecture or locking you into a specific MLOps stack.

  • Gateway

    AI operations

    Every inference is checked against the active policy before returning the answer.

    The gateway applies conformity policies on every decision: blocks those that breach thresholds, escalates to human oversight where required and records signed evidence. Compatible with your current models — you decide what flows through.

  • Platform

    Compliance and leadership

    The control plane that ties everything together and presents it audit-ready.

    System catalogue, policies, aggregated evidence, Compliance Officer panel and report export in open formats for auditors and regulators. No lock-in: your data stays yours, in OSCAL.
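The gateway's per-inference check can be sketched as follows, under assumed names (`Rule`, `check_inference`): a breached blocking rule stops the answer outright, a breached escalation rule routes the decision to human oversight, and anything else passes through.

```python
# Hedged sketch of the gateway's per-inference policy check.
from dataclasses import dataclass

@dataclass
class Rule:
    metric_key: str
    threshold: float
    on_breach: str  # "block" or "escalate" — illustrative outcome names

def check_inference(decision_metrics: dict, policy: list[Rule]) -> str:
    """Evaluate one decision against the active policy before answering."""
    outcome = "allow"
    for rule in policy:
        if decision_metrics.get(rule.metric_key, 0.0) <= rule.threshold:
            if rule.on_breach == "block":
                return "block"    # the answer is never returned
            outcome = "escalate"  # routed to human oversight
    return outcome
```

In the real gateway each outcome would also be signed and written as evidence; the sketch keeps only the decision logic.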

Venturalítica platform — ISO/IEC 42001 Compliance panel

03 · The evidence chain

From a code change to the auditor's panel.

Each step signs its output and stays linked to the article or clause it covers. The chain is not rebuilt: it stays alive.

  1. Capture in code

    The SDK records lineage, metrics and model events at the exact moment they happen — no intermediate step.

  2. Verification in production

    The gateway compares each inference against the active policy, signs the outcome and writes an audit entry.

  3. Aggregation in the control plane

    The platform joins the facts from SDK and gateway into a single traceable chain, organised by clause and article.

  4. Audit hand-off

    The report exports as OSCAL 1.1.2 — the NIST canonical format. The auditor receives signed facts, no rewrite.
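The four steps can be sketched as a hash-linked chain in which each entry signs its body and carries the previous signature, so tampering anywhere breaks verification. HMAC-SHA256 stands in here for whatever signature scheme is actually used; `append_entry` and `verify` are illustrative names.

```python
# Sketch of a signed, linked evidence chain. The signing scheme is a stand-in.
import hashlib
import hmac
import json

KEY = b"demo-key"  # placeholder signing key, for illustration only

def append_entry(chain: list[dict], step: str, payload: dict, clause: str) -> list[dict]:
    """Sign one step's output and link it to the previous entry."""
    prev = chain[-1]["signature"] if chain else ""
    body = {"step": step, "clause": clause, "payload": payload, "prev": prev}
    sig = hmac.new(KEY, json.dumps(body, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    return chain + [{**body, "signature": sig}]

def verify(chain: list[dict]) -> bool:
    """Re-derive every signature; any tampered or reordered entry fails."""
    prev = ""
    for entry in chain:
        body = {k: entry[k] for k in ("step", "clause", "payload", "prev")}
        expect = hmac.new(KEY, json.dumps(body, sort_keys=True).encode(), hashlib.sha256).hexdigest()
        if entry["prev"] != prev or entry["signature"] != expect:
            return False
        prev = entry["signature"]
    return True
```

Because every entry embeds its predecessor's signature, the auditor verifies the chain instead of rebuilding it.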

Git-versioned policy — `assessment-plan` YAML (auditor-ready)

assessment-plan:
  metadata:
    title: "Credit scoring policy · Alpha Corp"
  reviewed-controls:
    control-implementations:
      - description: "Fairness and performance rules"
        implemented-requirements:
          - control-id: gender-disparate-impact
            description: "Gender fairness (Disparate Impact > 0.8)"
            props:
              - name: metric_key
                value: disparate_impact
              - name: operator
                value: gt
              - name: threshold
                value: "0.8"
              - name: severity
                value: block
              - name: "input:dimension"
                value: gender
          - control-id: age-disparate-impact
            description: "Age fairness (Disparate Impact > 0.5)"
            props:
              - name: metric_key
                value: disparate_impact
              - name: operator
                value: gt
              - name: threshold
                value: "0.5"
              - name: severity
                value: block
              - name: "input:dimension"
                value: age
          - control-id: accuracy-floor
            description: "Minimum model utility (Accuracy > 70%)"
            props:
              - name: metric_key
                value: accuracy_score
              - name: operator
                value: gt
              - name: threshold
                value: "0.7"
              - name: severity
                value: warn
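The policy above can be evaluated mechanically. This sketch mirrors the three controls as a Python literal instead of parsing the YAML, to stay dependency-free; the names `RULES` and `assess` are illustrative. The semantics follow the file: operator `gt` means the metric must exceed the threshold, severity `block` stops the release, `warn` only flags it.

```python
# The three controls above, mirrored as data for a dependency-free sketch.
RULES = [
    {"control-id": "gender-disparate-impact", "metric_key": "disparate_impact",
     "dimension": "gender", "threshold": 0.8, "severity": "block"},
    {"control-id": "age-disparate-impact", "metric_key": "disparate_impact",
     "dimension": "age", "threshold": 0.5, "severity": "block"},
    {"control-id": "accuracy-floor", "metric_key": "accuracy_score",
     "dimension": None, "threshold": 0.7, "severity": "warn"},
]

def assess(metrics: dict) -> dict:
    """metrics keys are (metric_key, dimension) tuples; applies operator 'gt'."""
    result = {"blocked": False, "warnings": []}
    for rule in RULES:
        value = metrics[(rule["metric_key"], rule["dimension"])]
        if not value > rule["threshold"]:            # operator "gt" from the props
            if rule["severity"] == "block":
                result["blocked"] = True             # release does not go out
            else:
                result["warnings"].append(rule["control-id"])
    return result
```

A release that breaches either fairness control is blocked outright; a release that only misses the accuracy floor goes out with a recorded warning.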

Next step

Want to see how this fits your organisation?

In 3 minutes you know where you stand; in 20 minutes we walk it with you.