Attested Intelligence

AI Attestation

Cryptographic verification for AI models, agents, and pipelines. Establish provenance, verify integrity, and prove compliance for AI artifacts.

Provenance vs Attestation vs Signing

Provenance

Tracks origin and history. Answers: Where did this come from? What transformations occurred?

Descriptive

Signing

Cryptographic approval. Answers: Who approved this? When was it signed?

Verification point

Attestation

Signing + provenance + policy + chain. Proves origin, integrity, compliance, and history with verifiable evidence.

Complete verification

What Can Be Attested

Model Weights

Full model files or delta updates with cryptographic binding to training lineage

Training Runs

Hyperparameters, compute resources, dataset references, and training metrics

Datasets

Training data with lineage tracking, filtering criteria, and preprocessing steps

Inference Logs

Model inputs, outputs, and decisions for audit and compliance verification

Agent Pipelines

Multi-model orchestration with policy bindings and execution traces

Software Artifacts

Build outputs, containers, and deployable packages with supply-chain attestation

When to Attest

Training Completion

Seal model weights and training metadata as a baseline

Fine-tuning / Alignment

Attest delta changes with reference to base attestation

Pre-deployment

Verify chain integrity before production release

Runtime (periodic)

Generate inference attestations for audit trail
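The attestation points above form a chain: each new receipt's `chain_prev` field commits to the hash of the receipt before it (baseline → fine-tune → pre-deployment → runtime). A minimal sketch, with receipts reduced to the fields needed to show the linkage (a real record carries the full evidence schema):

```python
import hashlib
import json

def receipt_hash(receipt: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) so the hash is
    # stable regardless of how the dict was built.
    blob = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.blake2b(blob, digest_size=32).hexdigest()

def append_receipt(chain: list, artifact_hash: str, artifact_type: str) -> dict:
    """Add a receipt whose chain_prev commits to the previous receipt."""
    receipt = {
        "artifact_hash": artifact_hash,
        "artifact_type": artifact_type,
        "chain_prev": receipt_hash(chain[-1]) if chain else None,
    }
    chain.append(receipt)
    return receipt
```

Because every link commits to the whole prior receipt, rewriting any earlier entry changes its hash and breaks every link after it.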

What Evidence Looks Like

Attestation Artifact

{
  "protocol_version": "1.0",
  "artifact_hash": "a7f3c9b2e1d4...(BLAKE2b-256)",
  "signature": "Ed25519 signature...",
  "timestamp": "2025-01-15T10:30:00Z",
  "artifact_type": "MODEL_WEIGHTS",
  "metadata": {
    "name": "compliance-model-v2",
    "version": "2.0.1",
    "issuer": "training-pipeline"
  },
  "policy_ref": "policy-artifact-hash...",
  "chain_prev": "previous-receipt-hash..."
}
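Verifying continuity over records like the one above means walking the `chain_prev` links and recomputing each predecessor's hash. A sketch under the same canonical-JSON assumption as the record format (function names are illustrative):

```python
import hashlib
import json

def verify_chain(receipts: list) -> bool:
    """Check that each receipt's chain_prev matches its predecessor's hash."""
    def rhash(r: dict) -> str:
        blob = json.dumps(r, sort_keys=True, separators=(",", ":")).encode()
        return hashlib.blake2b(blob, digest_size=32).hexdigest()

    for prev, cur in zip(receipts, receipts[1:]):
        if cur.get("chain_prev") != rhash(prev):
            return False  # broken or tampered link
    return True
```

A full verifier would also check each record's Ed25519 signature against the issuer's public key; that step needs a signature library and is omitted here.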

Claims Status

Feature                   | Status      | Evidence
Model weight attestation  | Implemented | schema/v1
Training run metadata     | Implemented | schema/v1
Inference log attestation | Specified   | Protocol spec
Dataset provenance        | Specified   | Protocol spec
Agent execution traces    | Roadmap     | Planned

Frequently Asked Questions

What is AI attestation?

AI attestation is the cryptographic verification of AI artifacts including models, training runs, datasets, and inference logs. It establishes who created an AI artifact, what inputs and processes were used, and provides tamper-evident proof that the artifact hasn't been modified since attestation.

What is the difference between AI provenance and AI attestation?

AI provenance tracks the history and origin of AI artifacts—where models came from, what data trained them, how they were transformed. AI attestation takes provenance further by cryptographically binding this history into verifiable, tamper-evident records. Provenance describes; attestation proves.

What is the difference between attestation and signing?

Signing proves who approved an artifact at a point in time. Attestation includes signing but adds structured metadata about what is being attested, policy compliance status, and linkage to the continuity chain. Attestation is signing with semantic context and verifiable history.

What AI artifacts can be attested?

Attested Intelligence supports attestation of: model weights (full or delta), training datasets and data lineage, training run metadata (hyperparameters, compute used), fine-tuning and RLHF records, inference logs and outputs, and agent execution traces.

When should AI artifacts be attested?

Key attestation points include: after training completion, after fine-tuning or alignment, before deployment to production, at regular intervals during inference (for audit), and after any model modification or update.

What evidence is included in an AI attestation?

AI attestations include: BLAKE2b-256 hash of the artifact, Ed25519 signature, RFC 3339 timestamp, artifact type classification, optional metadata (name, version, issuer), policy artifact reference if applicable, and chain linkage for continuity verification.
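Assembling those fields into a record might look like the following sketch (field names follow the evidence example above; the helper and its defaults are illustrative, and signature generation is assumed to happen elsewhere):

```python
from datetime import datetime, timezone

def make_attestation(artifact_hash, artifact_type, signature,
                     metadata=None, policy_ref=None, chain_prev=None):
    """Assemble the evidence fields into one attestation record."""
    record = {
        "protocol_version": "1.0",
        "artifact_hash": artifact_hash,   # BLAKE2b-256 of the artifact
        "signature": signature,           # Ed25519, produced by the issuer
        # RFC 3339 timestamp in UTC, e.g. "2025-01-15T10:30:00Z"
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "artifact_type": artifact_type,
    }
    # Optional fields are omitted rather than set to null.
    if metadata:
        record["metadata"] = metadata
    if policy_ref:
        record["policy_ref"] = policy_ref
    if chain_prev:
        record["chain_prev"] = chain_prev
    return record
```

Omitting absent optional fields (rather than emitting nulls) keeps the canonical serialization, and therefore the receipt hash, unambiguous.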

How does attestation help with AI regulation compliance?

Emerging AI regulations require transparency about model provenance and training data. Attestation provides the cryptographic evidence needed for compliance—verifiable records that can be audited by regulators, retained for liability purposes, and proven to third parties.

Can attestation prevent model poisoning?

Attestation doesn't prevent poisoning but enables detection. By establishing a verified baseline at training completion, any subsequent modification would break the attestation chain. Combined with supply-chain provenance, it creates defense-in-depth against model tampering.

Related Resources

Glossary: AI Provenance
Comparison: Attestation Methods
Technical: Technology Overview
Mechanism: Continuity Chain