Algorithmic Responsibility Record (ARR)
Pre-incident accountability for algorithmic decision systems
The Algorithmic Responsibility Record (ARR) is a structured, pre-incident record that documents who accepted responsibility for an algorithmic decision, under what constraints, and with what residual risk.
When something goes wrong, institutions are judged on what they recorded before deployment — not on what they can explain afterward.
The problem ARR addresses
Most AI governance efforts focus on model performance, post-hoc explanation, ethical principles, or compliance checklists. These approaches routinely fail under regulatory, legal, or investigative scrutiny because they do not answer the first questions asked after an incident:
- Who decided this system could be used?
- What was the system explicitly allowed to do?
- Who could override it?
- What risks were knowingly accepted?
- Where was responsibility formally recorded?
In many organisations, the honest answer to that last question is simple: it wasn't recorded anywhere. ARR makes that absence visible.
What an ARR records
- Decision identity
- System provenance (model, version, dependencies)
- Decision boundaries (permitted and prohibited actions)
- Human oversight and escalation paths
- Explicit risk acceptance, including residual risk
- Override authority and conditions
- Formal attestation of accountability
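To make these fields concrete, here is a minimal sketch of how such a record could be captured as a structured object. The field names, types, and the `Attestation` structure are illustrative assumptions, not the schema defined by the ARR Practitioner Standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class Attestation:
    """Formal acceptance of accountability by a named individual (illustrative)."""
    name: str
    role: str
    statement: str        # e.g. "I accept residual risk R-2 for this deployment"
    signed_at: datetime


@dataclass(frozen=True)
class AlgorithmicResponsibilityRecord:
    """Illustrative pre-incident record; field names are assumptions, not the standard."""
    # Decision identity: which algorithmic decision this record covers
    decision_id: str
    decision_description: str

    # System provenance: model, version, dependencies
    model_name: str
    model_version: str
    dependencies: list[str]

    # Decision boundaries: permitted and prohibited actions
    permitted_actions: list[str]
    prohibited_actions: list[str]

    # Human oversight and escalation paths
    oversight_roles: list[str]
    escalation_path: list[str]

    # Explicit risk acceptance, including residual risk
    accepted_risks: list[str]
    residual_risk_statement: str

    # Override authority and conditions
    override_authority: str
    override_conditions: list[str]

    # Formal attestation of accountability
    attestations: list[Attestation]
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

The point of the sketch is that every field is filled in before deployment and attributed to a named person; the record is then immutable, which is why it is shown as a frozen dataclass.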
Current status
ARR Practitioner Standard v0.1 is published as a public working draft. Early paid pilot engagements are underway with organisations operating under regulatory pressure.
ARR is intentionally narrow. Its power comes from restraint.
Stewardship
ARR is developed and stewarded by Nojan Jadidi, whose work focuses on how organisations record, assign, and survive responsibility for algorithmic decisions, before something goes wrong.