AI Governance · Agentic Systems · Regulated Environments
Where agentic AI meets institutional reality.
I've spent 15 years building and leading AI, data, and cloud programs across defense, federal, and health portfolios — environments where production means classified networks, long compliance cycles, and systems of record that don't tolerate ambiguity.
I currently lead a Data & AI practice in the defense sector, where I own governance, delivery standards, and resourcing across programs. I serve as the primary executive interface for senior government clients, and I've grown early-stage initiatives into multi-year funded programs by converting ambiguous problem sets into approaches that survive procurement, audit, and operational reality.
I work at the seam where AI capability meets institutional constraint. As systems move from advisory to autonomous — writing to EHRs, executing payments, modifying access controls — the central question shifts from "Is the model accurate?" to "Is the system governable?" That's the problem I solve in production, and it's the problem I write about.
I published the Human-in-Command governance framework and MV-HIC — a proposed minimum evidence receipt for audit-ready delegated action — because the failure modes I kept encountering in the field needed a name and a structural fix. The research isn't a pivot from operations — it's what comes from doing this work long enough to see the pattern.
Control is exercised at the boundary, not at the transaction.
Scapegoat-as-a-Service: Moving from Human-in-the-Loop to Human-in-Command in Regulated Systems
Jessee, R.T. (2026). SSRN Working Paper / Preprint. Revised February 2026.
When AI systems take consequential actions — payments, clinical orders, access changes — the audit trail shouldn't end at the name of the person who clicked "approve." This paper names the failure mode (Scapegoat-as-a-Service), proposes a governance architecture (Human-in-Command), and defines a proposed minimum evidence standard (MV-HIC) for systems executing consequential actions in regulated workflows.
What I Focus On
Governance
Who holds authority when agents act on behalf of organizations? I work on decision rights, autonomy boundaries, and escalation architectures that survive audit, incident review, and the question: "Who authorized this?"
Applied across defense, federal health, and financial services environments.
Evidence & Auditability
The MV-HIC evidence record — Intent, Inputs, Constraints, Action Preview — is the proposed minimum for any system executing consequential actions in a regulated environment. If a system can't produce these four artifacts on demand, it's incapable of command and should remain advisory.
From procurement evaluation to production gating.
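To make the four artifacts concrete, here is a minimal sketch of what an MV-HIC evidence receipt could look like as a data structure. The field names, example values, and `is_complete` check are illustrative assumptions for this page, not a normative schema from the paper.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class EvidenceReceipt:
    """Illustrative MV-HIC evidence record: the four artifacts a system
    should be able to produce on demand for a consequential action."""
    intent: str           # what the agent was asked to accomplish
    inputs: dict          # evidence the decision relied on, with provenance
    constraints: list     # autonomy boundaries in force at execution time
    action_preview: dict  # deterministic representation of what will change

    def is_complete(self) -> bool:
        # A receipt missing any artifact means the system stays advisory.
        return all([self.intent, self.inputs,
                    self.constraints, self.action_preview])

receipt = EvidenceReceipt(
    intent="Renew read-only access for contractor #4821",
    inputs={"ticket": "ACC-1042", "policy_version": "v3.2"},
    constraints=["no privilege escalation", "expires in 90 days"],
    action_preview={"grant": "read-only", "scope": "project-alpha",
                    "ttl_days": 90},
)
print(json.dumps(asdict(receipt), indent=2))
```

A receipt like this is what "produce on demand" means operationally: the record exists before the action executes, not as a reconstruction afterward.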
Operational Resilience
Overrides, fail-safes, circuit breakers, and recovery design for the day the system degrades. I build for outage days and audit days — not demo days.
Designed for environments where "move fast and break things" is not an option.
Who This Work Is For
For leaders: Stop asking "Is the AI accurate?" Start asking: "Is the system governable?" Accuracy drifts. Enforceable constraints, audit artifacts, and bounded authority are what prevent your institution from turning humans into liability endpoints.
For engineers and architects: Build the Action Preview and provenance first. If you can't deterministically represent what will change — and prove what evidence it relied on — you don't yet have a system that can be safely automated.
For regulators and auditors: Use MV-HIC as a minimum checklist. If the four evidence artifacts can't be produced on demand, the system should not be authorized for autonomous execution.
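One way to sketch "build the Action Preview first" in practice: canonicalize the intended change, bind the human approval to a digest of that exact preview, and refuse execution if the change drifts afterward. Everything here (function names, the JSON-plus-SHA-256 canonicalization) is an illustrative assumption, not a prescribed mechanism.

```python
import hashlib
import json

def action_preview(change: dict) -> str:
    """Deterministic digest of what will change: the same intended
    change always yields the same digest, so an approval binds to
    exactly what executes (illustrative sketch, not a standard)."""
    canonical = json.dumps(change, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def execute(change: dict, approved_digest: str) -> str:
    # Refuse to act if the change no longer matches what was approved.
    if action_preview(change) != approved_digest:
        raise PermissionError("change does not match approved preview")
    return f"executed: {change}"

change = {"grant": "read-only", "scope": "project-alpha", "ttl_days": 90}
digest = action_preview(change)   # reviewed and approved at this digest
print(execute(change, digest))
```

The design point is the ordering: if the preview cannot be computed deterministically before execution, there is nothing stable for an approval or an audit to attach to.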
Speaking & Engagements
I occasionally speak and advise on AI governance for regulated systems. I'm selective; the work is best suited to teams operating under audit, accreditation, or statutory accountability.
Topics:
- Scapegoat-as-a-Service: Why Human-in-the-Loop becomes liability transfer — and what replaces it
- The Dave Problem: How to diagnose performative oversight in your existing AI workflows
- The Control Plane Problem: Governing agentic systems when they write to your systems of record
- Procurement Guardrails for AI: Evaluating vendors against a minimum evidence standard
- Operational Resilience for Agentic Workflows: Circuit breakers, overrides, and recovery design
Format: Conference talk or keynote (25–45 min) · Executive briefing or workshop (60–90 min)
Audience: Security, governance, compliance, product, and operations leaders in regulated industries
Prior Speaking: National Cryptologic Foundation — Workforce Implications of Agentic AI (2025)
Background
Education
- Executive M.A., Global Affairs & Management — Arizona State University / Thunderbird School of Global Management
- B.S., Computer Science — University of Hawaiʻi
Certifications
- CISSP
- PMP
- PMI-ACP
Contact
For speaking engagements, advisory work, or research collaboration: hello@ryantjessee.com · LinkedIn
Personal site. Views are my own.