
Monday Morning Moves: The 10-Minute AI Inventory (MVAI)


Defensibility Readiness starts with knowing what AI you're actually running.

Cold Open: "The Meeting Nobody Wants"

Zoey walked into Monday's standup with the kind of weekend energy you only get after hate-watching a webinar that was way too correct.

"I'm just saying," she began, laptop open, eyes tired, "if someone asked us today what AI we're running, where it's deployed, and who owns it… we would answer with a vibe and a shrug."

Silence.

Felix didn't even look up. "We should definitely not go looking for problems we don't have."

Michael, the manager, made a face like Zoey had just suggested they audit happiness. "Inventory? That's… a bridge too far. We're not a government agency."

Dwight, already halfway out the door, sighed dramatically. "If we do this, I miss targets. If I miss targets, we miss bonuses. If we miss bonuses, morale collapses. I've seen it."

Zoey clicked to the next slide anyway. It was titled: 'AI Inventory' and it had exactly one bullet point: Stop being surprised in court.

Felix leaned back. "This sounds like compliance cosplay."

Michael tried the gentle redirect. "Couldn't we just… write a policy?"

Zoey stared at them. "A policy is not evidence. A policy is not a list. A policy is not a trail. It's a PDF with aspirations."

Dwight frowned. "I hate PDFs."

Zoey nodded. "Perfect. Then you're going to love this."

She flipped the slide.

'The 10-Minute AI Inventory Drill (MVAI)'

"Ten minutes," she said. "No committees. No platform rebuild. No 'strategy.' We do triage. We find what's running, assign owners, and capture enough to be defensible. That's it."

Felix squinted. "Ten minutes is suspiciously reasonable."

Zoey smiled. "Exactly. Which is why we'll actually do it."

And with that, the meeting finally had something it hadn't had in weeks:

Momentum.

[Image: Zoey draws the 4-Signal Sweep on a glass whiteboard while Felix and Michael watch, shifting from skepticism to clarity.]

Why MVAI? Because "We Have a Policy" Isn't Evidence

In 2026, the question isn't "do you have AI governance?" It's: can you prove what AI exists in your environment, who owns it, what data it touches, and what safeguards exist?

Organizations don't need a quarter-long program to start. They need a Minimum Viable AI Inventory that surfaces shadow AI, assigns accountability, and creates a trail that can be expanded. The MVAI framework delivers exactly that: AI risk management in ten minutes flat.

Zoey Pressure Test: "If a regulator, underwriter, or plaintiff's attorney asks you tomorrow what AI systems you operate, and you respond with 'let me check', you've already lost. Defensibility isn't a strategy deck. It's receipts."

What "10 Minutes" Actually Means

This is triage, not perfection. The goal is simple:

  • Surface likely AI usage quickly

  • Capture a defensible minimum set of fields

  • Assign owners and next actions for unknowns

No committees. No vendor demos. No "stakeholder alignment workshops." Just signal, capture, action.

Step 1: Define What Counts as "AI" (2 Minutes)

Don't debate semantics. Use a rule.

Count it as AI if it:

  • Uses ML/LLMs to generate, classify, recommend, rank, or decide (examples: content generators, recommendation engines, fraud scoring)

  • Enables "AI features" in vendor tools (examples: Copilot-style assistants, summarizers, auto-scoring)

  • Runs agentic workflows that can take actions (examples: auto-ticket creation, config changes, customer messaging bots)

If it makes decisions or takes actions without explicit human instruction each time, it's AI. Tag it.
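
If you want that rule in a form a script can apply, here's a minimal sketch in Python. The function name and behavior categories are illustrative, not part of the MVAI itself; adapt the lists to your environment.

```python
# Minimal sketch of the Step 1 rule. The function name and the behavior
# categories are illustrative; adjust them to your environment.
AI_BEHAVIORS = {"generate", "classify", "recommend", "rank", "decide"}

def counts_as_ai(behaviors, has_vendor_ai_features=False, is_agentic=False):
    """Tag a system as AI if it uses ML/LLMs for any listed behavior,
    exposes vendor "AI features", or runs agentic workflows that act
    without an explicit human instruction each time."""
    return bool(AI_BEHAVIORS & set(behaviors)) or has_vendor_ai_features or is_agentic

# A vendor summarizer with no custom model still counts: tag it.
print(counts_as_ai([], has_vendor_ai_features=True))  # True
```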

Step 2: Run the "4-Signal Sweep" (3 Minutes)

Pick 2–4 signals your organization already has access to. This catches the majority of shadow AI quickly.

  • Signal A: SSO / App Catalog. Where to look: Okta, Azure AD, SSO app lists. What to search: ChatGPT Enterprise, Claude, Gemini, Copilot, Notion AI, Zoom AI, Grammarly, Otter, Jasper

  • Signal B: Procurement / Expense. Where to look: AP systems, expense reports (last 60–90 days). What to search: "AI", "LLM", OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, Google Vertex AI

  • Signal C: Cloud Usage. Where to look: AWS, Azure, GCP consoles. What to search: enabled AI services, endpoints, API calls

  • Signal D: Repo / Config. Where to look: GitHub, GitLab, internal repos. What to search: openai, anthropic, bedrock, vertex, llm, prompt, rag, vector (see the sketch below)
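
Signal D is the easiest to automate. Below is a minimal sketch that walks local repo clones and flags files containing the keywords above. The root path and skip list are assumptions, and this is a plain keyword grep, not a full scanner.

```python
# Minimal sketch of the Signal D repo sweep: walk local repo clones and
# flag files that mention common AI/LLM keywords. The root path, skip
# list, and keywords are illustrative; swap in your own.
import os

KEYWORDS = ["openai", "anthropic", "bedrock", "vertex", "llm", "prompt", "rag", "vector"]
SKIP_DIRS = {".git", "node_modules", "venv"}

def sweep(root):
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip vendored and VCS directories to keep the sweep fast.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue
            found = [k for k in KEYWORDS if k in text]
            if found:
                hits.append((path, found))
    return hits

# "./repos" is an assumed location for your local clones.
for path, found in sweep("./repos"):
    print(path, "->", ", ".join(found))
```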

Zoey Pressure Test: "Seriously think about this one and how AI works. If your personal agentic army (mine is on Marblism.com) that you hired to help you with blog posts and social media or to manage your inbox or take meeting notes is in your expense reports but not in your AI inventory, congratulations, you've documented your own shadow AI problem for discovery."

Step 3: Capture the Defensible Minimum Fields (3 Minutes)

Use one form. Google Form, Teams form, Jira intake, doesn't matter. Keep it tight.

MVAI Fields (Copy/Paste Ready)

  • System / use case name: What is it called?

  • Business owner (name + org): Who benefits from it?

  • Technical owner (name + team): Who maintains it?

  • Built or bought? Vendor SaaS / API / internal model

  • Who's impacted? Internal / customer-facing / both

  • What it does: Generate / classify / recommend / rank / decide / agent actions

  • Data involved: None / internal / PII / PHI / financial / regulated

  • Status: PoC / pilot / production

  • Risk tier: Low / Medium / High

  • Evidence link: Where the ticket/docs will live

This isn't bureaucracy. This is the minimum evidence package that makes your defensibility real.
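
If the form feeds a spreadsheet or script, the same fields map onto a simple record. Here's a minimal sketch using the field names above; the example values are placeholders, not a mandated schema.

```python
# Minimal sketch of an MVAI record matching the fields above.
# All example values are placeholders.
from dataclasses import dataclass, asdict

@dataclass
class MVAIRecord:
    name: str               # System / use case name
    business_owner: str     # Name + org
    technical_owner: str    # Name + team
    built_or_bought: str    # vendor SaaS / API / internal model
    impacted: str           # internal / customer-facing / both
    what_it_does: str       # generate / classify / recommend / rank / decide / agent actions
    data_involved: str      # none / internal / PII / PHI / financial / regulated
    status: str             # PoC / pilot / production
    risk_tier: str          # Low / Medium / High
    evidence_link: str      # Where the ticket/docs will live

record = MVAIRecord(
    name="Support reply summarizer",
    business_owner="Jane Roe, Customer Ops",
    technical_owner="Platform team",
    built_or_bought="vendor SaaS",
    impacted="customer-facing",
    what_it_does="generate",
    data_involved="PII",
    status="pilot",
    risk_tier="Medium",
    evidence_link="https://tracker.example.com/MVAI-12",
)
print(asdict(record))
```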

Step 4: Create One Action for Every "Unknown" (2 Minutes)

For anything missing ownership or unclear data:

  1. Open a ticket: "MVAI Triage: assign owner + confirm data type + confirm decisioning/agent actions"

  2. Set due date: End of week

  3. Route to: The team benefiting from it (or the team that integrated it)

This turns "inventory" into "governance." Every unknown becomes a tracked work item with accountability attached.
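
To make that mechanical, the loop below drafts a triage ticket for every inventory row missing an owner or a data type. It's a sketch that assumes rows are plain dicts exported from your intake form and emits a generic payload; no specific ticketing API is implied.

```python
# Minimal sketch: draft a triage ticket for every record with an unknown
# owner or data type. The payload shape is generic; adapt it to your tracker.
from datetime import date, timedelta

def triage_tickets(records):
    tickets = []
    # Due "end of week": the upcoming Friday (or today, if it's Friday).
    due = date.today() + timedelta(days=(4 - date.today().weekday()) % 7)
    for r in records:
        missing = []
        if not r.get("business_owner"):
            missing.append("owner")
        if r.get("data_involved") in (None, "", "unknown"):
            missing.append("data type")
        if missing:
            tickets.append({
                "title": f"MVAI Triage: {r.get('name', 'unnamed system')}",
                "description": "Assign owner + confirm data type + confirm decisioning/agent actions",
                "missing": missing,
                "due": due.isoformat(),
                # "benefiting_team" is a hypothetical field; route to the
                # team that benefits from (or integrated) the system.
                "route_to": r.get("benefiting_team", "integrating team"),
            })
    return tickets
```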

The Defensibility Add-On (High Leverage)

Add one checkbox to the MVAI form:

If asked by Legal/Audit/Regulator, can we show: owner + purpose + data type + control/evaluation evidence? (Yes/No)

Anything marked "No" becomes the next governance backlog item. This single question transforms your inventory from a snapshot into a prioritization engine.
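
In script or spreadsheet terms, that checkbox is a one-line filter that produces the backlog. A minimal sketch, assuming each row carries a yes/no "defensible" field and a risk tier:

```python
# Minimal sketch: anything marked "No" on the defensibility checkbox
# becomes a governance backlog item, ordered by risk tier.
RISK_ORDER = {"High": 0, "Medium": 1, "Low": 2}

def governance_backlog(rows):
    gaps = [r for r in rows if str(r.get("defensible", "No")).lower() == "no"]
    return sorted(gaps, key=lambda r: RISK_ORDER.get(r.get("risk_tier", "Medium"), 1))
```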

For organizations working toward frameworks like the NIST AI RMF, this checkbox bridges the gap between "we have a framework" and "we have evidence."

Two-Tier Scaling (So You Don't Stall)

  • Tier 0 (today): Identify + owner + purpose + data type + status + risk tier

  • Tier 1 (next sprint): Model card, eval gates, monitoring, incident playbook

You can't govern what you can't see. MVAI makes it visible fast. Tier 0 is your triage. Tier 1 is your maturity path.

Monday Morning Moves: Your Action List

Copy. Paste. Execute.

  • Run the 4-signal sweep (SSO, procurement, cloud, repos)

  • Capture MVAI fields for each AI system/use case you find

  • Create a ticket for every unknown owner/data type

  • Add the defensibility checkbox and escalate the "No's"

That's it. Ten minutes. Real progress.

APS (Aoife's Post-Script)

One-breath summary: The MVAI is a 10-minute drill that surfaces shadow AI, assigns owners, captures defensible fields, and turns inventory into actionable governance: no committees required.

Engagement hook: What's lurking in your AI stack that nobody owns? Run the 4-signal sweep this week and find out. Drop your shadow AI horror stories in the AI Governance Pros Forum: Aoife has seen enough to know the pattern repeats until teams turn “unknowns” into tickets.

SEO receipt: This post covers AI governance, AI inventory, AI risk management, and defensibility: the four corners of knowing what you're actually running before someone else asks.

 
 
 
