AI Readiness Assessment
Evaluates whether your organization has the strategy, data, people, systems, and governance needed to use AI safely and effectively to create real business value.
Executive Self-Assessment
Purpose
This assessment evaluates whether your organization is structurally ready to use AI in a way that creates durable value, rather than one that produces prototypes and demos while compounding technical and operational risk.
The objective is to determine whether AI will strengthen decision-making and execution, or simply accelerate existing weaknesses in strategy, data, systems, and governance.
What this is
A practical readiness check across five areas: Strategy, Data, People, Technology, and Risk.
It highlights:
- Structural blockers to effective AI use
- Conditions under which AI can be applied safely
- Where restraint is more valuable than acceleration
What this is not
- Not a vendor or model comparison
- Not a tooling recommendation
- Not a roadmap or transformation plan
- Not a promise of automation or headcount reduction
How to Use This Assessment
- Complete the checklist (20–30 minutes)
- Score each section independently
- Identify structural blockers
- Address readiness gaps before building AI features
Do not average scores. Risk concentrates where readiness is weakest.
1. Strategy (Intent and Use Cases)
Check all that apply
☐ AI initiatives are driven by available tools rather than business problems
☐ Use cases are framed as “add AI” instead of solving a specific constraint
☐ No clear business owner is accountable for AI outcomes
☐ Success metrics are technical (accuracy, latency) rather than operational
☐ AI is positioned as inevitable rather than optional
Healthy signals
- Clear problem statements before solution selection
- Named business owner for each AI use case
- Explicit success criteria tied to business outcomes
- Willingness to not use AI when it adds no leverage
Red flag
If AI is treated as a feature rather than a capability, readiness is low.
2. Data (Quality, Access, Structure)
Check all that apply
☐ Data is fragmented across systems without a clear source of truth
☐ Data quality issues are known but unresolved or undocumented
☐ Ownership of critical datasets is unclear
☐ Sensitive or regulated data boundaries are not explicitly defined
☐ Data used for AI differs from data trusted for decisions
Healthy signals
- Known data sources, limits, and dependencies
- Clear ownership and access controls
- Data quality is “good enough” and well understood
- Explicit rules for what data AI can and cannot use
Red flag
If you don’t trust your data, AI will amplify the problem.
3. People (Skills, Trust, Adoption)
Check all that apply
☐ AI expertise is concentrated in one person or team
☐ Teams are either fearful of AI or unrealistically optimistic
☐ No shared guidance on when AI should or should not be used
☐ AI outputs are accepted without human judgment
☐ Accountability for AI-assisted decisions is unclear
Healthy signals
- Shared baseline understanding of AI capabilities and limits
- Clear expectations for human-in-the-loop decision-making
- Teams feel supported rather than displaced
- AI is positioned as decision support, not decision authority
Red flag
If teams either blindly trust or completely reject AI, adoption will fail.
4. Technology (Systems and Integration)
Check all that apply
☐ Core systems are brittle, tightly coupled, or poorly documented
☐ There are no clear integration points for AI capabilities
☐ Manual work dominates critical operational paths
☐ AI behavior is not observable or testable
☐ There is no reliable rollback or containment mechanism
Healthy signals
- Stable systems with clear interfaces
- Identified opportunities for augmentation, not disruption
- Ability to test, monitor, and revert AI-assisted workflows
- Observability of AI outputs and downstream effects
Red flag
If your systems are fragile, AI will increase instability, not efficiency.
5. Risk, Compliance, and Governance
Check all that apply
☐ No formal policy governing AI usage or data handling
☐ Regulatory, legal, or contractual exposure is unclear
☐ AI-assisted decisions are not auditable or explainable
☐ Incident response plans do not account for AI failure modes
☐ Responsibility for AI misuse or error is undefined
Healthy signals
- Clear policies for AI use and data boundaries
- Awareness of regulatory and contractual constraints
- Ability to explain and audit AI-assisted decisions
- Defined response paths for AI-related incidents
Red flag
If you cannot explain or defend AI outputs, you are not ready to deploy them.
AI Readiness Scoring
Score each area from 0 to 2
- 0 = Not ready
- 1 = Partially ready
- 2 = Ready
Record your scores
- Strategy:
- Data:
- People:
- Technology:
- Risk & Governance:
Interpretation
0–4 → Not ready
Foundations are insufficient. AI will increase risk before delivering value.
5–7 → Conditionally ready
Limited pilots are possible, but only with strict scope, oversight, and controls.
8–10 → Ready for production use
AI can be applied selectively to improve outcomes without destabilizing the system.
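The scoring rubric above can be sketched as a small helper. This is a hypothetical illustration, not part of the assessment itself; the function name `interpret_readiness` and the output shape are assumptions. It reports the total-score tier and also surfaces the lowest-scoring areas, since risk concentrates where readiness is weakest and an average would hide that.

```python
# Hypothetical helper illustrating the scoring rubric above.
# Each area is scored 0 (not ready), 1 (partially ready), 2 (ready).

def interpret_readiness(scores: dict[str, int]) -> dict:
    """Return the total score, readiness tier, and weakest areas."""
    if not scores or any(s not in (0, 1, 2) for s in scores.values()):
        raise ValueError("each area must be scored 0, 1, or 2")

    total = sum(scores.values())
    if total <= 4:
        tier = "Not ready"
    elif total <= 7:
        tier = "Conditionally ready"
    else:
        tier = "Ready for production use"

    # Risk concentrates where readiness is weakest, so surface the
    # lowest-scoring areas rather than averaging them away.
    low = min(scores.values())
    weakest = [area for area, s in scores.items() if s == low]
    return {"total": total, "tier": tier, "weakest": weakest}


result = interpret_readiness({
    "Strategy": 2,
    "Data": 1,
    "People": 1,
    "Technology": 2,
    "Risk & Governance": 0,
})
print(result)
# total 6 -> "Conditionally ready"; weakest area: Risk & Governance
```

Note that a total of 6 reads as "conditionally ready," yet the zero in Risk & Governance is the real story; any pilot should be gated on fixing the weakest area first.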
What to Fix First (80/20 Guidance)
Start with changes that:
- Clarify intent before introducing tools
- Establish trust boundaries around data and decisions
- Define human accountability for AI-assisted outcomes
- Reduce risk before expanding access or scope
Common high-leverage actions:
- Defining one real use case with a named business owner
- Writing a one-page AI usage and data policy
- Making human review explicit for AI outputs
- Starting with decision support, not automation
- Limiting scope before expanding access
- Avoiding scale-up until failure modes are understood
Executive Summary (Optional)
Our current AI readiness is strongest in [X] and weakest in [Y]. Without addressing [Y], AI adoption increases risk without delivering meaningful value. Addressing these gaps enables safe, focused experimentation and sets the foundation for sustainable AI use.
Why this matters
AI rarely introduces new risks; it magnifies existing ones. Strong systems gain leverage. Weak systems fail faster.
This assessment helps ensure AI improves outcomes instead of accelerating mistakes that are already present.
Next step
Use this as a baseline. Re-run after pilots, policy changes, or major system updates.
AI readiness is not a milestone. It is an ongoing property of the system.