[By Prasad Prabhakaran]
Why your AI org might be set up to fail — and how to fix it before it’s too late
Let me start with a truth bomb:
AI org design isn’t about buying a model or hiring a few data scientists.
It’s about architecting a way of working — across people, platforms, and policies.
And no, this isn’t an HR exercise.
It’s a systems design exercise with cultural implications.
Real talk from financial services frontlines
I was in a meeting at a tier-one bank.
The CRO asked:
“Can we use GenAI to summarise credit memos and draft sighting papers?”
The Head of Engineering said:
“Technically, yes.”
The Ops Director asked:
“Who’ll review the summaries? Who signs them off? Where’s the audit trail?”
The Head of Risk replied:
“We can’t use it if we don’t have model governance.”
Everyone was right.
But no one was aligned.
That’s the problem.
Why AI transformation often breaks down
Banks and insurers love the PoC treadmill.
- Build a chatbot for ops.
- Run a GenAI pilot for underwriters.
- Create an AI copilot for client onboarding.
Then… silence.
Why?
Because they’ve been “doing AI” without learning to work with it.
AI org design is about tension resolution
In financial services, three friction points show up fast and loud:
Product ↔ Engineering
Friction:
“AI could automate our KYC checks!”
vs.
“That needs real-time OCR, entity resolution, and scoring models tuned for false negatives.”
Outcome:
If viability and feasibility aren’t aligned, you’ll get compliance risk or delivery failure.
Tech ↔ Ops
Friction:
Model works well in test.
But no one owns it in production.
Audit logs? None. Retraining triggers? Undefined.
Example:
A retail bank deployed a GenAI assistant for customer queries — then paused it after it “hallucinated” responses on complaints.
Lesson:
If you don’t operationalize AI, it backfires fast — especially when regulators are watching.
Talent ↔ Business
Friction:
The risk team asks for interpretable AI.
The ML team says: “That’ll slow performance.”
The CIO hires more GenAI engineers — but forgets to train existing staff.
Result:
Burnout, bottlenecks, and broken trust.
Without AI literacy, even great models fail
AI isn’t just a tech upgrade.
It’s a thinking upgrade.
But here’s the problem:
- Execs think AI is a cost-saving lever.
- Developers think it’s another tool in the stack.
- Ops teams fear it’ll increase errors or audit risks.
- Business users don’t know how or when to use it.
This literacy gap breaks transformation before it even begins.
What AI literacy actually means in FS
Let’s demystify it. AI literacy doesn’t mean teaching everyone to code.
It means making people:
- Aware of where and how AI is used
- Critical of AI outputs (not blindly trusting)
- Accountable for outcomes influenced by AI
In financial services, that means:
- Relationship managers trusting but validating GenAI-generated summaries.
- Compliance teams understanding how AI decisions are traced and explained.
- Credit teams reviewing AI scoring with contextual judgement — not over-reliance.
Maturity levels (Gartner-inspired, FS edition)
| Level | Description | FS example |
| --- | --- | --- |
| 0 – None | No AI awareness | “I thought AI was a chatbot.” |
| 1 – Basic | Understands AI outputs | “This tool gives me a risk score.” |
| 2 – Intermediate | Applies AI in context | “I validate AI summaries before sending to clients.” |
| 3 – Strong | Leads or governs AI use | “We adjust our lending models weekly using monitored drift data.” |
Why this matters now
FS leaders are facing massive change:
- EU AI Act compliance deadlines
- Cost pressures driving AI automation
- Hybrid workforce adoption of AI tools (Copilot, Claude, and the like)
- Increased expectations for explainable AI from the FCA, PRA, and ECB
The shift isn’t just about models.
It’s about trustworthy transformation at scale.
A 4-phase plan to build AI literacy & culture in financial services
Phase 1: Awareness & mapping
- Run org-wide pulse surveys on AI understanding
- Identify AI touchpoints: Who uses what, where, and how?
- Map existing gaps in knowledge, trust, and decision ownership
Tools: GenAI observability dashboards, anonymous feedback, a use-case inventory (a minimal sketch follows below)
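To make the inventory concrete, here’s a minimal sketch in Python. It’s illustrative, not a prescribed tool — the `AIUseCase` fields and the `ownership_gaps` helper are assumptions about what you’d track — but the point is to surface exactly the gaps from the meeting above: no owner, no reviewer, no audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One row of the inventory. Field names are illustrative, not prescriptive."""
    name: str
    business_owner: str | None    # who is accountable for outcomes
    model_type: str               # e.g. "GenAI summarisation", "credit scoring"
    users: list[str] = field(default_factory=list)  # teams that touch it
    human_review: bool = False    # is there a human-in-the-loop?
    audit_trail: bool = False     # are inputs and outputs logged?

def ownership_gaps(inventory: list[AIUseCase]) -> list[str]:
    """Flag use cases missing an owner, a reviewer, or an audit trail."""
    gaps = []
    for uc in inventory:
        missing = [label for ok, label in [
            (uc.business_owner is not None, "owner"),
            (uc.human_review, "human review"),
            (uc.audit_trail, "audit trail"),
        ] if not ok]
        if missing:
            gaps.append(f"{uc.name}: missing {', '.join(missing)}")
    return gaps

# The credit-memo summariser from the meeting above, as it actually stood
inventory = [AIUseCase(name="Credit memo summariser",
                       business_owner=None,
                       model_type="GenAI summarisation",
                       users=["credit", "risk"])]
print(ownership_gaps(inventory))
# -> ['Credit memo summariser: missing owner, human review, audit trail']
```

Even a spreadsheet version of this beats nothing — what matters is that every AI touchpoint has those three boxes ticked or flagged.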
Phase 2: Role-based AI enablement
Train by function and by depth:
| Role | Literacy focus |
| --- | --- |
| Execs | Strategy, value, risk, governance |
| Risk & compliance | Explainability, audit trails, fairness, legal exposure |
| Tech | Model lifecycle, scaling, tooling |
| Ops | Workflow integration, flagging failures, real-world usage |
| Business | Interpreting output, escalation paths, trust boundaries |
Don’t boil the ocean. Focus on active AI users and frontline pilots first.
Phase 3: Embed governance + feedback
- Introduce “AI usage guidelines” and lightweight policy playbooks
- Mandate human-in-the-loop for high-risk processes
- Log usage, flag hallucinations, create escalation routes (see the sketch below)
- Let people challenge AI — without fear
Make responsible AI everyone’s job, not just the AI team’s.
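What does “lightweight” look like in code? Here’s a minimal sketch, assuming a Python service wraps your GenAI calls. The process names, the `generate_draft` stand-in, and the reviewer callback are all hypothetical — the shape is what matters: every call is logged, and high-risk processes can’t ship a draft without human sign-off.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

# Hypothetical: processes your policy playbook marks as high-risk
HIGH_RISK_PROCESSES = {"complaint_response", "credit_decision"}

def generate_draft(prompt: str) -> str:
    """Stand-in for your actual GenAI call (hypothetical)."""
    return f"Draft response to: {prompt}"

def handle_request(process: str, prompt: str, reviewer_approves) -> str | None:
    """Log every AI interaction; gate high-risk ones behind a human reviewer."""
    draft = generate_draft(prompt)
    record = {"ts": time.time(), "process": process,
              "prompt": prompt, "draft": draft}
    if process in HIGH_RISK_PROCESSES:
        # Human-in-the-loop: a rejected draft never reaches the customer
        record["human_approved"] = reviewer_approves(draft)
        audit_log.info(json.dumps(record))
        return draft if record["human_approved"] else None
    audit_log.info(json.dumps(record))
    return draft

# Usage: the ops reviewer rejects a hallucinated complaint response
print(handle_request("complaint_response", "Why was my claim denied?",
                     reviewer_approves=lambda draft: False))  # -> None
```

That JSON log line is your audit trail — the thing the Ops Director asked for in the meeting at the top of this piece.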
Phase 4: Normalize AI in the culture
- Run monthly “AI Show & Tell” with real users
- Share stories of success, failure, learnings
- Reward curiosity: micro-learning, internal AI champions
- Use AI to improve work, not replace it
Example: At one FS client, a monthly “AI Wins & WTFs” internal session helped demystify tools and boosted trust across underwriting and ops.
From use cases to usefulness
You’re not just launching AI pilots.
You’re re-architecting how your people:
- Make decisions
- Engage with technology
- Navigate risk
- Create value
And here’s the truth:
AI maturity isn’t about the number of models.
It’s about the confidence and clarity with which your people use them.
Let’s hear it — where is your team on the AI literacy spectrum?
Have you had your own “11 use cases, zero value” moment?
Let’s open the dialogue.