EPM Insider
Issue #003 March 2026

AI in Finance Isn't a Threat. It's a Mirror.

AI doesn't manufacture clarity. It reveals it—or the absence of it.

In every hallway conversation, I see a quiet anxiety running through finance teams right now. Teams have been reducing headcount since 2020, and with uncertainty brewing in the world, agentic AI has arrived at exactly that moment.

I want to offer a different way of looking at what is happening.

I don't think today's concern should be replacement. It should be data readiness.

The Narrative We're Being Sold

The dominant frame is fear: AI is coming, jobs are disappearing, and you scan the list of jobs AI might replace just to see whether your profession is on it. The urgency is understandable, but the vast majority of companies I have worked with are nowhere near having data good enough for AI. You can hold your fear for now.

Finance professionals who've spent years building expertise in planning, forecasting, and performance management will not be obsolete. They're needed in a different way than they're used to. Before, the job was building Excel models, matrices, and formulas. Now subject-matter experience matters more than the craft of building them. Delivering the best formula is worthless if you don't know why you're building it.

What AI Actually Reveals

One thing I've noticed over the past few months of using AI agents myself: AI doesn't manufacture clarity. It reveals it (or the absence of it). This applies to your data as well as to the people using it.

When AI tools are layered onto an organization with strong data foundations, clean taxonomy, unified definitions and consistent ownership, they act as a multiplier of what any team can already do. The analyst who spent three days reconciling a report now gets that time back. The CFO who needed a week to model a scenario gets answers in an afternoon.

I believe this is the vision (delusional as it may be) of the Silicon Valley CEOs.

The common reality is harder. Most EPM implementations I have seen have data silos: companies and departments that couldn't agree on definitions, systems that didn't integrate with each other, dimensions with three different names depending on who you asked.

The technology works. The foundation beneath it wasn't ready. And AI won't solve these problems overnight.

The Pain Points AI Exposes

I've spent a lot of time inside FP&A environments. And what I've seen, again and again, is that the excitement about AI (the new kid on the block) is real, but it won't solve your old problems.

The central planning cycle that runs on a web of systems no one fully understands. The consolidation that carries forward a whole history of past mistakes. The reporting package that only one person knows how to update. These aren't failures. They're the accumulated result of teams doing their best with the tools and timelines they had.

AI doesn't judge that history. But it does make it visible. Don't be ashamed.

This shame, as uncomfortable as it can feel, is actually an opportunity. Now there's a reason, and an increasing number of tools, to finally address the data foundation questions that always got deprioritized.

What AI Needs to Work

If you're going to make your data work for AI rather than against it, I have collected five action points for you:

1. Single Source of Truth for Master Data

One chart of accounts. One entity hierarchy. One product taxonomy. Not "mostly aligned" versions across ERP, planning, and CRM. One.

AI models trained on inconsistent definitions will automate the inconsistency. For example: if "EMEA" means different geographies in Salesforce vs. Oracle, the AI will produce outputs that match neither.

It is hard to harmonise years of historical data shaped by how the business evolved, but you have to compromise in order to move forward.
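The inconsistency above can be caught mechanically before any model sees the data. A minimal sketch in Python, assuming each system exposes its region-to-country mapping as a dictionary (the system names and country codes here are illustrative, not real mappings):

```python
# Hypothetical check that a region label covers the same countries in every
# source system. All data below is illustrative.

crm_regions = {"EMEA": {"DE", "FR", "UK", "ZA"}}
erp_regions = {"EMEA": {"DE", "FR", "UK"}}  # ZA booked elsewhere

def region_mismatches(a, b):
    """Return {region: (only_in_a, only_in_b)} for definitions that differ."""
    diffs = {}
    for region in a.keys() & b.keys():
        only_a, only_b = a[region] - b[region], b[region] - a[region]
        if only_a or only_b:
            diffs[region] = (sorted(only_a), sorted(only_b))
    return diffs

print(region_mismatches(crm_regions, erp_regions))
# {'EMEA': (['ZA'], [])}
```

A report like this turns "mostly aligned" into a concrete worklist: every mismatch is a definition someone has to own and resolve.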

2. Governed Hierarchies with Version Control

Your organizational structure will change. Products get added, entities get restructured, cost centers get merged. AI needs to understand not just what the hierarchy looks like today, but what it looked like last quarter when you're comparing results.

Version control isn't optional. Every hierarchy change needs a timestamp, an owner, and documentation. If you can't trace why a number changed between two reporting periods, AI can't either.
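The timestamp-owner-documentation rule can be as simple as an effective-dated change log. A minimal sketch, assuming a parent-child cost-center hierarchy (all node names, owners, and dates are illustrative):

```python
# Effective-dated hierarchy log: every change carries a date, an owner,
# and a reason, so any past reporting period can be reconstructed.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class HierarchyChange:
    node: str
    parent: str
    effective_from: date
    owner: str
    reason: str

log = [
    HierarchyChange("CC_4200", "Dept_Sales", date(2024, 1, 1), "jdoe", "initial load"),
    HierarchyChange("CC_4200", "Dept_Marketing", date(2025, 7, 1), "asmith", "cost center merge"),
]

def parent_as_of(log, node, as_of):
    """Parent of `node` on date `as_of`, using the latest effective change."""
    changes = [c for c in log if c.node == node and c.effective_from <= as_of]
    return max(changes, key=lambda c: c.effective_from).parent if changes else None

# Same cost center, two answers depending on the reporting period:
print(parent_as_of(log, "CC_4200", date(2025, 3, 31)))  # Dept_Sales
print(parent_as_of(log, "CC_4200", date(2025, 9, 30)))  # Dept_Marketing
```

That `as_of` lookup is exactly what an AI needs when it compares this quarter's results against last quarter's structure.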

3. Consistent Granularity Across Systems

Your reporting data needs to live at the same level of detail across platforms. If your actuals are at account level but planning is at category level, AI can't reconcile them without assumptions. And assumptions embedded in AI models become invisible.

Define your grain once: account, entity, product, time period. Then enforce it everywhere. The teams that skip this step spend months debugging AI outputs that are technically correct but operationally useless.
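Enforcing the grain can be a small validation gate at load time. A sketch, assuming the agreed grain is (account, entity, product, period); the column names and sample rows are illustrative:

```python
# Grain check: every row must populate all grain columns, and no two rows
# may share the same grain key. Data below is illustrative.

GRAIN = ("account", "entity", "product", "period")

def violates_grain(rows, grain=GRAIN):
    """Return (rows with missing grain columns, rows duplicating a grain key)."""
    missing = [r for r in rows if any(r.get(k) is None for k in grain)]
    seen, dupes = set(), []
    for r in rows:
        key = tuple(r.get(k) for k in grain)
        if key in seen:
            dupes.append(r)
        seen.add(key)
    return missing, dupes

rows = [
    {"account": "4000", "entity": "DE01", "product": "P1", "period": "2026-01", "amount": 100},
    {"account": "4000", "entity": "DE01", "product": "P1", "period": "2026-01", "amount": 80},   # duplicate key
    {"account": "4100", "entity": "DE01", "product": None, "period": "2026-01", "amount": 50},   # missing product
]
missing, dupes = violates_grain(rows)
print(len(missing), len(dupes))  # 1 1
```

Rejecting these rows at the boundary is cheaper than debugging an AI output that silently aggregated them.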

4. Clean Temporal Alignment

Time isn't as simple as it looks. Fiscal calendars don't match calendar years. If your actuals, forecasts, and pipeline data are on different time bases, the AI will average them, interpolate them, or pick one at random. None of those options end well.

AI needs to know which clock to use. Early on, my own AI kept hallucinating which day of the week it was. This was solved once I gave it a calendar table.

So build your time dimension table. Map every transaction, forecast, and plan to it. Make it queryable. This is the difference between AI that helps and AI that hallucinates.
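A time dimension table can be generated in a few lines. A minimal sketch, assuming a fiscal year that starts in February (the offset is illustrative; substitute your own fiscal calendar):

```python
# One row per day: calendar date, ISO week, weekday, fiscal year, fiscal period.
# FISCAL_YEAR_START_MONTH = 2 is an assumption for illustration.
from datetime import date, timedelta

FISCAL_YEAR_START_MONTH = 2  # assumed: FY starts 1 February

def time_dimension(start, end):
    rows, d = [], start
    while d <= end:
        fy = d.year if d.month >= FISCAL_YEAR_START_MONTH else d.year - 1
        fp = (d.month - FISCAL_YEAR_START_MONTH) % 12 + 1
        rows.append({
            "date": d.isoformat(),
            "iso_week": d.isocalendar().week,
            "weekday": d.strftime("%A"),
            "fiscal_year": fy,
            "fiscal_period": fp,
        })
        d += timedelta(days=1)
    return rows

dim = time_dimension(date(2026, 1, 30), date(2026, 2, 2))
print(dim[0]["fiscal_year"], dim[0]["fiscal_period"])    # 2025 12
print(dim[-1]["fiscal_year"], dim[-1]["fiscal_period"])  # 2026 1
```

Once every transaction, forecast, and plan joins to a table like this, "which clock" stops being a question the AI has to guess at.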

5. Metadata That Travels with the Data

Every number needs context. Who created it. When. Under what assumptions. What version of the model. What scenario.

AI doesn't infer intent or background information. If your forecast model produces three scenarios (base, upside, downside) and you load them into the same table without scenario tags, the AI will blend them. You'll get insights that are mathematically correct and strategically meaningless.

Tag everything. Source system, load timestamp, scenario, version, owner. Metadata isn't overhead. It's the only way to make AI outputs auditable.
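The scenario-blending failure above is easy to demonstrate. A sketch with illustrative numbers, showing what any aggregation (AI included) does when the scenario tag is missing versus present:

```python
# Three forecast scenarios for the same metric. All figures are illustrative.
rows = [
    {"metric": "revenue", "scenario": "base",     "amount": 100.0},
    {"metric": "revenue", "scenario": "upside",   "amount": 140.0},
    {"metric": "revenue", "scenario": "downside", "amount": 80.0},
]

# Without the scenario column, the three collapse into one blended number:
blended = sum(r["amount"] for r in rows) / len(rows)
print(round(blended, 2))  # 106.67 -- a figure no scenario ever produced

# With the tag, each question gets the right slice:
base_only = [r["amount"] for r in rows if r["scenario"] == "base"]
print(sum(base_only))  # 100.0 -- the base case, on purpose
```

The blended number is mathematically correct and strategically meaningless, which is exactly the failure mode the tags exist to prevent.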

The Talent Problem


The shortage of data-centric FP&A talent (1 for every 12 open roles) makes this even more urgent. The teams that invest in getting their data foundation right are the ones that will be able to do more with fewer people.

I don't see this AI wave as a cost-saving measure, but as an opportunity to do more with the same number of people. Every finance team is understaffed to some degree, and AI will not fix that by replacing headcount that carries years of business context. It fixes it by removing the manual grind that shouldn't exist in the first place. That's the leverage I see at the moment.

Where This Leaves Us

The mirror is only useful when you choose to look into it.

AI will show you where your data foundations are weak. It will expose the workarounds, the tribal knowledge trapped inside departments, the "we've always done it this way" processes that worked when the team was small but don't scale.

That's not a failure. That's information.

The teams that treat this moment as a data foundation project, not an AI project, are the ones that will come out ahead. Because once the foundation is solid, AI becomes a multiplier. Without it, AI is just expensive automation of broken processes.

The moment to build with intention is now. Before the urgency forces you to react.


EPM Insider is a weekly newsletter for finance practitioners navigating EPM, AI, and digital transformation. Subscribe at tellsys.com/newsletter

💌 Get the next issue in your inbox

Every Tuesday morning: EPM insights, consulting stories, and counter-intuitive takes on AI in finance. No fluff. Written by a practitioner, for practitioners.

📤 Worth sharing? Forward this to a colleague dealing with EPM or AI decisions.