What 12 Years at BMW Taught Me About Process Automation

8 min read

The Moment It Clicked

There was a meeting — one of hundreds, in a conference room that looked like every other conference room — where a project team was presenting an automation rollout. The slides showed throughput numbers, error rates, and a timeline that ended with “full automation” in green. Everyone nodded.

Three months later, the team that was supposed to be freed up by this automation was working overtime. Not because the technology failed. It worked exactly as designed. The problem was that it was designed for a process that did not exist.

The documented process had twelve steps. The real process — the one that actual humans followed with actual data from actual suppliers — had closer to forty, depending on who you asked and which edge case they had last encountered. The automation handled the twelve steps flawlessly. The other twenty-eight still needed people.

That was not the first time I saw this pattern. But it was the time I started thinking seriously about why it kept happening.

What Enterprise IT at Scale Actually Looks Like

Twelve years in IT at BMW Group was not glamorous. I say that not to be modest but because there is a widespread misconception about what enterprise IT involves — especially in automotive, where people assume the interesting part is the car and everything else is back office.

The reality: enterprise IT at scale is about reliability. It is about ensuring that thousands of processes run correctly every day across hundreds of systems, dozens of departments, and multiple countries. When it works, nobody notices. When it breaks, everyone does.

Most of my time was spent not on building new things but on understanding why existing things worked the way they did. Why does this report take three days to generate? Because it pulls data from six systems, two of which update on different schedules, and one of which requires a manual export because the API was never built. Why does this approval workflow route through four people? Because ten years ago, one of those people had a different role, and nobody updated the process when they changed positions.

This is the texture of enterprise IT. Layers of decisions made for good reasons that no longer apply, held together by people who learned the workarounds and passed them on.

The Product Owner Perspective

For the latter part of my time at BMW, I worked as a Product Owner — the role that sits between business needs and technical implementation. This turned out to be the most valuable position for understanding why automation projects succeed or fail.

As a Product Owner, my job was to translate. Business stakeholders would describe what they needed: “We need this process to be faster.” Technical teams would describe what they could build: “We can automate these steps.” My job was to figure out whether those two statements actually pointed at the same thing.

They often did not.

The business stakeholders described the process as it was supposed to work. The technical team built automation for the process as described. And the people who actually did the work had a different process entirely — one shaped by years of dealing with the gap between how things were designed and how things actually happened.

The single most valuable question I learned to ask was: “What happens when something goes wrong?”

Not “what should happen” — that was in the documentation. But what actually happens. When a supplier sends the wrong quantity. When the system is down on a Friday afternoon. When a new regulation takes effect and nobody has updated the handbook yet.

The answers to these questions revealed the real process. And the real process was always more complex, more adaptive, and more dependent on human judgment than anything in the documentation.

Three Patterns I Saw Repeatedly

Over twelve years, certain failure modes appeared again and again. They were not unique to BMW — I have since seen them everywhere.

Pattern 1: The Clean Data Illusion

Every automation project starts with a proof of concept. The proof of concept uses test data. Test data is clean.

Real data is not clean. Supplier names are misspelled. Dates arrive in different formats. Fields that should be mandatory are empty because someone found a workaround five years ago. Reference numbers that should be unique are not, because two systems generate them independently.

The automation that works perfectly on test data breaks within days on real data. Not because the logic is wrong, but because the assumptions about the data were wrong.
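
To make the failure mode concrete, here is a minimal validation sketch in Python. Everything in it is hypothetical (the record, the field names, the date formats); the point is only that a real pipeline has to treat every assumption about incoming data as a check, not a given.

```python
from datetime import datetime

# Hypothetical incoming record; every field and value here is invented.
record = {
    "supplier": "  Muller GmbH ",   # stray whitespace, possibly misspelled
    "invoice_date": "03/04/2024",   # 3 April or 4 March? Depends on the source
    "amount": "",                   # a "mandatory" field, left empty anyway
    "reference": "INV-1042",        # unique per system, not across systems
}

# Formats actually seen in the wild, which is why ambiguity is possible at all.
DATE_FORMATS = ["%Y-%m-%d", "%d.%m.%Y", "%d/%m/%Y", "%m/%d/%Y"]

def parse_date(raw: str):
    """Try all known formats and refuse to guess when more than one fits."""
    matches = []
    for fmt in DATE_FORMATS:
        try:
            matches.append(datetime.strptime(raw, fmt))
        except ValueError:
            pass
    return matches[0] if len(matches) == 1 else None

issues = []
if not record["amount"].strip():
    issues.append("amount missing")
if parse_date(record["invoice_date"]) is None:
    issues.append("invoice_date ambiguous or unparseable")

# The honest outcome for messy data is a review queue, not a crash or a guess.
print(issues or "clean enough to automate")
```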

In our experience, data quality issues account for more failed automation projects than any technical limitation. The models are capable. The data is the problem.

Pattern 2: The Forty-Seven Undocumented Exceptions

This is the most persistent pattern. A process looks simple from the outside. Three steps: receive, process, route. But the person who has been doing this for eight years knows that “process” actually means:

  • Check if this supplier is on the preferred list (if yes, different handling)
  • Look at the amount (if above threshold, additional approval)
  • Check the currency (if not EUR, look up the internal exchange rate, which is in a spreadsheet, not the system)
  • Verify against the contract (but only if the contract was signed after the new policy took effect, otherwise use the old terms)
  • Check if this cost center has remaining budget (but the budget system is only updated monthly, so you need to check the informal tracking spreadsheet too)
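
Written out as code, even this partial list stops looking like one step. The sketch below is hypothetical from end to end (the thresholds, spreadsheets, and field names are all invented), but it shows how quickly the innocuous word "process" turns into branching logic:

```python
# Hypothetical encoding of the sub-steps above; every name and number invented.
APPROVAL_THRESHOLD = 10_000       # above this amount, extra approval is needed
NEW_POLICY_DATE = "2021-01-01"    # contracts signed earlier keep the old terms

def process(case, preferred_suppliers, fx_rates, informal_budget_tracker):
    route = []

    if case["supplier"] in preferred_suppliers:
        route.append("preferred-supplier handling")

    if case["amount"] > APPROVAL_THRESHOLD:
        route.append("additional approval")

    if case["currency"] != "EUR":
        # The official system has no rate; it lives in someone's spreadsheet.
        case["amount_eur"] = case["amount"] * fx_rates[case["currency"]]

    # ISO date strings compare correctly as plain strings.
    terms = "new" if case["contract_signed"] >= NEW_POLICY_DATE else "old"
    route.append(f"verify against contract ({terms} terms)")

    # The budget system only updates monthly, so the informal tracker wins.
    if informal_budget_tracker[case["cost_center"]] < case["amount"]:
        route.append("escalate: budget possibly exhausted")

    return route
```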

Each of these “sub-steps” has its own exceptions. The person handling it navigates them unconsciously — they have become second nature. But they are invisible to anyone who was not trained by someone who learned them the same way.

When you automate the documented three-step process, you get the documented results: it works for cases that follow the three steps. Everything else — which is a surprising percentage — falls through.

Pattern 3: The Silent Reversion

This is the saddest pattern. An automation goes live. There is a celebration. Metrics show adoption. And then, quietly, over the following months, the team starts working around it.

They discover that the automation handles the standard cases but creates more work for the non-standard ones — because now they have to extract the case from the automation system, process it manually, and then re-enter the result. This takes longer than handling it manually from the start.

So they start pre-screening. Before feeding a case into the automation, they glance at it to determine whether it is “automation-friendly.” The complex ones they handle manually without telling the system. The adoption metrics still look fine because the easy cases still flow through. But the humans are doing the same amount of work — just with an extra triage step.

Nobody reports this because nobody wants to be the person who says the project failed. The automation becomes a fig leaf covering the same manual process.

Why I Left to Build Something Different

After twelve years, I understood the problem clearly. Automation projects fail not because of bad technology but because of insufficient process understanding. The gap between the documented process and the actual process is where value is lost — and that gap is invisible to anyone who does not look for it.

I also understood that the new generation of AI models — the large language models that can read, interpret, and reason about unstructured information — could close that gap in a way that previous automation tools could not.

Not through more rules. Not through more sophisticated scripting. But through genuine understanding: the ability to read a document the way a person reads it, to recognise that two differently worded items refer to the same thing, to determine what is probably right and what needs a human check.
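
To make that less abstract, here is a minimal sketch of the "two differently worded items" case, using the OpenAI Python SDK. The model name, prompt, and one-word answer protocol are assumptions chosen for illustration; any comparable LLM API would serve the same purpose.

```python
# A minimal sketch, assuming the OpenAI Python SDK; model and prompt are
# illustrative choices, not a description of any production system.
from openai import OpenAI

client = OpenAI()

def same_item(description_a: str, description_b: str) -> str:
    """Ask the model whether two differently worded line items match,
    and insist on 'unsure' when a human should decide."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; swap in whatever you use
        messages=[
            {"role": "system",
             "content": "You compare purchasing line items. Answer with "
                        "exactly one word: 'match', 'different', or 'unsure'."},
            {"role": "user",
             "content": f"Item A: {description_a}\nItem B: {description_b}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# e.g. same_item("Schraube M8x40 verzinkt", "M8 x 40 mm zinc-plated bolt")
# The crucial behaviour is the third answer: 'unsure' routes to a person.
```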

I founded Jevolution because I wanted to build automation that starts with the process, not the technology. Automation that handles the 30% of exceptions that consume 80% of skilled time. Automation where the first step is always: understand what actually happens, not what is supposed to happen.

What “Process First, Technology Second” Means in Practice

When we start an engagement, we do not begin with a demo or an architecture diagram. We begin with observation.

We spend time understanding how work actually flows through the business. We talk to the people who do the work — not just the managers who oversee it. We map the exceptions, the workarounds, and the informal rules.

Only then do we design the agent. And we design it for the real process — including the messy parts. The agent knows that supplier X always sends invoices in a non-standard format. It knows that amounts under a certain threshold can be fast-tracked. It knows which exceptions it can handle and which ones need a person.
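
Much of that knowledge can live as explicit, reviewable configuration rather than being buried in prompts or code. A hypothetical fragment (every supplier, number, and rule here is invented) might look like this:

```python
# Hypothetical agent configuration; all names, numbers, and rules are invented.
AGENT_RULES = {
    "supplier_overrides": {
        "Acme Components": {"invoice_format": "scanned PDF, no line items"},
    },
    "fast_track": {
        "max_amount_eur": 500,          # below this, skip the full review path
        "requires_known_supplier": True,
    },
    "escalate_to_human": [
        "currency missing or unrecognised",
        "contract terms predate current policy",
        "low confidence on any extracted field",
    ],
}
```

The exact shape matters less than the property it buys you: a domain expert can read it and say "that rule is wrong".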

This is slower than selling a platform. It does not scale like a SaaS product. But it works — in the sense that the automation actually handles the cases that were consuming your team’s time, not just the cases that were already easy.

The Takeaway

Twelve years in enterprise IT taught me that the most sophisticated technology in the world cannot automate a process that nobody has properly understood. The models are powerful enough. The tools are mature enough. The bottleneck is always, always the gap between how we think work happens and how it actually happens.

That gap is where Jevolution works. Not because it is technically interesting (though it is), but because it is where the value is — for the business, and for the people whose expertise was being wasted on work that a well-designed agent could handle.

Every engagement starts with the same question: “What actually happens when something goes wrong?” The answers are always more interesting than the documentation suggests.

Next step

Let’s talk about your process.

If you have a workflow that consumes more time than it should, it is worth a conversation. We analyse your process and show you where an AI agent would have the biggest impact.