AI is not an IT project, but a process of change

In this interview, Dirk Wolters of NeTec explains the pitfalls, the prerequisites, and why AI is not about tools but about structure and mindset.

Fifty-six per cent of managing directors in the German healthcare sector plan to invest in generative AI technologies within the next three years, despite the fact that many AI projects in hospitals currently fail. How can this be explained?

Dirk Wolters: Because the starting point is often the wrong one. Instead of beginning with a clearly defined objective, everything starts with a desire for innovation or simply with the purchase of software. That may sound modern, but it rarely leads to lasting success. What is needed is a realistic understanding of one’s own organisation: Which processes are documented, and how? What data is actually available, and in what condition? Is there a shared understanding of what exactly is meant to be achieved?

This clarity is often lacking. Projects are launched under buzzwords such as “We want to automate processes” or “We want to try out AI.” Such phrases may be easy to communicate to committees, but they do not form a solid foundation for a project that consumes resources and deeply affects established workflows.

Another issue is the lack of responsibility. Who provides the technical leadership for the project? Who is responsible for the data? Who verifies whether the results are even reliable? These questions often remain unanswered or are simply passed on to the IT department. But AI is not an IT issue. It directly impacts patient care, daily nursing routines, and medical responsibility – which is why it must be approached and implemented in an interdisciplinary manner. Without this integration, AI remains a theoretical exercise, with the risk of having no impact in day-to-day operations.

What are typical misconceptions?

Dirk Wolters: A widespread fallacy is the idea that AI can simply be “added on top.” But that’s exactly what doesn’t work. If AI is not embedded into existing processes, a disconnect arises: the results do not fit the logic of the workflow, are not understood, not used, or are even rejected. And that has nothing to do with fear of technology – it’s based on experience. People working in hospitals quickly notice when a tool offers no practical benefit and creates additional effort.

Another misconception is the belief that AI will automatically learn and improve on its own if you just “let it run.” In reality, it’s the opposite: machine learning requires curated data, clear structures, oversight, and, if necessary, correction. If these prerequisites are lacking, better outcomes do not materialise – rather, systematic errors develop, which are particularly critical in a medical context.

Another frequent misjudgement is the overly optimistic assessment of one’s own data. Many hospitals assume their local data is sufficient to train an AI model. But it isn’t. Typically, the data volume is too small and too specific to develop robust models. For that, broader data pools with shared standards and a coordinated data architecture are required. Data protection, interoperability and governance are not side notes here – they are essential foundations.

What does it mean for a hospital to truly be ready for AI?

Dirk Wolters: Above all, it means that the conditions are in place not just to introduce AI, but to use it meaningfully. That may sound simple, but in practice, it is a true maturity process.

From a technical perspective, it means the hospital has an IT infrastructure that standardises data, links systems, and makes information machine-readable. There are defined interfaces, clear data formats, and the ability to aggregate information from multiple systems.

Organisationally, clearly defined roles are needed: Who is responsible for data quality? Who assesses whether a model makes clinical sense? Who is accountable when decisions are based on AI recommendations? These points must not only be clarified but also supported by senior leadership.

Then there is the strategic dimension: it is not enough to implement AI “because everyone else is doing it.” A clear vision is needed: What problem are we aiming to solve? What benefits are realistically achievable? And how do we align that with our resources, culture, and processes? Only when these questions are answered can a project truly be integrated into everyday practice.

And how can processes be designed to be AI-compatible?

Dirk Wolters: The key lies in a structured review of your own workflows. Many hospitals have processes that may “work” in practice, but are not documented in a way that makes them compatible with AI. For example, there may be no consistent terminology, no accurate time stamps, and no continuous digital records. That may be manageable in daily operations, but it is a deal-breaker for AI.

What’s needed is clean process modelling: What happens when, by whom, for what purpose, and what data is generated in the process? This data then not only needs to be collected but also contextualised. That only works if the process is not viewed in isolation, but as part of a broader care pathway. At the same time, the burden on staff must not increase. An additional input screen will not be used if it brings no immediate benefit. That’s why intelligent data capture is essential – for example, by optimising existing documentation points so that the data can be directly used for analysis and automation. That not only helps the AI, but also improves the quality of care and eases the workload for staff.

Is now even the right time to get started with AI?

Dirk Wolters: Absolutely. Especially now. The technology is progressing – that is undisputed. But the structures needed to use it effectively do not appear overnight. A hospital that today has no documented processes, no data strategy, and no clear governance will still not be able to simply “switch on” AI in three years’ time.

The right approach is not to wait for the perfect system – but to start now with realistic projects. For example, you might begin with a structured process in the emergency department or with automated diagnosis capture. Such projects not only provide immediate value but also build experience, trust, and organisational maturity. AI will not revolutionise healthcare. But it can help make it more efficient and safer – provided we create the right conditions. And for that, there’s no need to wait. We must start – systematically and strategically.

Dirk Wolters
is owner, managing director, and head of the Consulting division at NeTec GmbH.