June 2, 2025 | 8 min. read
Why So Many GenAI Projects Fail and How This Can Be Prevented

According to the analyst firm Gartner, more than half of all initiatives to introduce generative AI are on the brink of failure. Why? And is the hype built up by the hyperscalers, with billions of euros in investment, at risk of fizzling out? The answer, once again, is “yes and no”. But first things first.

The Market Development Around Generative AI

One thing is clear: generative AI in the form of large language models has a disruptive character that goes much deeper than most users of ChatGPT and other internet services realize. The technology offers the potential to automate entire processes and increase the productivity of companies. Companies using AI are predicted to increase their profitability by 38% by 2025. So far, however, the investments are being made primarily by the large cloud providers and their capital providers. This is partly due to the technology itself: generative AI is extremely resource-intensive. OpenAI raised USD 40 billion in its most recent financing round in spring 2025. Microsoft has invested more than USD 13 billion in AI over the last two years and is planning to spend around USD 80 billion on AI data centers. The trend is not limited to the USA: in Germany, too, the largest investments of recent years have gone to Aleph Alpha, a company known for having developed its own LLM.

Can So Much Money Be Wrong?

The answer to this is: “yes”!

Generative AI would not be the first capital-market-driven technology bubble to burst. To a large extent, generative AI is a market strategy of the large technology companies, built on the flowery promise that the technology can do just about anything with ease. All companies supposedly have to do is move their data to the cloud, because that is the only place where this is possible. The providers earn money in several ways. Migrating existing systems to the cloud is complex and expensive, and operating applications in the cloud costs more on average than running them in-house – not to mention the risks of dependency. These costs then explode further once the costs of ongoing AI usage are added on top. Because generative AI is marketed as the universal “always-on-hand” tool that can (or is supposed to) be used for everything, the cost spiral is unpredictable. Many companies that jumped on this bandwagon quickly, without examining alternatives, now find themselves at precisely this point.

Does This Mean That Investments in Projects With Generative AI Are Inevitably Doomed to Failure?

The answer is just as clear: “no”!

To see why, it is first important to understand the causes of the poor profitability of AI initiatives. These are:

    1. Exploding costs
    2. Bad data
    3. Lack of measurability

These problems are not independent of each other; in fact, they reinforce each other. But first things first:

Exploding Costs of Generative AI

I have already briefly outlined the problem above. The core issue is the erroneous – though, from the providers’ point of view, understandable – hypothesis that generative AI is the universal tool for all conceivable problems in dealing with data and knowledge. But does that really make sense? Which craftsman gets by with just one tool on a construction site? The situation is similar in transport and mobility: there are demonstrable reasons why different types of engines are used in different areas of transportation – diesel engines for commercial vehicles and heavy loads, petrol engines for small cars, electric drives for passenger rail. Over long periods of time, each of these has emerged as the best solution for its domain. In the field of AI, however, an immeasurable amount of money is being spent on pushing a single technology onto the market.

Poor Data Quality

Almost everyone who works with data in a company knows the problem: poor data quality. Whether in the CRM database or in the master data for products and goods – there are always issues. Entries are missing from records, data is out of date or – the most difficult case – multiple entries contain contradictory information (“Which one is correct?”). When processing such data, even a supposedly infinitely intelligent AI becomes confused and will never give correct, reliable answers. Anyone expecting generative AI to simply solve all these problems will be disappointed.

AI is certainly capable of solving data quality problems – but it is not generative AI. Other algorithms from the field of machine learning work far more efficiently here. Before data is handed over to generative AI at all, it should be prepared into a form that is suitable for the task. Perfect data quality is never required, either; the data only needs to be available in a way that can be interpreted well and uniformly. This preparation is far more efficient and reliable with analytical AI, which – like industrious ants – performs its work in a disciplined manner and ensures order. Incidentally, it does so with only around 3% of the energy input of generative AI, and in a way that allows the quality and progress of results to be measured systematically.
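As a minimal illustration of such rule-based preparation – the record fields, names and threshold below are invented for the sketch, not taken from any specific product – near-duplicate entries with contradictory information can be flagged deterministically before anything is handed to a language model:

```python
import difflib

# Hypothetical CRM records with a near-duplicate pair whose
# fields contradict each other (names and cities are assumptions).
records = [
    {"id": 1, "name": "Acme GmbH", "city": "Munich"},
    {"id": 2, "name": "ACME GmbH", "city": "Berlin"},   # contradictory city
    {"id": 3, "name": "Beta AG",   "city": "Hamburg"},
]

def normalize(name: str) -> str:
    """Canonical form for comparison: lower-case, collapsed whitespace."""
    return " ".join(name.lower().split())

def find_conflicts(records, threshold=0.9):
    """Flag record pairs whose names are near-identical but whose
    other fields disagree – candidates for resolution *before*
    the data is ever fed to generative AI."""
    conflicts = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            similarity = difflib.SequenceMatcher(
                None, normalize(a["name"]), normalize(b["name"])
            ).ratio()
            if similarity >= threshold and a["city"] != b["city"]:
                conflicts.append((a["id"], b["id"], similarity))
    return conflicts

print(find_conflicts(records))  # → [(1, 2, 1.0)]
```

The point of the sketch is that such checks are cheap, deterministic and – unlike a generative model guessing its way through contradictory entries – produce results whose quality can be measured.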

This brings us to the third point:

Lack of Measurability

The use of a technology always serves a purpose and should be tied to economic efficiency, which can be measured via corresponding key performance indicators (KPIs). Yet even this – measuring the impact of AI on the KPIs – is often omitted. More importantly, measuring process KPIs is of little use if there are no systematic levers in the background that can be varied, so that the effect of each change on the quality of the results can be understood. In professional circles, this is called “Trusted AI” or “Trustworthy AI”. Only a complete end-to-end view of the data, from the raw data to the output of the AI, enables this traceability – and thus an understanding of what the AI really does, or why it unfortunately does not do what is expected of it.
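One simple way to make this measurable – sketched here with invented questions and answers, not real KPIs from any system – is to keep a fixed reference set and recompute the answer quality after every adjustment to the data preparation or model settings:

```python
# Hypothetical evaluation set: question, reference answer, AI output.
evaluation_set = [
    {"question": "Delivery time for item 4711?", "expected": "3 days",    "ai_answer": "3 days"},
    {"question": "Warranty period?",             "expected": "24 months", "ai_answer": "12 months"},
    {"question": "Return address?",              "expected": "Munich",    "ai_answer": "Munich"},
]

def answer_accuracy(evaluation_set):
    """Share of AI answers that exactly match the reference answer.
    Re-running this after each change to a lever (prompt, data
    preparation, model settings) shows whether the change helped."""
    correct = sum(1 for e in evaluation_set if e["ai_answer"] == e["expected"])
    return correct / len(evaluation_set)

print(f"Answer accuracy: {answer_accuracy(evaluation_set):.0%}")  # prints "Answer accuracy: 67%"
```

Exact-match accuracy is deliberately crude; the design point is that any fixed, repeatable metric turns “the AI feels better now” into a number that can be tracked end to end.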

Incidentally, all of these points also contribute to the phenomenon of “Help, my AI is hallucinating”. We will look at this aspect in more detail in a later article.

Conclusion: Generative AI Creates Huge Benefits — But Not on Its Own

Generative AI is not a universal tool or a problem solver for all cases. Its disruptive character lies in the fact that it is equipped with comprehensive knowledge and can handle any form of data spontaneously. It understands natural language in every conceivable way and thus offers a completely new user experience: instead of pressing buttons or searching, we simply communicate with computers. But to believe that you can create more and more intelligence with more and more money and more and more energy (Donald Trump wants to double the energy consumption of the USA just for the needs of generative AI!) is naive, if not stupid. Human intelligence requires very little energy and has evolved over billions of years.

The moresophy AI platform was not created in a very short space of time with a lot of money in order to occupy a market. It has matured over more than 20 years through experience in solving hundreds of use cases. It offers the full range of tools, procedures and measurement instruments to analyze specific problems in processes – as well as in the data that drives them – and to resolve them using AI-supported data processing. All of this is measurable and always pragmatically geared towards the task at hand.

MORESOPHY offers the only platform that combines genAI, Knowledge Graphs and Responsible AI in one integrated solution – for truly intelligent applications without hallucinations on your own data. Instead of a pure technology kit, it provides a coordinated set of processes and applications that enable specific challenges to be solved efficiently using the appropriate means.

Portrait of Prof. Dr. Heiko Beier

CEO of MORESOPHY

Heiko Beier is a professor of artificial intelligence and digital communication and a tech entrepreneur specializing in AI-supported data analytics. As an expert in data-driven transformation, he supports companies in automating knowledge-intensive processes. His expertise includes the application of explainable and trustworthy AI along the entire value chain – from customer communication to core processes.
