A Critical Look at AI in Marketing and Generative AI
Of Monsters and Ants – Why Both Are Needed in AI-Supported Marketing and Customer Experience (Part 1 of 3)
Artificial intelligence (AI) has long since become an integral part of our everyday lives and is growing ever more important – including in marketing and the digital customer experience (CX). Modern generative AI tools can be used with just a few clicks, whether for text creation, data analysis or image generation. However, as AI becomes more widespread, a crucial question arises for companies: how can AI be used both sensibly and responsibly?
I addressed this question in my presentation “Of monsters and ants – why AI-supported marketing & CX needs both”, held on March 11, 2025. In response to many requests, I have prepared the central topics and statements of the presentation for you in a three-part blog series – well-founded, understandable and practical.
You can find the full presentation online, where you can also discover more exciting content about digital transformation, AI and customer experience. Feel free to drop by!
The Democratization of AI – Who Gets to Play?
AI is no longer a niche topic. Once used exclusively in research institutions or by tech giants, generative AI systems and AI technologies are now also available to small and medium-sized companies and the general public. This trend towards democratization means that more and more people can use powerful AI – without any in-depth technical knowledge.
The field of generative AI in particular shows how much our use of technology has changed. AI tools such as ChatGPT, Google Gemini and Adobe Firefly make it possible to generate content – be it text, images or sound – in a matter of seconds. According to a Statista survey, around 48% of Germans surveyed had used ChatGPT in 2024, compared to 19% in 2022. Usage of these technologies has therefore more than doubled within a short space of time. (Source: Statista)

This development brings with it many new opportunities, but also obligations and challenges. The more intensively generative AI is used in marketing or other areas, such as customer service or content management, the greater the responsibility. It is not enough to simply use AI tools – a conscious and strategic examination of their risks, possibilities, quality standards and limitations is required.
When AI Gets Out of Control: Risks & Opportunities
The parcel service provider DPD offers a striking example of how quickly AI systems can take on a life of their own. The company had deployed an AI-based chatbot in its customer service department to process inquiries. But instead of providing helpful answers, the bot suddenly began insulting the company itself in the chat.
DPD reacted quickly and deactivated the system immediately. But the damage was done: the negative reactions on social media were not long in coming. A storm of criticism erupted, the media picked up the story and trust in the brand was visibly damaged. (Source)


DPD's chatbot insults its own company, forcing DPD to take the bot offline. (Image source)
Such incidents illustrate how important a well-thought-out approach is when using AI – in this case, a customer service chatbot as part of AI marketing. In addition to short-term reputational damage, there can also be long-term consequences: from loss of trust to lost sales and even legal problems. Companies need to be aware that the uncontrolled use of AI can have serious consequences.
Hidden Errors – When Generative AI Distorts Facts
The risks in dealing with AI are not always as obvious as in the case of DPD. Errors often remain undetected for a long time and cause insidious damage.
The generative AI tool ChatGPT provides an illustrative example: if you ask it why the German graphic designer Otl Aicher studied sculpture, you will receive – depending on the wording – either a correct or an incorrect answer. Simply adding the word “none” suddenly leads to a significantly different statement.
What becomes clear here: generative AI does not primarily optimize for factual accuracy, but for plausibility and user satisfaction. Its output often appears credible – even when it is objectively wrong, because the algorithms are trained to seem as plausible and helpful as possible.
The generative AI tool ChatGPT from OpenAI suddenly responds completely differently when the word “none” is added – and the answer is no longer correct.
If misinformation like this is spread continuously in areas such as AI-supported marketing, significant problems can arise in the long term. Such inconsistencies disrupt the customer experience and can quickly unsettle users. It is particularly problematic that this type of error is hardly noticeable at first glance. Companies often only notice it when complaints start to pile up or trust has already been compromised.
It is precisely these subtle inconsistencies that harbor an enormous risk: they are difficult to detect, but have a major long-term impact on the external perception of a company.
Deceptively Real – Why AI Content Does Not Equal Good Content
Many companies use AI tools to create content quickly and cost-effectively. However, without a clear content strategy and editorial quality check, there is a risk of a flood of superficial or incorrect AI content.
AI-based content often looks impressive – but this does not automatically mean that it is correct or of high quality. Especially in areas where users have little prior knowledge, AI can be very convincing but misleading.
This effect is particularly evident in content marketing. Many companies use AI to produce large amounts of content in a short space of time. However, quality, relevance and fact-checking are often neglected in the process. The result: a growing flood of AI-generated content with no concrete strategy behind it – superficial or simply factually wrong.
Journalist Joseph Cox takes a critical look at this trend in his article “Google News Is Boosting Garbage AI-Generated Articles”. In it, he shows how Google seemingly feeds AI-generated content into its news platform uncritically – often without checking whether it originates from humans or machines. Frequently, stolen content is automatically rewritten and republished with the help of AI. The result is a Google News feed full of AI-generated “bullshit content”, which leads to an unsatisfying customer experience for many users.
Another source, NewsGuard, found that there are over 1,200 websites on the web distributing unreliable content created with AI tools in numerous languages – a worrying trend with far-reaching consequences for information quality and media literacy.
Generative AI – A Powerful Tool or an Unpredictable Monster?
All of these examples lead to one key insight: generative AI is an extremely powerful tool – but it can quickly work against its own goals and intentions if it is not implemented thoughtfully. It has the potential to drive efficiency and innovation, but without clear control and strategy, it can also cause massive damage.

If you compare generative AI to a monster, you hit the nail on the head: it is powerful, tireless and capable of achieving great things – but if you unleash it without knowing the rules, it becomes a risk.
The solution lies in understanding this technology not only technically, but also culturally and strategically. It is not enough to view AI as a universal solution – it must be integrated into the company’s DNA, continuously reviewed and optimized for the customer experience. In particular, AI in customer communication and marketing can cause long-term damage if used incorrectly.
Outlook: Taming AI – How Companies Can Use AI Responsibly
In the next part of this blog series, we will show how companies can “tame” generative AI: with clear processes, targeted use cases and a critical look at opportunities and risks.
In the third and final part, we then go one step further: we present concrete application examples for successfully implementing a reliable and controlled AI strategy – specifically using the example of AI-supported marketing.
Stay tuned – parts 2 and 3 will follow shortly.
Contact us today to discuss your business case and work together to develop a customized AI solution that will drive your business forward!

CEO of MORESOPHY
Heiko Beier is a professor of media communication and an entrepreneur specializing in data analytics and artificial intelligence. As an expert in cognitive business transformation, he supports companies in various industries in the design and implementation of digital business models based on smart data technologies.