Will AI Dull Our Consciousness?

There is a risk that AI dulls our consciousness, and thereby our development, by making life too convenient. We risk forgetting to challenge ourselves, causing our cognitive abilities to decay.

AI Offers Enormous Potential, ...

AI brings huge opportunities to increase our efficiency and productivity by taking over repetitive or uninspiring tasks in our daily and professional lives. This can free up time and energy for innovation, for example. As a result, we increasingly rely on AI.

However, that narrative is not necessarily so straightforward in practice. There’s a risk that AI makes life so comfortable that we overuse it, gradually losing our awareness and our capacity for critical thinking. The brain is like a muscle: it needs to be challenged to remain strong.

... But Comes with the Risk of Dulling Our Consciousness, ...

This phenomenon is deeply rooted in human history and has long been recognized. As early as antiquity, Socrates criticized the spread of writing in the dialogue Phaedrus, arguing that written language would weaken memory and inner wisdom. Yet it was not until the late 1980s that cognitive psychology began systematically investigating external memory and technological dependence, and in the 2000s this work gave rise to the concept of cognitive offloading.

In 2020, a study concluded that habitual use of GPS reduces our general awareness of where we are: our spatial memory and consciousness fade. The problem is exacerbated by the fact that we don’t notice this decline. On the contrary, we believe we remain aware.

... Which Is the Prerequisite for Critical Thinking, and Thus Development

The more AI boosts our efficiency and creativity, the more our brains and cognitive abilities risk deteriorating without our realizing it. The brain, like a muscle, weakens when it is not challenged. Structural and functional changes can even be observed in brain tissue and cells: the posterior hippocampus shrinks, similar to what is seen in patients with Alzheimer’s disease.

The more we use AI to handle the trivial and repetitive tasks of our lives and work, the greater the risk of cognitive offloading. We all too easily rely on AI uncritically, even though we know that answers from AI systems such as LLMs (large language models) are sometimes incorrect or misleading.

Language Models Pose a Particular Risk, ...

A BBC investigation from February this year showed, for instance, that half of the responses from ChatGPT, Gemini, and others contained “significant issues of some form.”

The problem often escalates when one engages in a longer dialogue thread with an LLM. A so-called model collapse can occur: the model’s basis for its responses becomes too thin, and the conversation narrows to the point where the model begins to hallucinate facts, which it then synthesizes into answers. LLMs are designed so that they (almost) never admit when they lack sufficient empirical grounding to answer accurately.
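To see why this matters, consider how an LLM produces text at all: it turns internal scores into a probability distribution over possible next tokens and samples from it. The sketch below is a deliberately simplified toy in Python, not the code of any real model; the vocabulary, logits, and probabilities are invented for illustration. The point is structural: sampling always yields an answer, whether the distribution is sharply peaked (the model “knows”) or nearly flat (it does not).

```python
import numpy as np

# Toy sketch of next-token sampling (illustrative only, not a real LLM):
# a model converts raw scores (logits) into probabilities via softmax
# and samples a token. The loop below always emits *some* token, even
# when the distribution is nearly flat, i.e. when the model has no
# clear basis for an answer; there is no "I don't know" branch.

def softmax(logits: np.ndarray) -> np.ndarray:
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    return exp / exp.sum()

rng = np.random.default_rng(0)
vocab = ["Paris", "Lyon", "Berlin", "Rome"]  # invented toy vocabulary

cases = {
    "confident": np.array([6.0, 1.0, 0.5, 0.2]),  # one clear winner
    "uncertain": np.array([1.1, 1.0, 0.9, 1.0]),  # almost flat: no real basis
}

for name, logits in cases.items():
    probs = softmax(logits)
    token = rng.choice(vocab, p=probs)
    print(f"{name}: sampled '{token}', probabilities {np.round(probs, 2)}")
```

In both cases the code confidently prints a city name; only the hidden probabilities differ. Real systems add many layers on top of this, but the basic mechanism is the same: uncertainty is converted into fluent output rather than into an admission of ignorance.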

Abroad, this has led to wrongful arrests and convictions in cases where police and local courts relied uncritically on facial recognition software to improve efficiency. Yet we are presumed innocent until proven guilty.

... Because They Are So Convenient to Use

One of AI’s strengths is that its output is well-formulated and hyper-personalized. Over the past 30 years, we’ve seen this develop through predictive algorithms in search engines and online shops, and AI will accelerate the trend even further. Today, this is illustrated by the concept of algorithmic complacency: we often accept the videos YouTube or other social media platforms suggest for us, rather than actively deciding what we want to watch. Our initiative for seeking knowledge or entertainment disappears, because passive consumption is more convenient.

Our use of AI continues to increase. A study reported in The Guardian this year, for example, showed that 92% of students have adopted AI, at such an explosive pace that examiners are advised to “stress-test” exam responses. Similarly, Satya Nadella recently stated that between 20% and 30% of Microsoft’s code is already written by AI, and Microsoft expects that number to reach 95% by 2030.

This Raises the Risk of Losing Our Grasp on Facts and Reality, at Least in Some Areas

These issues can become self-reinforcing. A study by Amazon Web Services this year estimated that up to 57% of all new information on the internet is now generated by AI itself. This poses the risk of a decay spiral: as the factual basis of the internet erodes, self-reinforcing, hallucinated syntheses can arise, while our capacity for critical thinking diminishes.
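The mechanics of such a spiral can be illustrated with a small statistical toy, sketched below under strong assumptions: a one-dimensional Gaussian stands in for “the internet’s knowledge”, refitting stands in for training on AI-generated content, and all the numbers are invented. Each generation fits its model to data sampled from the previous generation’s model, and the estimated spread tends to shrink a little at each step, so the “world” being described grows ever narrower. Research on model collapse describes essentially this mechanism at scale.

```python
import numpy as np

# Toy "decay spiral" (an invented illustration, not a real AI system):
# each generation fits a Gaussian to samples drawn from the previous
# generation's Gaussian, mimicking models trained on model-generated
# content. The fitted spread tends to shrink slightly at every refit,
# so over many generations the distribution narrows dramatically.

rng = np.random.default_rng(7)
mu, sigma = 0.0, 1.0   # generation 0: the original, human-made "knowledge"
n = 20                 # samples ("training data") per generation

for gen in range(1, 201):
    data = rng.normal(mu, sigma, size=n)  # sample from the current model
    mu, sigma = data.mean(), data.std()   # refit on the model's own output
    if gen % 40 == 0:
        print(f"generation {gen:3d}: sigma = {sigma:.3f}")
```

On a typical run, sigma falls from 1.0 to a small fraction of that: the variety in the data erodes, even though each individual refit looks harmless.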

Still, it’s important to remember that both the internet and AI are here to stay. Since the Enlightenment of the late 1700s, distributed knowledge has been a crucial driver of innovation and prosperity. But uncritical use of AI may lead to self-destructive feedback loops.

AI is not a knowledge base. It is a statistical model of knowledge bases.
