If AI can learn to know us so well that it hyper-personalizes its communication, can it then also predict us, and thereby manipulate us?
This is the first post in a trilogy about how AI can influence our perception of reality. It explores whether, if AI gets to know us deeply enough to hyper-personalize its communication, it can also predict us and thus manipulate us. The next post will focus on the fact that AI often learns from data it has generated itself. The final post addresses the fundamental dilemma of what is real at all, for example, whether humans have free will or whether reality is fundamentally deterministic.
The Use of LLMs Is Rising Rapidly ...
Since ChatGPT was launched in late 2022, the use of large language models (LLMs) has grown exponentially. ChatGPT, for instance, is the fastest-growing app in history, reaching 100 million users in just two months. By mid-February 2025, it had surpassed 400 million weekly active users.
... But Being Convenient and Convincing Doesn't Make Them Truthful
The success of LLMs rests largely on convenience: they provide one well-structured answer instead of requiring users to sift through 100 links from Google. The answers are also exceptionally articulate and coherent, so users tend to attribute a high degree of truthfulness to them.
Beyond mastering the "rules of language," these responses are hyper-personalized—tailored to individual preferences and behavior patterns to a degree beyond any previous technology. This raises concerns about how easily LLMs and AI more broadly can be used as central tools for malicious purposes.
Recent History Shows How AI Can Be Used for Political Manipulation …
Back in 2018, researchers from Oxford, Cambridge, the Future of Humanity Institute, and OpenAI outlined a number of scenarios. They focused on how AI could be misused for malicious purposes such as hacking or broader cyber-offense, including politically motivated disinformation.
At that time, the Cambridge Analytica scandal (2016–2018) was still fresh. There, AI models using psychometric data had already made it possible to hyper-personalize political messaging (e.g., using OCEAN personality profiles). It would be naive to think that such use of AI only arises in the U.S. or only by one political party. It was simply there that the first structured application was exposed.
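To make the mechanism concrete, here is a minimal, hypothetical Python sketch of trait-based message targeting: given Big Five (OCEAN) scores for a recipient, the sender picks the framing that matches the dominant trait. The message variants, trait names, and scores below are invented purely for illustration; they do not reflect any real campaign tool or Cambridge Analytica's actual models.

```python
# Toy sketch of OCEAN-based message targeting (all data hypothetical).
from typing import Dict

# Hypothetical message framings keyed by the recipient's dominant trait.
MESSAGE_VARIANTS: Dict[str, str] = {
    "openness": "Imagine the new possibilities this policy unlocks.",
    "conscientiousness": "Here is a clear, step-by-step plan you can rely on.",
    "extraversion": "Join thousands of your neighbors already backing this.",
    "agreeableness": "This is about taking care of the people around you.",
    "neuroticism": "Without action, the risks to your family will only grow.",
}

def pick_variant(ocean_scores: Dict[str, float]) -> str:
    """Return the message framing that matches the recipient's strongest trait."""
    dominant_trait = max(ocean_scores, key=ocean_scores.get)
    return MESSAGE_VARIANTS[dominant_trait]

# Example: a profile scoring highest on neuroticism receives the fear-framed variant.
profile = {"openness": 0.4, "conscientiousness": 0.5, "extraversion": 0.3,
           "agreeableness": 0.6, "neuroticism": 0.8}
print(pick_variant(profile))
```

The point of the sketch is simply that once a psychometric profile exists, choosing which emotional framing each individual sees becomes a trivial lookup, which is exactly what makes the approach scale.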
… And the Potential Grows As LLMs Become More Hyper-Personalized
Since 2018, AI’s ability to hyper-personalize has continued to evolve. In July this year, researchers from the Institute for Human-Centered AI published a study in Nature describing a cognitive foundation model called Centaur. Centaur was trained on Psych-101, a large dataset containing data from over 60,000 people making millions of decisions across 160 experiments.
The researchers found that the model outperformed all previous models at predicting human behavior—even in scenarios it had never seen before. This suggests that the more access we give AI to our thoughts and decisions, the better it will become at predicting our everyday choices.
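Stripped of the specifics of Centaur, the underlying logic can be shown with a deliberately simple, synthetic sketch: log enough of a person's past decisions, fit a model to them, and the model starts to anticipate the next decision, including in situations the person has not faced before. Everything in the example below (the features, the simulated decision-maker, the classifier) is invented for illustration; Centaur itself is a fine-tuned language model, not a logistic regression.

```python
# Toy illustration of prediction-from-behavior on synthetic data (not Centaur).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic history: each row is one past decision by the same person.
# Features: [expected value of the risky option, running fraction of risky choices].
# Label: 1 if the person chose the risky option.
n = 500
risky_ev = rng.normal(0.0, 1.0, n)
risk_habit = rng.uniform(0.0, 1.0, n)
X = np.column_stack([risky_ev, risk_habit])
# Simulated person: drawn toward risk when the upside is high and the habit is strong.
y = (risky_ev + 2.0 * risk_habit + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Predict the next decision in a situation this person has never seen.
next_situation = np.array([[0.3, 0.9]])  # modest upside, strong risk habit
print(model.predict_proba(next_situation)[0, 1])  # P(chooses the risky option)
```

The unsettling part is not the toy model but the trend it stands in for: the richer the record of our choices, the less novel our "new" situations look to a predictor.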
On the One Hand, This Opens New Possibilities for General Psychological Insight …
Previously, psychological models were either explanatory (offering theoretical understanding) or predictive (offering machine-learning accuracy). Centaur combines both approaches by identifying decision strategies and showing where classical theories fall short. The researchers believe Centaur can be used in clinical contexts to understand decision behavior in conditions such as depression or anxiety.
They urge broader societal engagement on how this technology is used, especially in education (risk of indoctrination), health (risk of "infodemics"), political communication (risk that demagogues become despots), and social media (undermining the democratic foundation of facts and their dissemination).
… But the Risks Are Increasing at the Same Time
In parallel with the Centaur development, other researchers studied the r/ChangeMyView community between November 2024 and March 2025 to see whether AI could alter people’s views and attitudes. The results showed that the AI displayed a markedly superior persuasive ability, and thus a clear potential for manipulation in online debates. This opens the door to the spread of disinformation and election interference.
The researchers recommended that platforms implement mechanisms for detecting AI-generated content, requirements for transparency, and methods for verification and labeling of AI-generated comments.
There’s No Easy Way to Reduce the Risk Elements
Hyper-personalization is convenient for users, so there is a fundamental market pull to continue this development. At the same time, the technology offers enormous therapeutic potential in psychology, psychiatry, education, and more, which only increases users’ desire to engage with LLMs.
The downside is that the risks are rising sharply as well: developing LLMs is extremely capital-intensive, so the field is concentrated in just two countries, the United States and China. One insists on freedom of speech regardless of truth; the other insists on censorship.
That leaves it up to each individual to filter the input we receive from LLMs.