In July, it is 111 years since the renowned journal "Philosophical Magazine" printed Niels Bohr's first major public article. In it, he postulated, among other things, that light is emitted from atoms when an electron jumps from one orbit to another around the atomic nucleus, and not, as was otherwise "known," when an electron has completed a full orbit around the nucleus. Most were offended by this upstart's "pollution of the literature, revealing an ignorance of our knowledge of spectral light." Others dismissed it as trivial pedantry, for the light was there regardless.
Is AI merely a product of probability?
But it is precisely this understanding of the fundamental mechanism of quantum leaps that has made possible lasers, scanners, and, with them, the digital revolution.
Quantum mechanics became Niels Bohr's central research field for the rest of his life, and his principle of complementarity became its cornerstone: the acceptance that an electron sometimes behaves as a wave and at other times as a particle. Among other things, this means that one must settle for the probability of an outcome; a deterministic law cannot always be established.
At the time, it was dizzying to postulate that reality is something as absurd as "merely" a product of probabilities, with no laws. Einstein and Schrödinger, in particular, could not reconcile themselves to it, despite a convincing mathematical proof. Einstein refused to believe that the facts of the world could be so contrary to common sense. There had to be something deeper.
Bohr's and Einstein's long discussions ended with Einstein reluctantly accepting the theory's validity. But he never made his peace with its logic and repeatedly told Bohr the now famous words: "God does not play dice with the universe." Bohr's reply was: "It is not for us to tell God how the world should be arranged."
Is it unreal because it is complex?
Bohr's ideas still stand today. Their lack of simplicity and natural logic can still make most people dizzy and reluctant. But it is, as mentioned, these principles that form the basis for the diodes and transistors that make up microchips. Microchips are the muscles behind AI's algorithms, which mimic, and will soon surpass, human abilities of recognition and abstraction. What makes us dizzy about quantum mechanics will also make us dizzy about AI. For AI works morphogenetically, and its algorithms therefore vary with, among other things, the degree of self-learning. Just like humans.
But something is not necessarily wrong just because it is complex and hard to grasp. When linear laws give way, our willingness to accept is put to the test. Our brains reward simplicity. But does something become wrong simply because its answers come with probabilities? Is it wrong simply because it lacks linearity? Does the present depend on how we measure it, that is, how we interpret it? Is the future solely a product of risk, and thus of probabilities, or can we influence it?
AI is a "black box"
When the CEO of OpenAI (ChatGPT), Sam Altman, testified before the U.S. Congress at the end of May, he repeatedly said that NO ONE understands how AI actually works. He used the term "black box." One can speak about the quality of the data fed to an AI, but how the "black box" processes it in any given case is something one can only guess at. The output of the "black box" comes with probabilities. Just like quantum mechanics.
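A minimal sketch, in Python, of what "output associated with probabilities" can mean in practice: the words and numbers below are invented for illustration and are not taken from any real model, but they show how the same prompt can yield different answers because each answer is drawn from a probability distribution rather than fixed by a law.

```python
# Illustrative toy example only: not OpenAI's model, just a made-up
# probability distribution over possible next words for one prompt.
import random

# Hypothetical probabilities a language model might assign to the next
# word after the prompt "The sky is" (numbers invented for illustration).
next_word_probs = {
    "blue": 0.62,
    "clear": 0.21,
    "falling": 0.09,
    "green": 0.08,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Each call samples one continuation; repeated runs show the spread.
for _ in range(5):
    print(random.choices(words, weights=weights, k=1)[0])
```

Run several times, the snippet usually prints "blue" but occasionally something else, which is the sense in which the answer is a probability, not a certainty.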
It is as dangerous to rely on AI as it is to reject it simply because it lacks lawfulness. The only thing that seems certain is that AI is here to stay, and that it will affect our perception of reality.