How important is hardware for AI development?
The development of AI software is accelerating because algorithms learn from data ever faster.
Therefore, the development towards AGI (Artificial General Intelligence) is increasingly limited by the fact that hardware computational power "only" follows Moore's "law" of doubling roughly every 18 months. That growth, in turn, is approaching a physical limit. The most advanced chips today, produced with ASML's lithography machines, are built at the 3-nanometer process node, with 2 nanometers expected around 2025. Node names have become marketing labels, but the smallest features do correspond to structures only some 10-15 atoms wide. The practical limit is around 1 nanometer, partly because below that scale quantum effects such as tunneling make the electron's position in silicon too uncertain.
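The scales above can be checked with a back-of-the-envelope calculation. This is a rough sketch, not precise process data: the silicon atom spacing used below (~0.235 nm, the Si-Si bond length) is an assumed round value, and the 18-month doubling period is the popular reading of Moore's law.

```python
SI_ATOM_SPACING_NM = 0.235  # approximate Si-Si bond length in nm (assumption)

def atoms_across(feature_nm: float) -> float:
    """Rough number of silicon atoms spanning a feature of the given width."""
    return feature_nm / SI_ATOM_SPACING_NM

def moores_law_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Growth factor in transistor density after `years`, doubling every 18 months."""
    return 2 ** (years / doubling_period_years)

print(f"Atoms across a 3 nm feature: ~{atoms_across(3.0):.0f}")   # ~13 atoms
print(f"Atoms across a 1 nm feature: ~{atoms_across(1.0):.0f}")   # ~4 atoms
print(f"Density growth over 10 years: ~{moores_law_factor(10):.0f}x")
```

A 3 nm feature spans on the order of a dozen silicon atoms, which is why the "10-15 atoms" figure above is plausible, and why a 1 nm feature, at only a handful of atoms, sits at the edge of what silicon can support.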
- Moore's law may be extended, e.g. by replacing silicon with graphene. Graphene generates far less heat, which could make it possible to shift clock speeds from GHz to THz.
- However, the elephant in the room keeps growing: the world of quantum mechanics.
Earlier this year, this reportedly sent OpenAI's Sam Altman on a tour of several sovereign wealth funds (SWFs), sounding out seed funding for the development of quantum computers. He turned to SWFs because of the enormous investment required.
But what needs to be developed to create quantum computers?
The standing joke about graphene, and especially about quantum computers, is that everything is possible, but only in the lab. Real-world conditions, such as temperature, electromagnetic radiation from devices like mobile phones, and the Earth's small vibrations, have so far blocked commercial solutions. Additionally, a quantum computer's data (its quantum state) exists only fleetingly: interaction with the environment destroys it through decoherence, and measuring it collapses the superposition. It is therefore a fundamental problem to extract and store results before the state decays. This places significant demands on both hardware and software for manipulating quantum bits (qubits) and performing quantum gates.
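What "qubits" and "quantum gates" mean can be sketched in a few lines of ordinary code. This is a minimal mathematical simulation, not a real quantum device: a qubit is modeled as a 2-component complex vector, and a gate as a unitary matrix acting on it.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # the state |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # NOT (bit-flip) gate

state = H @ ket0             # equal superposition of |0> and |1>
probs = np.abs(state) ** 2   # measurement probabilities (Born rule)
print(probs)                 # ~[0.5 0.5]: both outcomes equally likely
```

On real hardware, the superposition held in `state` decays within microseconds to milliseconds due to decoherence, which is exactly why results must be read out, or protected by error correction, before they vanish.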
Will this bring us closer to solving the big problems?
Solving the above problems could also bring us closer to understanding one of physics' great mysteries: the fine-structure constant. It is a dimensionless constant that expresses the coupling strength of charged particles (such as electrons and protons) to the electromagnetic field. It is built from other fundamental constants: the elementary charge, the speed of light, Planck's constant, and the vacuum permittivity. Analogous dimensionless couplings characterize the strong and weak nuclear forces (and, far more feebly, gravity), which is why the constant matters for any unified picture of nature's forces. It also links quantum mechanics with particle physics: it sets the scale of quantum fluctuations and interactions, and therefore appears in formulas for energy levels, scattering cross-sections, and decay rates in quantum systems.
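The relationship to the other fundamental constants can be made concrete: the fine-structure constant is defined as alpha = e² / (4·π·ε₀·ħ·c). The sketch below computes it from the CODATA/SI values of those constants (the numbers are standard reference values, hard-coded here for self-containment).

```python
import math

e    = 1.602176634e-19   # elementary charge, C (exact in the 2019 SI)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 299792458.0       # speed of light, m/s (exact)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# alpha = e^2 / (4 * pi * eps0 * hbar * c): dimensionless
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha ≈ {alpha:.9f}, 1/alpha ≈ {1/alpha:.3f}")  # 1/alpha ≈ 137.036
```

Because all the units cancel, the result is a pure number, roughly 1/137, and no accepted theory explains why it takes that particular value.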
The fine-structure constant has been measured to approximately 1/137.036, so its inverse is close to 137, which is a prime number and even a so-called Stern prime, meeting several additional mathematical criteria. A fraction close to a "fine" prime thus describes the electromagnetic interactions that pose some of the greatest challenges for the development of quantum computers. But we understand the constant only as a measured number, not why it takes that value. Therefore, it remains one of physics' great mysteries.
However, if we understand it, we come closer to a unified understanding of the world of physics, a TOE (Theory Of Everything). We might even be able to reconcile Bohr and Einstein’s agreement to disagree before the 100th anniversary of the Solvay Conference in 1927.
But likely not for that reason alone
However, this will probably not settle the final question of whether the world is fundamentally deterministic. Determinism implies, among other things, that a computer calculating the future would calculate the same future every time. If free will is merely an illusion, our perception of human identity would need to be redefined.
When Alan Turing defined his AI test in 1950, he began with the words: "I propose to consider the question, 'Can machines think?'" This question later formed the basis for the Dartmouth Conference in 1956, in whose proposal the term artificial intelligence was used for the first time.