Central Brain of Humanity
23 Jun 2025
There seems to be a lot of misunderstanding regarding GenAI. Overall, the benefits of GenAI are vastly overrated, while its limitations are not clearly understood. Let me digress a bit.
Back in the 1980s, Czechoslovak television broadcast an excellent sci-fi series, "Návštěvníci" (Visitors). The series starts in the year 2484, in a utopia supported by the Central Brain of Humanity (CML - Centrální Mozek Lidstva). The Central Brain of Humanity is a supercomputer capable of superhuman intelligence. Its insights have brought peace, prosperity and safety to the humans of 25th-century Earth.
It seems to me that the general public regards GenAI as some kind of Central Brain of Humanity. Quite surprisingly, even many people with technological backgrounds seem to think about GenAI in a similar way. However, current GenAI is light years away from human intelligence, let alone superhuman intelligence. GenAI does not really think. Certainly, it can talk, paint, compose music, and do a lot of other impressive things. Yet it cannot really think.
The Large Language Models (LLMs) at the core of mainstream GenAI systems are just sophisticated language processors. LLMs do not understand what an "orange" is. They do not understand that the word can refer to both a fruit and a color. They understand nothing at all. All they do is relate the word "orange" to other words, mostly words they have seen during training. Certainly, if you ask an LLM to explain what an "orange" is, it will (correctly) describe it as a fruit, a color, and a tree. However, this answer is not based on understanding. It is based on the content of the dictionaries and encyclopedias that the LLM processed during training. It does not describe "orange" as a fruit, a color, and a tree because it understands these concepts. It describes it this way because it has seen these words used together during training.
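This word-association mechanism can be illustrated with a deliberately crude sketch: a toy co-occurrence counter over a made-up four-sentence "corpus". Real LLMs use learned embeddings and attention rather than raw counts, but the underlying principle is the same: "orange" ends up linked to "fruit", "color", and "tree" purely because those words appear near it in the training text, with no concept of any of them.

```python
from collections import Counter

# A tiny hypothetical "training corpus", standing in for the dictionaries
# and encyclopedias an LLM ingests. (Illustration only, not a real LLM.)
corpus = [
    "orange is a citrus fruit",
    "orange is a color between red and yellow",
    "the orange tree bears fruit",
    "peel the orange and eat the fruit",
]

# Ignore grammatical glue words so the associations stand out.
stopwords = {"is", "a", "the", "and", "between"}

# Count which content words co-occur with "orange" in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    words = sentence.split()
    if "orange" in words:
        cooccurrence.update(
            w for w in words if w != "orange" and w not in stopwords
        )

# "What is an orange?" answered purely by statistical association:
# the words most often seen next to it.
print(cooccurrence.most_common(3))
```

The counter duly reports "fruit" as the strongest associate, with "color" and "tree" also present, exactly the answer an encyclopedia-trained model would give, and at no point does the program know what any of these words mean.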
LLMs repeat what they have seen. AI critics like to joke that an LLM is just a glorified autocorrect. That statement is not entirely wrong. LLMs are excellent at talking, which makes a strong impression. Unfortunately, they are much worse at doing: providing insights, information, or knowledge. Would you rather rely on a grumpy old expert with deep understanding of the subject matter, or on a gentle smooth-talking performer who has no idea what he is talking about? I guess the answer is very clear: the general public is going to choose the dim-witted smooth operator every time. This is the danger of GenAI.
Current AI is no Central Brain of Humanity. It is a rather limited, biased, hallucinating language processor with very little transparency and a significant environmental impact. However, LLMs can still be useful when used correctly. The problem is that it is very difficult to use them correctly. The key is understanding the limitations of the technology and resisting its tendency to lead you astray from robust knowledge and facts. However, this is much harder than it seems. Many people are going to learn this the hard way. Even more people are not going to learn it at all, to the detriment of us all.