
Artificial Intelligence? I think not!

“The machine demands that Man assume its image; but man, created to the image and likeness of God, cannot become such an image, for to do so would be equivalent to his extermination.”

(Nicolai Berdyaev, “Man and Machine,” 1934)

These days, the first thing people discuss when the question of technology comes up is AI. Equally predictable is that conversations about AI often focus on the “rise of the machines,” that is, on how computers might become sentient, or at least possess an intelligence that will outthink, outlearn, and thus ultimately outlast humanity.

Some computer scientists deny the very possibility of so-called Artificial General Intelligence (AGI). They argue that only Artificial Narrow Intelligence (ANI) is achievable: ANI focuses on accomplishing specific, well-defined tasks set by a human programmer and on executing them within changing environments, and it thus rejects any claim to actual independent or human-like intelligence. Self-driving cars, for example, rely on ANI.

Yet as AI researcher and historian Nils J. Nilsson makes clear, the real ‘prize’ for AI researchers is to develop “artifacts that can do most of the things that humans can do—specifically those things thought to require ‘intelligence.’” Thus the real impetus of AI research remains AGI, or what some now call “Human Level Artificial Intelligence” (HLAI).

To achieve this goal, AI researchers attempt to replicate the human brain on digital platforms so that computers will mimic its functions. With increasing computational power, it will then be possible first to build machines that have the object recognition, language capabilities, manual dexterity, and social understanding of small children, and then to achieve adult levels of intelligence through machine learning. Should such intelligence ever be achieved, many fear the nightmare scenario of 2001: A Space Odyssey’s self-preserving computer HAL 9000, who eliminates human beings because of their inefficiency. What if these putative superintelligent machines disdain humans for their vastly inferior intellect and enslave or even eliminate them? This vision has been put forward by the likes of Max Tegmark (not to mention the posthuman sensationalist Yuval Harari), and it has enlivened the mushrooming discipline of machine ethics, which is dedicated to exploring how humans will deal with sentient machines, how we will integrate them into the human economy, and so on. Machine ethics researchers ask questions like: “Will HLAI machines have rights, own property, and thus acquire legal powers? Will they have emotions, create art, or write literature and thus need copyrights?”

The central problem with such discussions about AI, however, is the simple fact that Artificial Intelligence does not exist. There is an essential misunderstanding of human intelligence that undergirds all of these concerns and questions—a misunderstanding not of degree but of kind, for no machine is or ever will be “intelligent.”

Before the advent of modernity, human intelligence and understanding (deriving from the Latin intellectus, itself rooted in the ancient Greek concepts of nous and logos) indicated the human mind’s participation in an invisible spiritual order that permeated reality. Tellingly, the Greek term logos denotes law, an ordering principle, and also language or discourse. Originally, human intelligence did not imply mere logic or mathematical calculus, but the kind of wisdom that comes only from the experiential knowledge of embodied spirits. Human beings, as premodern philosophers insisted, are ensouled living organisms, animals that also possess the distinguishing gift of logos. Logos, translated as ratio or reason, is the capacity for objectifying, self-reflexive thought.

Moreover, as rooted in a universal logos, human intelligence was intrinsically connected to language. In this premodern world, symbols are not arbitrary cyphers assigned to things, as AI researchers have always assumed; rather, language derives from and remains inseparably linked to the human experience of a meaningful world. As the German philosopher Hans-Georg Gadamer explains, “we are always already at home in language, just as much as we are in the world.” We live, think, move, and have our being in language. As the very matrix that renders the world intelligible to us, language is not merely an instrument by which a detached mind masters the world. Instead, we only think and speak on the basis of the linguistic traditions that make human experience intelligible. And let’s not forget that human experience is embodied.

No wonder, then, that human understanding, to use the English equivalent of the Latinate ‘intellect,’ has a far deeper meaning than what computer scientists usually attribute to the term. Intelligence is not shuffling around symbols, recognizing patterns, or conveying bytes of information. Rather, human intelligence refers to the intuitive grasp of meaningful relations within the world, an activity that relies on embodied experience and language-dependent thought. The AI critic Hubert Dreyfus summed up this meaning of intelligence as “knowing one’s way around in the world.” Algorithms, however, have no body, have no world, and therefore have no intelligence or understanding.

The only way we can even conceive of computers attaining human understanding is a radical redefinition of this term in functionalist terms. As Erik Larson has shown, we owe this redefinition in part to Alan Turing, who, after initial hesitations, reduced intelligence to mathematical problem solving. Turing and the AI researchers after him thus advanced a fundamental mechanization of nature and human nature. We first turn reality into a gigantic biological-material mechanism, then reconceive human persons as complex machines powered by a computer-like brain, and thus find it relatively easy to envision machines with human intelligence. In short, we dehumanize the person in order to humanize the machine. We have in fact, as Berdyaev prophesied, exterminated the human in order to create machines in the image of our de-spirited, mechanized corpses.

In sum, the problem for a proper assessment of so-called AI is not an imminent threat of actual machine intelligence, but our misguided imagination, which wrongly invests computing processes with a human quality like intelligence. Not the machines but we ourselves are to blame for this. Algorithms are code, and the increasing speed and complexity of computation certainly harbor potential dangers. But these dangers arise from neither sentience nor intelligence. To attribute human thought or understanding to computational programs is simply a category mistake, and increasing computational power makes no difference: no amount of computing power can jump the ontological barrier from computational code to intelligence. Machines cannot be intelligent, have no language, won’t “learn” in a human educational sense, and do not think.

As computer scientist Jaron Lanier pithily sums up the reality of AI: “there is no A.I.” The computing industry should return to the common sense of those AI researchers who initially disliked the label AI and called their work “complex information processing.” As Berdyaev reminds us in the epigraph above, the true danger of AI is not that machines might become like us, but that we might become like machines and thereby forfeit our true birthright.

Featured image by Geralt (Gerd Altmann) from Pixabay.
