THE PURSUIT OF WORLDLINESS
by Barry Edelson

Artificially Yours
AI hype and hysteria
Notwithstanding the misnomer by which we refer to artificial intelligence, there is no such thing as artificial knowledge. All of the world's knowledge is human knowledge. Every single piece of usable information, in whatever form, is the product of human observation, exploration, discovery and imagination. Whether etched in stone or engraved on clay tablets, written on paper or parchment, recorded as sound waves or video images, sorted into bits or merely spoken aloud, the storehouse of knowledge on our planet is solely the product of human thought and human communication. Though there is a vast reservoir of data in the genes and cells of other earthbound creatures, and in the air, ground and water that support and surround them, humanity possesses the only known means of examining and transmitting that data. The cosmos may teem with an infinite array of unfathomable phenomena, but the only entity here in our terrestrial habitat that is capable even of noticing and articulating the existence of the natural world, in all its astonishing variety, is the mind of Homo sapiens.

The rapid diffusion of AI into the technosphere has been accompanied by a great many claims about its potential usefulness and the breadth of its impact, along with a parallel anxiety about its possible, and perhaps existential, dangers. But there are several fundamental constraints on the practical application of AI that lie somewhere between self-serving cheerleading on one extreme and hysterical hand-wringing on the other. AI may indeed prove to have a deep and lasting impact on our societies, and it may also prove to be a troublesome genie that cannot be put back into its bottle. But its inherent nature — what it is, how it functions, and who is behind it — makes it unlikely either to usher in an entirely new era of existence or to bring about the imminent demise of life on Earth.
First and foremost, if all of the outputs generated by AI are entirely dependent on the knowledge accumulated by humans (so far), then there is a practical limit to how much invention and discovery it can actually attain. Let us suppose for a moment that Homo sapiens surrenders the field of scientific discovery altogether to artificial intelligence, and ceases to find and disseminate knowledge on its own. If AI picks up the ball from here, so to speak, and we contribute nothing more of value to the world's repository of information, then all creation going forward has only our unquestionably flawed and incomplete foundation upon which to build its as yet unfathomed edifices of brilliance. Won't the future look uncannily like the fever dream of early-21st-century humans, who will be the last generation (supposedly) to do any work or conjure anything new from the raw material of life? Wouldn't all future knowledge and creation just be an endless rehashing of what happens to exist right now? How narrow the view from here, and how sad! AI's large language models (LLMs), like all inanimate machines, have no direct experience of the universe, nor any means of experiencing it except those that humans provide. They do not have some kind of magical access to untapped sources of data superior to and larger than what our species has accumulated; they are merely able to perform searches and calculations based on existing data far more quickly and comprehensively than "normal" computing has been able to do. To be sure, this facility is already proving to be of enormous value in many fields of endeavor (even though, in its infancy, it seems prone to generate a good deal of twaddle and gibberish). But it is hard to see how it can be a generator of genuinely and entirely new information. Can it perform its own research? Can it observe the cosmos? Or the microscopic and subatomic realms from which the world derives its essence?
Much has been made of AI's "ability" to paint pictures, write songs and poetry, and produce essays and reports. These products may look and sound uncannily like the original human inventions from which it has "learned" to create its own works, but they are by definition, and by necessity, derivative. An LLM literally cannot "see" or "hear" anything other than what we have fed into its "brain". In theory, we could train an advanced computer to peer through a telescope or a microscope (no doubt someone has already done this), but the machine only knows where and how to look, and how to analyze and organize the data it finds, because we have programmed it to do so. The worn-out maxim of the computer age, "Garbage in, garbage out", most definitely applies. Adam Mastroianni hit this nail squarely on the head in a recent post:

    If you booted up a super-smart AI in ancient Greece, fed it all human knowledge, and asked it how to land on the moon, it would respond "You can't land on the moon. The moon is a god floating in the sky." How would you get it to realize the moon is actually a big rock? That's a great, poorly defined problem, and I don't expect AI to solve it anytime soon.

With all due respect to those select few who have in fact added significantly to the world's knowledge, the vast majority of human output — including much that passes for scientific inquiry — is in fact garbage. To expect artificial intelligence, all on its own, to rise above the miasma of false starts, dead ends, conspiratorial delusions and outright deceit that constitutes much of the glorious flowering of human genius is like expecting a ventriloquist's dummy, whose master is uneducated and illiterate, not only to recite but also to write the plays of Shakespeare. There are indeed some pearls of wisdom scattered among our intellectual ruins, but LLMs have so far proven themselves largely unable to distinguish the gems from the debris.
Possessing no discernment, they are incapable of making judgments about what is valuable and what is not. Human judgments are, decidedly, predicated on imperfect information and deeply flawed means of interpretation; but at least we are able to make a stab at true knowledge and, more important, question the validity of our discoveries. Thus has science slowly but surely raised us out of the primordial mire and into a technological civilization. LLMs do little more than predict which words should come next based on those that came before. They do not invent new words from whole cloth, nor do they produce truly original thoughts, only scrambled versions of those that already exist. They are predicated on repetition and, without continuous additional input (from us), their output may appear new, but it is in fact doomed to repeat itself in perpetuity.

This leads directly to AI's second irreducible flaw: it is first and foremost a product and prisoner of the Internet. The raw ingredients upon which AI feeds exist entirely in cyberspace, which is, of course, not the entirety of the universe but a decidedly limited and selective depiction of reality as we know it (so far). What sheer hubris (as the ancient Greeks would have said) to suppose that human thought and invention up to this random point in history, as collected online, is all AI requires to build a utopia for all time to come. It ought to be self-evident that life on Earth consists of a great deal more than the words and images we have managed to record to date, let alone those which happen to be available electronically. How would AI even know that anything else is out there — like a moon made of rock — unless we told it so? It has no direct experience of the world, only what we reveal to it, and what it can find in the electronic depositories of the present moment.
Moreover, without electronic communication and the electric grid on which it is utterly dependent, cyberspace and all the data it contains would disappear in an instant. If you want to know how a human society might fail when all material needs are satisfied by machines, read "The Machine Stops", E.M. Forster's cautionary tale, written a century ago, about just such a world. In the story, people live in near total isolation in small apartments that resemble well-furnished prison cells. They communicate with one another through devices that are very much like computer screens wired into an electric network. They consider direct human contact so abhorrent that it is strictly avoided unless absolutely essential. Food, water, sanitation, and all other basic needs are fulfilled by the machine. But at some point the machine itself starts to break down, and there is no one alive who has any idea how to fix it. Inevitably, the system collapses and everyone dies. This may seem like a peculiar tale coming from Forster, who was known not for science fiction but for his well-loved novels about oblivious English men and women in the declining years of empire. But the characters in his novels do share a common trait with those in "The Machine Stops": they are unaware that the edifice upon which their pre-eminent place in the world was built is now crumbling under their feet. They are smugly secure in a society that they assume will go on forever, but which is patently unable to sustain itself.

Human extinction of that variety is not what is typically contemplated by those who worry about AI, but it ought to be. The hype about AI is largely divorced from reality, because no machine can do everything that AI's creators promise it can do. During the dot-com boom of the 1990s, claims were also made about the transformative power of the Internet, which was indeed powerful and transformative, but not entirely in ways that its progenitors foresaw.
And, as we all know, it has led to a host of unforeseen consequences, many of them — isolation and loneliness, online bullying, the spread of disinformation, cyber warfare, the undermining of democracy — detrimental to human well-being.
And this leads to AI's final and perhaps most unappreciated limitation: it is being propagated by people who are trying to make money from it. There are certainly computer scientists who have been working on AI for decades and who are genuinely motivated (perhaps naively) by a belief in AI's potential to improve our lives, just as some of those who first conceived of the Internet thought it would be a force for unalloyed good. We know how that turned out. The main problem was the profit motive: no company building LLMs is doing it out of the goodness of its heart, any more than Mark Zuckerberg built Facebook because of his abiding love for his fellow human beings and his concern for their social happiness. In the end, these products will look much like all corporate products: banal, anodyne, and as inoffensive as possible. Everyone in tech obsesses about "innovation", but this concept has been hollowed out by corporations that are interested in innovation only insofar as it improves their profit margin and market capitalization. AI will no doubt change much about our world, but most of its benefits for mankind, just like its adverse effects, will be incidental and not intentional. The corporate capture of technology ought to inject a healthy dose of skepticism into otherwise worthy projects.

The AI hype echoes through any number of other technological advances, both real and imagined. In recent years, for example, some of the denizens of Silicon Valley have become enamored of the idea that if we could somehow engineer an atmosphere on Mars, we could build a new habitat there in order to escape the future hell of Earth's increasingly fragile ecosystem. It is Plan B (or Planet B) for human survival, as it were.
This notion is as utopian as it is technological, because its leading proponents are tech moguls who imagine that they will be able to leave all the problems of Earth behind and build a new, ideal society from scratch (with themselves as its leaders, of course). It doesn't seem to occur to any of these visionary geniuses that (a) if we could accomplish the unimaginably complex (and likely impossible) task of creating a liveable biosphere on the sterile desert of Mars, then we could also fix the environment we have; and (b) the main problem on Mars will not be figuring out how to engineer air, soil and water but how to re-engineer human nature. Because as sure as Martian night follows Martian day, all of the creativity, resourcefulness and determination that planet-hopping pioneers would bring to the task would be accompanied by all the greed, conflict and violence that prevent us from building a perfect world right here in the already breathable air of our good, green Earth. The fundamental challenge of AI is not that we will be stuck with an out-of-control technology that could destroy us, but that AI will forever be saddled with a maker that has proven very adept at destroying itself and all its own creations.

October 31, 2025

All writings on this site are copyrighted by Barry Edelson. Reprinting by permission only.