The Spiritual Significance of the Rise of AI
In the future, when we look back at our present historical moment, the rise of artificial intelligence is sure to be among the major headlines. The media is overflowing with reports and op-eds declaring that we have entered the “Age of AI.” Some commentators are breathlessly comparing it to humanity’s taming of fire. And within the culture of Big Tech, enthusiasm for AI has gone beyond merely framing it as a new age of human history—the hyperbole is now reaching metaphysical proportions. A prominent example of the quasi-religious regard now afforded to AI’s emergence can be seen in a recent speech by the historian Yuval Noah Harari, a favorite guru of Silicon Valley, in which he claimed: “For 4 billion years, the ecological system of planet Earth contained only organic life forms. And now, or soon, we might see the emergence of the first inorganic life forms in 4 billion years. Or at the very least, the emergence of inorganic agents.”
While I agree that the advent of AI is historically significant, I think Harari and his transhumanist colleagues are confused and mistaken about the potentials of artificial intelligence—both positive and negative. Although the advent of AI will certainly transform our society, if we are to facilitate its gifts while constraining its threats, we have to accurately understand what it is and what it is not. According to Jaron Lanier, another guru of Silicon Valley, but one who is much wiser than Harari, “the easiest way to mismanage a technology is to misunderstand it.”
Among the most intriguing aspects of current generative AI technologies (including large language models or LLMs, such as OpenAI’s GPT) is the fact that even their creators do not yet fully understand how they work. The technical mysteries surrounding AI have helped fuel speculation that it will soon develop to the point where it exhibits a kind of superhuman intelligence with its own purposes—Harari’s “inorganic agents.” The anticipated advent of conscious, self-acting machines, often referred to as artificial general intelligence or “AGI,” promises to fulfill a long-held science fiction fantasy of autonomous robots. There is, however, a strong minority of credible voices who claim that AGI is impossible. I’m not a technologist. But as a philosopher, I’m inclined to side with the “minority report,” which holds that authentic consciousness includes what Lanier calls a “mystical interiority”—a form of self-awareness that is beyond the capacity of any possible machine.
There is no doubt that AI models will continue to get “smarter.” But as I argue in this article, machines cannot become self-conscious agents. Nevertheless, the billions of dollars now being invested in AI research and development, much of it devoted to discovering just how “generally intelligent” AI models can in fact become, remain a good investment. Finding the upper limits of what LLMs can do will help us address the numerous ethical, economic, and political issues raised by the rise of this new technology. But I think the most significant outcome of our quest to discover what AI can and can’t do will be found in how this endeavor can advance our understanding of humanity’s higher purpose. In other words, as we discover the limits of artificial intelligence, this will confirm the spiritual significance of human beings, and help clarify our unique contribution to the evolution of the universe. As New York Times columnist David Brooks put it: “AI will force us humans to double down on those talents and skills that only humans possess. The most important thing about AI may be that it shows us what it can’t do, and so reveals who we are and what we have to offer.”
The Irreducible Qualities of Humans
Beginning with the invention of the first labor-saving devices thousands of years ago, humans have been delegating as much work as possible to machines. This is one of the primary ways that we have increased our productivity and created economic growth. And since the advent of computers, our ability to save labor has expanded to include mental labor. Now with the rise of AI, some seem to think that there may be no end to what we can delegate to machines.
In the extensive commentary surrounding the social significance of AI, numerous writers have attempted to list the human qualities that machines will never duplicate. These qualities include love, compassion, devotion, intentionality, intuition, and self-awareness, to name just a few. These writers point out that even though our increasingly sophisticated machines might effectively mimic these qualities, the qualities themselves cannot be reduced to mechanistic processes capable of being performed by automation.
The main reason why many essential human qualities cannot be reproduced by a machine, or otherwise delegated to AI, is that these qualities arise from and depend on experience. Human experience is arguably the most significant phenomenon in the universe. Our experience is really all we can ever know. While experience may not be the only real thing, it is certainly the most real thing for each of us. Machines, however, cannot have experiences because there is “nobody home.” AI may acquire knowledge, but because it cannot experience that knowledge, it cannot truly understand its meaning. As data scientist David Hsing observes: “Meaning is a mental connection between something (concrete or abstract) and a conscious experience.” So as long as AI lacks the ability to have direct experience, it cannot become conscious in any meaningful sense.
Philosophers have identified a key aspect of conscious experience that cannot be fully explained with reference to the physical activity of the brain. This is what they call qualia—the subjective, first-person point of view, the sense of what it is like to see or feel something. Those who subscribe to the philosophy of reductive physicalism, which appears to be the dominant view in the tech community, are vexed by the phenomenon of qualia. It creates what they call the “hard problem” of explaining how the physical brain generates subjective consciousness. This problem, however, is merely a troublesome limitation of the computational theory of mind (and related reductionistic theories of mind), which holds that “the human mind is an information processing system and that cognition and consciousness together are a form of computation.”
Yet as we come to discover what AI can and can’t do, this will eventually refute the naive theory that mental phenomena can be reduced to physical processes. As philosopher David Bentley Hart writes: “In the end every attempt to fit mental phenomena—qualitative consciousness, unity of apprehension, intentionality, reasoning, and so forth—into a physicalist narrative, at least as we have decided to define the realm of the physical in the modern age, must prove a failure. All those phenomena are parts of nature, and yet all are entirely contrary to the mechanical picture.” Moreover, according to Nobel laureate in physics Roger Penrose, “there is something about consciousness that we don’t understand, but if we do understand it, we will understand that machines can never be conscious.”
While we can identify numerous human abilities that arise from and depend on conscious experience, and thus can never be duplicated by unconscious AI, among the most significant is imagination. Imagination ranks among humanity’s most meaningful irreducible abilities because it provides the foundation of creativity, innovation, and cultural evolution overall. Humans can almost always imagine how things can be made better, and this is what has allowed us to create our modern civilization. Artificial intelligence can produce recombinations that have never existed before, but these new connections are ultimately bound by what has already been programmed into the machine. As David Hsing writes, “The fact that machines are programmed dooms them as appendages, extensions of the will of their programmers. A machine’s design and its programming constrain and define it.” When cognition is bound by finite programming, no matter how large the dataset of that programming, it will always lack the degrees of freedom necessary for authentic imagination.
Besides imagination, I could go on to discuss the many other uniquely human qualities that depend on conscious experience and thus can only be mimicked—not authentically performed—by an unconscious machine. But the prominent technologists and thought leaders who assume that the “singularity” of AGI is imminent will dispute the heart of my argument because they believe machines will, in fact, soon become “conscious.” This assumption, however, is a form of metaphysical confusion that arises from the impoverished philosophy of physicalism, which holds that the universe is nothing more than matter in motion. On this view, because everything must ultimately be reducible to physical matter, there must be a seamless continuum or gradient between the simplest forms of matter and the most complex forms of human thought. This is what I call the fallacy of the gradient.
The Fallacy of the Gradient
According to physicalist theories of mind, if we reproduce the complexity of the human brain using a silicon-based neural network with sufficient processing power, consciousness will emerge from the network. The assumption behind this thinking is that conscious self-awareness is simply the product of a physical process. So once we are able to reproduce something similar to this physical process in an artificial, nonbiological substrate, first-person mental states will naturally appear. This physicalist assumption, however, ignores crucial features of the sequence of evolutionary emergence through which mind has appeared in the universe. Simply put, if you want to reproduce something, then it’s important to understand how it was produced in the first place.
Time and space emerged with the Big Bang 13.8 billion years ago. At first there was only hydrogen and helium gas. But then, through the physical process of cosmological evolution, matter complexified over time, resulting in our present universe of galaxies, stars, and planets. As science has shown, the process of cosmological evolution has produced a gradient of development, which we now recognize in the periodic table of elements. Similarly, once life appeared on our planet 3.7 billion years ago, the physical process of biological evolution produced a gradient of development through which single-celled organisms gradually evolved into complex animals exhibiting sentient subjectivity—otherwise known as consciousness. Despite the unexplained discontinuities in the evolutionary gradient from hydrogen atoms to humans, the physicalist narrative is confident that as science progresses, it will eventually explain everything, including mind, as the product of the gradual evolution of matter. Following this reasoning, it is therefore just a matter of time before technologists are able to reproduce the physical processes that give rise to consciousness. And through the rapid development of AI, they think that we are now getting very close to such a breakthrough.
What this gradualist narrative fails to adequately account for, however, are the “gaps” or “jumps” in the structure of evolutionary emergence. Although it remains shrouded in mystery, the Big Bang arguably made a radical jump from nothing to something. Similarly, the emergence of the first life forms constituted a jump from nonliving chemistry to self-replicating organisms. Although this jump (known in science as a saltation) is often downplayed by physicalists, the appearance of DNA represents a radical discontinuity between life and nonlife. Notwithstanding billions of dollars in funding and seventy years of careful research into the origins of life, the emergence of the amazing DNA molecule has never been reproduced in a lab or otherwise explained. A similar evolutionary discontinuity is found with the emergence of humans. Although our animal bodies are only incrementally different from those of other primates, our minds exist at a significantly different level. As evolutionary biologist Marc Hauser observes, “cognitively, the difference between humans and chimps is greater than that between chimps and worms.” While the gradual physical process of natural selection may explain the biological evolution of species, it cannot explain the profound mental discontinuity between humans and other animals.
The point of describing these unexplained jumps in the sequence of evolution that have led to the emergence of the human mind is to challenge the outworn materialist metaphysics that underlies predictions of the “coming singularity” of conscious AI. Those who confidently anticipate that machines will soon become self-aware agents would have us believe that by creating sufficiently complex technology, we can clear the high hurdles of discontinuity, not only between matter and life, but also between animal sentience and the unexplained wonder of the human mind. These jumps constitute the momentous events of evolutionary emergence through which something entirely new enters the universe. With the emergence of life comes intention—unlike nonliving physical systems, biological organisms strive to survive and reproduce. Then with the appearance of humanity, a higher-order form of self-conscious intention emerges. Nonhuman animals may have purposes, but we humans have purposes for our purposes. In fact, the emergence of the ingenious and endlessly imaginative capacities of human purpose creates a new kind of evolution—the psychosocial domain of development wherein we transcend our biological origins through cultural evolution. We can expect that AI will demonstrate its own emergent capacities, like those seen in other complex adaptive physical systems, such as weather systems. But physical forms of emergence such as these will not mean that AI has become alive, let alone consciously aware of itself.
Those who side with the “minority report,” which claims that conscious AGI is impossible, employ a variety of arguments to prove their point. But among the numerous arguments that attempt to refute the possibility of authentically conscious machines, I think the argument from evolutionary emergence outlined above provides one of the best reasons to conclude that, as Lanier puts it, “humans are special.” What the science of evolution reveals about the origins of mind begins to show why the notion that we can reproduce the emergence of mind through complex computation is a science fiction fantasy.
Therefore, to prevent the kind of misunderstanding—both technical and metaphysical—that will cause us to mismanage the powerful new technology of AI, we need to stop assuming that there is a seamless continuum between our current generative AI models and the emergence of the inorganic agents predicted by many in the technology community. Although it spoils the cherished fantasy that we can become like gods by creating conscious artificial beings, we need to look through the media fog surrounding this “Promethean moment” to recognize how the limits of generative AI models are already beginning to appear.
The Limits of LLMs Are Already Becoming Apparent
Artificial intelligence’s current state of the art will certainly improve in the near term. Positive developments now on the horizon include models that can generate their own training data to improve themselves, models that can fact-check their own answers, and “sparse expert” models that provide increased computing efficiency. But even with these anticipated improvements, LLMs will still be prone to giving inaccurate answers known as hallucinations. The hallucination problem that plagues LLMs is unlikely to be solved by greater efficiencies or larger datasets. More data may make inaccuracies less frequent, but this could paradoxically make the problem worse by inviting users to invest greater confidence in answers that, although wrong less often, could be more damaging when they do fail because of their unpredictability. In his essay “Deep Learning Is Hitting a Wall,” NYU professor Gary Marcus points out that large language models are inherently unreliable. “And just because you make them bigger doesn’t mean you solve that problem.”
Perhaps as a result of this seemingly intractable accuracy problem, as reported by Wired, “OpenAI’s Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas. … Altman confirmed that his company is not currently developing GPT-5.” It is thus becoming apparent, even to some AI experts who are confident that AGI will eventually be achieved, that the generative AI models behind LLMs will soon reach a plateau in their development.
Others in the AI community, however, believe that current versions of AI are already demonstrating emergent behaviors that show “sparks” of AGI. In March 2023, Microsoft released a research paper titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” which claimed that an early, pre-release version of GPT-4 was learning on its own and beginning to demonstrate abilities that it had not been programmed to perform. The paper’s authors concluded: “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Although this paper received extensive media coverage, a subsequent Stanford paper that debunked Microsoft’s claims was not as well publicized. As reported by Vice: “In a new paper, Stanford researchers say they have shown that so-called ‘emergent abilities’ in AI models—when a large model suddenly displays an ability it ostensibly was not designed to possess—are actually a ‘mirage’ produced by researchers.” Again, while I believe emergent capacities in AI are likely, these will not amount to the conscious awareness required for authentic AGI.
Yet even though there are good reasons to believe that AI won’t become conscious, the rise of AI still poses significant risks. According to David Bentley Hart, the impossibility of AGI does not eliminate the threats. “The danger is not that the functions of our machines might become more like us, but rather that we might be progressively reduced to functions in a machine.”
The Rise of AI Can Help Clarify and Illuminate Humanity’s Higher Purpose
In a widely read op-ed in the Wall Street Journal, Henry Kissinger, together with former Google CEO Eric Schmidt and MIT’s Daniel Huttenlocher, sounded the alarm about the coming dangers of AI. These authors lamented that “No political or philosophical leadership has formed to explain and guide this novel relationship between man and machine, leaving society relatively unmoored.” While I disagree with many of this op-ed’s conclusions, I strongly agree with the authors’ call for better philosophical leadership. Indeed, our society now faces numerous pressing problems which require the kind of guidance that cannot be adequately supplied by the prevailing philosophy of physicalism.
Despite the popularity and influence of many contemporary physicalist philosophers, the rise of AI promises to overthrow the prevailing materialist regime and replace it with a philosophical culture that can better account for authentically transcendent realities such as consciousness itself. As noted, physicalism cannot solve the “hard problem” of consciousness; this kind of philosophy cannot explain the subjective experience of conscious awareness, only “explain it away.” A similar conundrum for physicalism is seen in its widely held tenet that authentic human agency—free will—is an illusion. It is thus ironic that some of the most prominent voices in the physicalist camp (such as Harari) are now predicting that machines will soon become self-conscious agents. Therefore, as we begin to discover the limits of AI, and come to better understand that human experience and intention are not merely physical, this will conclusively show that reductive physicalism is false, which will be a “spiritual breakthrough” in its own right.
There are many credible philosophical alternatives to physicalism, and philosophical debate among these schools of thought will undoubtedly continue. Yet almost all of these potential alternative philosophies can be supplemented and improved by a more robust philosophical interpretation of what science has revealed about our evolving universe. The facts of universal evolution—from the Big Bang to modern human culture—have only been revealed relatively recently, and have thus not yet been adequately digested or interpreted philosophically. It is, however, within this enlarged understanding of the story of our origins that we can find the philosophical leadership called for by this historical moment. Contemplating the structural sequence of emergence that has produced not only complex forms of matter, but also the irreducible interiority of mind, can accordingly help us overcome our society’s contemporary “meaning crisis.”
As I have argued extensively elsewhere, this kind of holistic understanding of our evolving universe begins to reveal the purpose of evolution itself. The idea that evolution has a purpose is, of course, ruled out by physicalists. But as we come to discover that human-level purposeful behavior requires conscious mental experience, which itself depends on billions of years of evolutionary emergence, this may change some minds. As we find the limits of artificial agency, and the concomitant uniqueness of relatively free human will, those who deny that there is a purpose of evolution overall may at least have to admit that there is authentic purpose in evolution. That is, as we come to discover that machines cannot become conscious and thus cannot become independently intentional, this will help us better appreciate that the free will we all take for granted is an important part of what makes humans special.
By showing how the “superpower” of self-aware agency is an evolutionary achievement that is unique to humans, the rise of AI can also help us better appreciate how our ability to create authentic expressions of goodness, truth, and beauty is also special. While AI can certainly create novel outputs that humans find intrinsically valuable, these outputs can only be synthetic recombinations of the existing inputs that humans have already created. However, the ability to create fresh and truly original forms of value—creations that surpass the mashed-up simulacra of AI—ultimately depends on the capacity to directly experience such value. For example, the ability to make moral decisions, by definition, requires self-aware agency, which again stems from our ability to have experience. If we have no choice, then a decision cannot be said to be an authentically moral choice. Beauty likewise depends on consciousness for both its experience and original creation. Subjective feeling is an irreducible aspect of both aesthetic perception and genuine artistic achievement.
By revealing how and why humans are special, another spiritually significant dividend provided by the rise of AI will be the philosophical rehabilitation of humanity’s unique moral standing in the universe. We have done well to reject traditional forms of anthropocentrism, which have been used to justify the mistreatment of animals and the destruction of the environment. But the rise of AI can help us embrace a more enlightened form of human specialness—one which better recognizes our moral obligation to respect and preserve nature, and to better care for each other.
Of all the marvelous achievements of evolution, arguably the most significant emergence of all is the purposeful agency found within living things. This purpose quickens as life evolves, eventually leading to the momentous emergence of humanity’s unique form of creative purposiveness. Indeed, it is our distinctive capacity for imagination which gives us the creative power to bring entirely new and original things into existence, such as the amazing technology of AI. The invention of AI, together with the subsequent discovery of the inherent limitations of AI’s mechanistic cognition, can accordingly help clarify and illuminate humanity’s higher purpose.
By evolving into self-awareness, humans provide a way for the universe to experience itself. Our bodies and minds are both the product of evolution, and the means whereby evolution can extend itself further through the seemingly unlimited potentials of human personal and cultural growth. Humanity’s uniquely creative powers thus reveal our special role as agents of evolution—we are the bearers of the universe’s teleology. And as we work to bring more goodness, truth, and beauty into the world, we help fulfill the purpose of evolution overall. We can therefore rediscover an authentically transcendent form of higher purpose for humanity in the ongoing project of working for a better world—both externally and internally.