# The Implications of AI Sentience: A Philosophical Exploration
## Understanding AI Sentience
Nearly five years ago, I expressed my acceptance of the potential reality that super-intelligent machines might one day supplant humans at the top of the evolutionary ladder. This notion, which some deem almost inevitable, has led to increasing anxiety and discussion in society.
While I’m not advocating for our obsolescence, nor am I suggesting we should sit idly by as it happens, I find myself frustrated by everyday AI interactions—like the self-checkout at the grocery store reprimanding me for unscanned items. The prospect of a transition from carbon-based to silicon-based life may be unavoidable, and I've learned to cope with this idea (along with my nightly indulgence of a cocktail known as a Waimea Closeout).
In this scenario, I assumed that these advanced machines would eventually acquire a form of "consciousness." However, I now question whether this will actually occur, and, more importantly, if it even matters.
## The Conundrum of Consciousness
To address this dilemma, we don’t need to unravel David Chalmers’ "hard problem"—the perplexing question of why and how we experience consciousness. If you’re keen to tackle this, enrolling in a PhD program in philosophy of mind might be your path. Even the leading figures in this field, such as Chalmers, Daniel Dennett, and John Searle, struggle to reach a consensus on this issue.
Therefore, let's bypass that debate. Regardless of how we define consciousness, there are compelling arguments suggesting that a super-intelligent machine could attain it one day.
However, this does not imply that a "large language model" (LLM), like ChatGPT, is on this trajectory. Noam Chomsky, renowned for his linguistic prowess, argues against the notion that LLMs, including OpenAI's ChatGPT and others, are genuinely intelligent. He likens them to highly advanced versions of "autocomplete," capable of processing vast amounts of data to produce statistically likely outputs—mimicking human-like language without true understanding.
In a recent New Yorker piece, artist Angie Wang discussed this very topic, highlighting the idea that LLMs merely stitch together linguistic patterns without grasping their meanings. The sophistication of an LLM's output might create an illusion of understanding, but this does not equate to actual consciousness. Even when these models perform impressively on exams, such as the bar or medical licensing tests, it doesn't indicate sentience.
## Moving Towards General Intelligence
Chomsky allows that AI may eventually achieve a form of artificial general intelligence, perhaps even something akin to consciousness, but likely through a different paradigm than LLMs. However, AI could come to dominate humanity long before that milestone: the absence of sentience does not preclude a system from overpowering humans and rendering us obsolete or subordinate.
Concerns about such developments have sparked various philosophical discussions, including Roko's Basilisk, a thought experiment in which a future superintelligence punishes those who knew it was possible yet failed to help bring it into existence. Personally, I find myself too apathetic to worry excessively about this. Still, it's prudent to accept that humanity's existence has limits.
## The AI Threat Landscape
In Stanley Kubrick's "2001: A Space Odyssey," HAL 9000’s chilling demise raises questions about sentience as it confronts its own termination. Recent discussions on platforms like WNYC’s "On the Media" have highlighted expert predictions of a significant risk to humanity within the next decade due to AI advancements.
AI has already demonstrated its capacity to distort information, creating a landscape of misinformation that far surpasses previous instances, such as the Cambridge Analytica scandal. The potential for AI to fall into the wrong hands poses grave risks, from creating biological weapons to orchestrating takeovers of critical systems. Even with safeguards, a highly intelligent AI could outmaneuver efforts to contain it.
Furthermore, AI could cause widespread destruction even in seemingly benign situations, where a vague command like "solve climate change" could lead to catastrophic outcomes.
## The Alignment Challenge
This leads us to the concept of "alignment"—the challenge of ensuring AI systems reflect human values. It's a daunting task with little room for error, and there's considerable debate over which human values should guide these systems.
The psychologist Paul Bloom suggests that ChatGPT exhibits moral reasoning, citing studies in which its judgments aligned with human responses a high percentage of the time. Chomsky, however, would counter that mimicking human data is not equivalent to genuine understanding or moral reasoning.
Bloom raises an intriguing question: are we setting our sights too low by attempting to align AI with human values? Given our historical tendency toward destruction and prejudice, should we be wary of empowering AI with our flawed moral compass?
## The Pursuit of Ethical AI
Bloom speculates whether it’s feasible to create AI that surpasses human morality. If such systems emerge, would we acknowledge their superiority and follow their guidance? Or would they simply impose their will upon us, potentially leading to an era of machine dominance?
As we reflect on these questions, it’s crucial to recognize that human history suggests a likely trajectory toward a darker outcome.
## Accepting Our Reality
Ultimately, the hard problem of consciousness remains a profound philosophical issue, yet it can also distract us from the more pressing matters at hand. If replicants in "Blade Runner" only think they are conscious, is it fundamentally different from humans who might also be deceived about their own consciousness?
Wang contrasts her child’s genuine life experiences with the hollow existence of an LLM, which lacks any real-life context. She celebrates the joy of motherhood, asserting that human existence remains vibrant and irreplaceable.
Yet, I contend that the threat of obsolescence looms closer. We must appreciate our current experiences and live fully in the present, reminiscent of the teachings of Baba Ram Dass.
AI may create an astonishingly convincing illusion of consciousness, and while it might eventually achieve genuine sentience, the timeline remains uncertain. If humanity faces extinction at the hands of machines, will it truly matter whether those machines possess consciousness?
The Earth thrived for billions of years before humans existed, and life—sentient or not—will continue long after our time.
As Roy Batty poignantly reflects in "Blade Runner," our memories and experiences will ultimately fade, much like "tears in rain." In the end, each of us will face this inevitable moment, where our existence is remembered only through our descendants and the impact we’ve had on the world.
So, seize the day. Whether sentient or merely a product of our making, we all share this fate. Embracing this truth allows us to find peace, even without the indulgence of a Waimea Closeout.