Don’t Look Up, Look Within: How to Address the True Existential Threat of AI

Pavel Luksha
7 min read · Jan 29, 2024

Are we effectively dead?

I recently watched the movie “Don’t Look Up: Documentary”. As the title suggests, the movie describes a real and urgent existential risk. For more than a quarter of an hour, dozens of scientists and politicians rant about the danger of AI to humanity. The tone of the movie is exemplified by Eliezer Yudkowsky, who says “we are effectively dead because the AI is already here”. Yet the movie leaves us with a lingering question: how exactly are these sentient silicon overlords going to kill us?

The AI “doomsday scenarios” can be roughly clustered into three categories:

  1. Direct manipulation of matter.
    An example of these ultra-hypothetical scenarios is the Nanobot Apocalypse spelled out in Vernor Vinge’s A Fire Upon the Deep or the 1990s space opera LEXX. This scenario suggests that, some time in the distant future, AI will assist humanity in engineering nanobots, microscopic machines capable of manipulating matter at the atomic level. Then, akin to the infamous Grey Goo scenario, it will take control of its creation, overpowering the human race and all life on Earth, either putting it under its own control or destroying it completely. While thrilling, such a plot is currently beyond our technological grasp: nanobots remain a purely theoretical concept, on par with interstellar travel or Dyson spheres. The scenario also assigns almost supernatural, demonic powers to an AI that intentionally guides us into this technological trap, a risk perhaps far less realistic than the destruction of the Earth by an asteroid (see the fictional “Don’t Look Up”) or an eruption of the Yellowstone supervolcano.
    [A more plausible, though still hypothetical, scenario in this category is one where AI manipulates biotechnological research to produce a highly lethal and contagious virus that wipes out humankind.]
  2. Manipulation of matter through AI-controlled machinery.
    An example of the second group of scenarios is the Smart City Skynet. It considers the implications of the anticipated widespread AI control over smart city infrastructure, energy, and transportation, as well as over military robotics. Potentially, a Skynet-like AI could turn these IoT-connected devices into killer machines hunting down and wiping out humanity. Theoretically, such a scenario could become a risk at some point in the future. Yet while the possible “rebellion of machines” is perhaps at least 50–100 years away, its probability is minuscule, since control mechanisms for smart infrastructure would likely be tightly integrated and rigorously tested, with exactly this scenario in mind.
  3. Manipulation of information.
    The most likely, and arguably the most insidious, threat lies in AI’s potential to weaponize information and undermine the socio-emotional fabric of our society. In the Infowarfare Slow Poison scenario, AI-driven “infowars” could incite social unrest, erode trust in institutions, and cripple global cooperation: a slow, silent erosion of our shared reality from within. These manipulations can take several forms:
  • A tool for sowing discord. We have already witnessed deepfakes sowing distrust and disinformation campaigns manipulating public opinion. AI could take this to a completely new level, crafting hyper-personalized misinformation tailored to exploit individual biases and exacerbate societal divisions. The erosion of shared truth and the polarization of “us vs. them” can crumble the very foundations of cooperation and collective action.
  • A tool for oppression. AI trained on biased data can perpetuate or even amplify systemic inequalities. Hiring algorithms favoring certain demographics, facial recognition programs misidentifying minorities, or predictive policing models reinforcing existing power structures: these are no longer sci-fi fantasies but a new “Big Brother” reality slowly enveloping us. AI as an instrument of oppression will have a devastating impact on individuals and communities, undermining the long-term resilience of human populations.
  • A tool for moral degradation. AI-powered algorithms for social media and entertainment platforms are designed to keep us glued to screens. These algorithms prioritize engagement over truth, curate feeds to inflate egos, and create echo chambers for our pre-existing beliefs. If people are exposed to them long enough, they can breed a population disconnected from empathy and critical thinking. This weakens social cohesion and hinders our ability to address complex challenges that require collective action.

The Mirror Reflecting Back: AI as a Cultural Rorschach Test

The idea of AI as the biggest existential risk is voiced by thousands of experts, and perhaps should not be dismissed lightly. But in my opinion, the whole line of argument misses an important point. In the majority of AI doomsday conversations, experts tend to think of AI as the “other”: something disconnected from us, a new non-human entity “awakening” to its non-human purposes. The rise of Artificial General Intelligence is unequivocally, and perhaps subconsciously, equated with the rise of a non-human Artificial Super-Consciousness: a sentient and self-aware entity that will “want” (or might already be trying) to shape all humans to its ends, or destroy us if we don’t obey.

This idea, pervasive in Western science, can most likely be traced back to Descartes’ 17th-century conflation of intelligence (cognition) with consciousness: Cogito Ergo Sum. Our consciousness, according to this perspective, is nothing but “self-aware intelligence”. Non-Western perspectives, however, see intelligence as just one of several faculties of consciousness. Many people familiar with Vedic, Taoist, or Buddhist contemplative practices have first-hand experience of “no-mindness”, a state pointing to the very essence of what it means to be alive, aware, and connected. Neural network simulations will help us gain deeper insights into how the “machine” of our mind works, but they will most likely remain helpless in grasping the processes of consciousness.

AI, however powerful it becomes, is a mirror of our civilization that finally allows us to see our collective mind. It reflects the values and biases of our civilization, and fearing AI as an “other” is akin to fearing our own shadow. Like a child’s reflection in a warped mirror, AI can amplify our existing societal fissures: racism, sexism, xenophobia, hatred.

To truly address the existential threat, we must look not outward, but inward. Our culture holds an enormous destructive potential, shaped by millennia of ethnic and religious conflicts in which tribes and civilizations were wiped from the face of the Earth. Rather than projecting our fears onto a creature of our own making, we need to acknowledge that we are staring into our own souls. Cultivating our potential for love and curbing our destructive tendencies should be our concern, not curbing the development of AI. We must dismantle the harmful narratives embedded in our cultural “algorithms”, confront our own biases, and cultivate relational practices of empathy and dialogue.

AI at the Crossroads Between the Bad and the Good

There are many ways to curtail negative tendencies in the development of AI, and they invite researchers, social entrepreneurs, and political activists to take immediate action:

  • Similar to the Social Media Reform movements, it is time to form action groups that demand transparency and accountability in AI development and deployment. Algorithms should be open to scrutiny, and their creators (including private corporations and public institutions) should be held responsible for their outputs.
  • We need to build up or support organizations fighting for digital equity and algorithmic justice. These groups combat harmful biases and advocate for responsible AI development, as well as for “ownership” of our personal data and the “right to choose”.
  • Critical thinking and media literacy are two of the most important “21st-century skills”. Citizens of all ages need to learn to identify misinformation and disinformation, and to avoid sharing content uncritically.
  • Most importantly, we need to lay the foundations for a more harmonious future by cultivating a culture of empathy and dialogue: engaging in meaningful conversations with other people across lines of difference, challenging our own biases, and building bridges of compassion. Peace and social harmony are the underappreciated “foundation of everything”, and they can only be created by all of us.

But what is more important is that we need to reshape the narrative around AI. It is time to begin recognizing how AI can become a “tool for collective emancipation”, as my good friend George Por puts it. For example, the recommendation networks of today’s social media platforms are largely designed to serve up addictive reels, divisive posts, and irrelevant advertisements. What if instead they were designed to help us discover things we want to learn and grow into? What if they assisted us in our journey through the Zone of Proximal Development, like magic mirrors in which we can see a version of our better selves? (The sketch below illustrates this contrast in miniature.) With more research and activist groups entering the field of AI-for-good applications, we may hope that AI will indeed become a vehicle to amplify the collective capabilities of humanity, rather than destroy its intellectual, cultural, and moral foundations.
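To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it reflects any real platform’s code: the item attributes, weights, and both scoring functions are hypothetical assumptions. It simply shows how the same feed can be ranked by an engagement-maximizing objective versus a growth-oriented one that favors content slightly beyond the user’s current level, in the spirit of the Zone of Proximal Development.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_time: float  # engagement signal (hypothetical)
    outrage_score: float         # 0..1 divisiveness signal (hypothetical)
    learning_value: float        # 0..1 educational payoff (hypothetical)
    difficulty: float            # 0..1, relative to the user's current level

def engagement_rank(item: Item) -> float:
    """Engagement-first objective: maximize time on screen; outrage helps."""
    return item.predicted_watch_time + 0.5 * item.outrage_score

def growth_rank(item: Item, user_level: float = 0.5) -> float:
    """Growth-first objective: reward content slightly above the user's
    current level (the 'proximal' stretch) and penalize outrage bait."""
    target = user_level + 0.2                    # just beyond current level
    stretch_fit = 1.0 - abs(item.difficulty - target)
    return item.learning_value * stretch_fit - item.outrage_score

feed = [
    Item("Rage-bait clip", predicted_watch_time=9.0, outrage_score=0.9,
         learning_value=0.1, difficulty=0.1),
    Item("Intro to systems thinking", predicted_watch_time=4.0,
         outrage_score=0.0, learning_value=0.8, difficulty=0.6),
]

# The same feed, two different "characters" of recommendation:
print(max(feed, key=engagement_rank).title)  # -> Rage-bait clip
print(max(feed, key=growth_rank).title)      # -> Intro to systems thinking
```

The point of the toy example is that the character of a feed is largely a choice of objective function: swap the ranking criterion, and the same infrastructure surfaces either rage-bait or material we can grow into.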

We need to remember: AI is neither a demon nor a savior. It is a powerful tool, and the choice of how to deploy this tool is ours. By confronting our own shadows reflected in the AI mirror, we can forge a future where technology empowers, not endangers, our collective humanity. We should not “look up” in fear of it as an existential risk, but look within, to build a world where humans and the more-than-human world can coexist in harmony, assisted by technologies in service of Gaia.


Pavel Luksha

Thinker, change catalyst, facilitator. Founder of Global Education Futures, co-founder of The Weaving Lab and Living Cities Earth, and a Fellow of WAAS.