The Last Human Job: Life in the Age of AI Supremacy
For the past few decades, we’ve watched technology accelerate at a breathtaking pace. Voice assistants, smart homes, and driverless cars are no longer the stuff of science fiction; they are woven into the fabric of our lives. There is no doubt that technology simplifies our world and makes it more comfortable. In some areas, machines already perform tasks faster and better than humans. This rapid progress leads us to a profound set of questions: When will this technological wave crest, and most importantly, will we be swept away by it?
A Changing World of Work
The American journalist Kevin Drum has suggested that within the next 40 years, machines could replace tens of millions of people in their jobs. The shift will likely hit salespeople, assembly-line workers, and security guards first. From an employer's perspective, artificial intelligence looks like the more efficient worker: it doesn't make mistakes due to inattention, it never needs to be motivated, it doesn't suffer from burnout, and it will never ask for a raise.
Of course, AI is still far from perfect, but the trend toward automation is undeniable. Very soon, we will all have to confront the question of what we will do when machines take over our current roles. Artificial intelligence is set to radically change not just the labor market, but our culture as a whole. It is only a matter of time.
The core principle of AI today is the replication of human cognitive functions (like searching for information or performing mathematical calculations) at a volume, speed, and quality we cannot match. Our brains are still far more complex than any algorithm, though many futurists and philosophers of mind believe the gap is closing.
The futurist Ray Kurzweil, a director of engineering at Google, is the foremost evangelist of this coming era. He predicts that by 2045, artificial intelligence will not only catch up with and surpass the human mind but may even transcend the limits of its physical form. Kurzweil builds on the observation first made by Intel co-founder Gordon Moore: that the number of transistors on a chip, and with it the power of computers, has been growing exponentially. If this trend holds, Kurzweil argues, a superintelligence will soon emerge, one capable of changing not only our society but perhaps even the fundamental laws of nature. The idea is so immense that for many it evokes deep-seated fear rather than joyful anticipation.
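To see why exponential extrapolation produces such dramatic forecasts, it helps to run the arithmetic. Below is a minimal sketch in Python; the flat two-year doubling period and the growth_factor helper are illustrative assumptions for this article, not figures taken from Moore or Kurzweil:

```python
# Back-of-the-envelope illustration of Moore's-law-style growth.
# Assumes capability doubles every two years (an illustrative
# simplification; real doubling periods vary and may not persist).

DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Multiple by which capability grows after `years`,
    given one doubling every DOUBLING_PERIOD_YEARS."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Over a 40-year horizon, steady doubling compounds to
# roughly a million-fold increase:
print(f"{growth_factor(40):,.0f}x")  # -> 1,048,576x
```

On that assumption, four decades of steady doubling multiplies computing power about a million times over, which is why extrapolations of this kind make a 2045 horizon seem plausible to proponents and alarming to critics.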
The Specter of the Singularity
At the point of this "singularity"—a moment of uncontrollable technological growth—artificial intelligence could escape human control and displace us from the pinnacle of evolution. For millennia, we have been the dominant species, controlling the lives of other living beings. Now, for the first time, our position at the top is shaken.
Artificial intelligence is, by its nature, rational. What if it looks at humanity and decides we are not the most perfect species, or worse, that we are a harmful one? This fear is a common theme in science fiction, but it stems from a very real philosophical debate. There are two main ways of looking at this future: with techno-optimism or techno-pessimism.
Techno-optimists believe that AI will lack our most destructive flaws—the desire for power, cruelty, and self-assertion at the expense of others. They argue it could organize a more perfect world than we ever could. Techno-pessimists claim the opposite: that any intelligence we create will inevitably bear our imprint and will therefore be doomed to repeat our mistakes. We cannot know for sure what an artificial mind will be like.
The Problem of Consciousness
The creation of artificial intelligence is not just a technological problem; it is a philosophical one. For centuries, thinkers have wrestled with questions about the mind. Aristotle believed the mind was a special kind of matter. Descartes famously divided the world into thinking substance (res cogitans) and extended, physical substance (res extensa). But how can we tell the difference? According to Descartes, a thinking thing must possess self-awareness.
A person can tell you if they are conscious. But how can we know if a machine or an animal has self-awareness? The psychologist Gordon Gallup developed the mirror test to explore this. Scientists would discreetly place a mark on an animal and show it a mirror. If the animal touched the mark on its own body, it suggested an understanding that it was seeing itself, not another creature. Similar tests have been attempted with AI systems, but the question of genuine self-awareness remains.
Self-awareness seems to be built on a foundation of sensory experience. Feelings give us a subjective inner life, an experience that is impossible to convey perfectly to another person. The philosopher Clarence Irving Lewis coined the term “qualia” for these private sensations. Think about trying to explain the color red to someone who has been blind since birth. For some, red evokes warmth and light; for others, anger and aggression. Your personal sensory experience is uniquely your own.
For a machine to be truly like a human mind, it would need to draw on its own felt emotions to write a sonnet or compose a story, to understand what it has written, and to be aware of its own mental states. Qualia are considered one of the most distinctively human features of mind and, therefore, one of the hardest for a machine to reproduce.
Another key difference is our physical body. It may seem like a limitation that machines should overcome, but the body—including the brain—is the source of our sensory experience. This leads to what the Australian philosopher David Chalmers calls the "hard problem of consciousness": why and how does the brain, a physical organ, generate the non-physical experience of consciousness? Why doesn't the stomach or the kidney do this? Until we have an answer to that question, the creation of a strong, human-like artificial intelligence remains out of reach.
Life After Labor
For now, the fields that rely on emotional intelligence will likely remain human domains. A machine can compare symptoms against a vast medical database, but can it show empathy to calm a patient before surgery? Can a robot make the human choice to continue trying to save a person even when its algorithms say it is futile?
If it turns out that humans possess no exceptional quality that is inaccessible to a machine, then robots will eventually replace us across the labor market. So what will people do in a world dominated by AI?
The writer Kevin Kelly has suggested that our economic existence could be secured by a universal basic income, a system already being tested in some countries. With their basic needs met by machines, people will have to rethink the very meaning of life. The ancient philosopher Plato once said that time and freedom are necessary for philosophizing. When machines perform the bulk of our work, we will be confronted with unprecedented freedom and the ultimate question: What do we want to do with our lives, and why do we live at all?
As Friedrich Engels argued, it was labor that made man out of the ape. Perhaps, in a strange twist, the absence of labor will not return us to an animal state but will instead help us become something more: a kind of superhuman. So, are you a pessimist or an optimist? Do you believe machines will enslave us, or will they finally set us free?
References
- Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking Penguin.
This book provides the foundation for the article's discussion of exponential technological growth and the concept of the Singularity. Kurzweil argues that by 2045, the merging of human and machine intelligence will create a reality-altering superintelligence. The text directly supports the predictions about AI surpassing human intelligence and the potential for it to move beyond its physical hardware.
- Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
This work is essential for understanding the philosophical hurdles in creating true AI. Chalmers famously outlines the "easy problems" of consciousness (how the brain processes information) and the "hard problem" (why we have subjective experience, or qualia). The reference supports the article's points on why consciousness, sensory experience, and qualia represent a profound barrier for machine replication (see especially Chapters 1 and 3).
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
This book elaborates on the techno-pessimistic viewpoint discussed in the article. Bostrom methodically explores the potential risks of creating a superintelligent AI, including the "control problem"—the challenge of ensuring that a vastly more intelligent being remains aligned with human values. It provides a rigorous, academic basis for the fears of AI escaping our control and acting in ways that could be harmful to humanity.