We've forgotten what it means to be human.
After decades of conditioning ourselves to think, act, and speak like machines, we've lost touch with our essential humanity. Now, as AI mimics our mechanized behavior with frightening precision, we face an identity crisis that runs far deeper than career anxiety.
The Great Forgetting
A week ago, I gave a presentation to a private leadership community on how to stay relevant in the age of AI. One of the points I made was that we need to rely more on human judgment. Someone asked me what I meant by “human judgment” and how to actually use it in real life.
It was a question that gave me pause. Have we forgotten how to trust our own judgment?
This isn't a matter of intelligence or capability. We've slowly drifted away from what makes us distinctly human. I see it in corporate meetings where people automatically defer to "what the data says" without questioning the assumptions behind the analysis. I hear it when someone insists we need to "follow the process," even when the process clearly isn't working in a particular situation.
What I'm Noticing
When faced with difficult decisions, we increasingly turn to frameworks, matrices, and algorithms rather than develop our capacity for nuanced judgment.
Organizations spend millions on "change management" processes that attempt to make human adaptation as predictable and mechanical as software updates.
The moment someone raises a concern based on intuition or embodied knowledge ("something feels off about this"), they're often asked for "data to back that up."
This pattern reveals a deeper truth. We've come to believe that optimization is the highest value and that uncertainty is a problem to be solved rather than a natural part of being human.
How did we arrive at this point of forgetting?
From Human to Machine
Back in 1958, the philosopher Hannah Arendt mapped human activity into three clear buckets, and her map still helps us judge what AI can (and can't) replace. She called them:
Labor: Repeating tasks that sustain life.
Work: Building durable things that shape the world.
Action: Beginning something new together in public, through words and deeds that reveal who we are.
For decades, our economic system rewarded those who excelled at labor and work. We became skilled at repetitive tasks, at creating deliverables, and at optimizing processes. We learned to speak in corporate jargon, to suppress emotion, to follow prescribed formulas for success.
And now AI excels at both labor and much of what Arendt called work. What remains distinctly human is action, yet this is precisely the area we've neglected to develop.
The tragedy isn't that AI might replace us. It's that we've spent so long replacing our humanity with machine-like behavior that we barely remember what being human feels like.
Voices from the Mechanized World
Last summer, I asked my LinkedIn community to share the language we use to describe our working lives. The responses were telling:
Rat Race. Cubicle Farm. Human Capital. Wage Slavery. Corporate Prison. Soul-Sucking Job. Dead-End Job. Hamster Wheel. Golden Handcuffs. Disposable Worker. Cog in the Machine.
Those phrases are warning signs. They're symptoms of a profound alienation from our humanity. We've normalized a way of working that treats people as interchangeable parts, as resources to be optimized.
When we describe ourselves as "cogs" or our workplace as a "farm," we're not just using colorful metaphors. We're expressing a deep truth about how dehumanized our work has become.
We didn't just let this happen to us. In many ways, we eagerly participated in our own dehumanization, believing that becoming more machine-like would make us more valuable.
The Artificial Mirror
Now, AI holds up a mirror to our mechanized selves, and the reflection is unsettling. When ChatGPT can write a convincing corporate memo or marketing plan, it's not just performing a task. It's revealing how formulaic, how algorithmic, our professional expressions have become.
This raises a profound question. If a machine can so easily replicate what I do, what does that say about how I've been working? Have I been functioning as a proto-algorithm all along?
The revelation isn't that AI is remarkably human-like. It's that we've become remarkably machine-like.
This mirror effect explains the existential anxiety many professionals feel about AI. It's not just fear of replacement, but recognition of how we've hollowed out our own humanity to succeed in systems that value predictability over creativity, compliance over judgment.
Beyond the Algorithms of Self
This mechanistic thinking has crept far beyond our professional lives. Look at how we approach our personal development and wellbeing now. We've begun to think of ourselves in algorithmic terms: optimizing our sleep, quantifying our health, measuring our social connections through likes and follows.
These approaches aren't inherently wrong, but they reveal how deeply we've internalized a mechanistic view of ourselves. We treat ourselves like dashboards instead of living beings.
Yet, the richest human experiences often defy measurement. The moment of connection with a loved one. The flash of creative insight. The sense of wonder in nature. The feeling of being truly seen and understood. These experiences aren't optimizable. They arise from our messy, emotional nature.
Let's ask ourselves how we can reclaim these essentially human qualities in a world that increasingly rewards machine-like behavior.
Rediscovering Human Judgment
As I've explored the concept of Career Humanism over the past year, I've come to see judgment as central to our humanity in the workplace. But what exactly is human judgment?
It's not simply making decisions. It's weighing competing values, navigating ambiguity, and taking responsibility for choices in complex situations where there is no single "right" answer.
Our workplaces have systematically devalued this uniquely human capacity. We've replaced it with "best practices," standardized processes, and metrics that give the illusion of objectivity. We've created elaborate approval systems that distribute responsibility so widely that no one person feels accountable for outcomes.
AI can follow complex decision trees and optimize for programmed objectives. But it cannot make true judgments because it has no skin in the game, no capacity to feel the weight of responsibility, no ability to balance competing values that cannot be reconciled through calculation alone. This is human territory, yet we've spent decades acting as if it were a bug to fix.
The Practice of Being Human
If we accept that reclaiming our humanity is both necessary and valuable, how do we actually do it? What concrete practices can help us recover capacities that have atrophied through disuse?
Reclaiming our humanity isn't a one-time decision but a daily practice. Drawing from ancient and modern philosophy, my research on human judgment has revealed several practices worth trying:
Trust your voice. Instead of hiding behind others' ideas or research citations, dare to speak from your own experience and insight. Your unique perspective, formed through your individual journey, is something no AI can replicate.
Embrace your body's wisdom. Our embodied nature gives us access to intuition and felt sense that pure logic can't match. Notice what your body tells you in crucial moments.
Make consequential judgments. Seek opportunities to exercise real judgment, where decisions matter, where multiple values are at stake, where there's no formula for the right answer.
Build genuine relationships. In an age of networked connections, the ability to form deep, trusting relationships becomes even more valuable.
These practices might seem simple, but they're countercultural in environments that prize data over intuition, consensus over individual voice, and transactions over relationships. They require courage and a willingness to risk being wrong, which are qualities that machines will never possess.
A Call to Human Flourishing
What if this moment isn't just a crisis but an opportunity? What if AI is forcing us to rediscover aspects of our humanity that our mechanized work culture has long suppressed?
This is where my concept of Career Humanism becomes not just a philosophical idea but an urgent practical necessity. I defined Career Humanism as:
"A holistic and human-centered approach to career development that prioritizes individual wellbeing, personal growth, and meaningful work."
I believe we're standing at the threshold of what could be a Human Renaissance, a rediscovery of our essential humanity after decades of mechanization.
Just as the original Renaissance followed the Middle Ages in Europe and launched an era of incredible progress in art, science, and human potential, this moment of profound disruption could lead to a new appreciation for what makes us uniquely human.
But this won't happen automatically. It requires us to consciously choose a different path, to value human judgment over algorithmic efficiency, to create workplaces that nurture our full humanity rather than just our productive capacity, to measure success not just in output but in meaning.
A Question Worth Considering
When was the last time you trusted your judgment over data, your intuition over process, or your humanity over efficiency?
If you struggle to remember, perhaps that's the strongest evidence that we need to reclaim what makes us human before we forget completely.