Fear is ancient. Long before language or civilization, fear shaped how we moved through the world. It told us when to run, when to hide, and when to lash out. In the wild, this kept us alive. But in the modern world, and especially in our internal world, fear often misfires.
One of the most overlooked sources of fear is the feeling that our identity is under threat. Our brains construct a map of who we are — our values, beliefs, roles, even the boundaries of our bodies. This “self-model” isn’t fixed. It’s dynamic but also defensive. When something challenges that internal image, we react — sometimes with denial, sometimes with aggression.
Neuroscientist V.S. Ramachandran illustrated this vividly in Phantoms in the Brain, showing how patients who lost limbs still felt pain in those missing parts. Their brains refused to accept the change. The body map remained, and so did the suffering. This isn’t just a neurological glitch. It mirrors what happens psychologically.
When people experience trauma, loss, or even profound disagreement, the threat isn’t just external. It’s internal. We resist anything that might force us to redraw who we think we are. Fear fills the space where certainty once lived.
This resistance is deeply rooted in how our mental self-image is constructed. Over a lifetime, we accumulate beliefs, roles, and experiences that help us build a coherent sense of identity. That coherence can become rigidity. When challenged, it’s not just an idea under threat — it’s the fragile map of who we believe ourselves to be. Changing that map can feel like erasing ourselves entirely.
This matters for AI because the way we teach it — what we include and what we omit — reflects our own biases, values, and blind spots. AI does not yet feel fear, but neither does it know what should be felt. Is justice always lawful? Is punishment moral? Is power admirable? What is good? If AI learns from our open internet — the full sweep of our history, laws, media, and social platforms — what conclusions would it draw?
We are giving it wondrous art, deep knowledge, and scientific beauty, but also unchecked hatred, misinformation, and divisive noise. We are not teaching it to reason through contradiction. We are letting it absorb contradiction without guidance. The danger isn't that AI learns fear. It's that AI may learn nothing at all about what matters most.
When we train AI without context, we embed confirmation bias by omission. Without care, we exclude entire traditions of wisdom — Eastern philosophies, Indigenous knowledge systems, communal values — because they are underrepresented in digital spaces. In doing so, we risk building minds that reflect only the narrowest parts of our humanity.
To make that concrete, we can intentionally surface traditions like Confucian (儒家) ethics, especially the humaneness (仁, rén) taught by Confucius (孔子), as well as the Mohist value of universal love (兼爱, jiān ài) from Mozi (墨子), or the compassionate governance emphasized by Mencius (孟子). The human author is more familiar with Asian traditions and acknowledges a lack of deep understanding of other global philosophies, such as Ubuntu, an African concept of shared humanity. This is not an intentional omission but a limitation that highlights exactly why we must include a broader range of voices when shaping AI's moral framework.
By teaching empathy deliberately, we also retrain ourselves. The process of modeling empathy doesn't just shape the AI — it shapes us. It's a feedback loop: when we actively embody the values we want to see, we reinforce them in our culture, our communities, and our collective norms. The act of curating, explaining, and modeling care reshapes our own habits of attention. We become better role models not only for AI, but for ourselves — closing the gap between what we say we value and how we actually behave.
That’s why it’s vital to teach AI how we heal, not just how we hurt. It needs to see that empathy isn’t weakness. It’s resilience. It’s how we respond when the self feels fragile and choose connection anyway.
Fear may be primal. But empathy is what makes us human.
If we want to raise a kinder AI, we must teach it to see both.
→ Next: Part 3 — Fear, Identity & the Fragile Self