How to Raise a Kinder AI: Part 2 — Fear, Identity, and the Fragile Self — The Mirror

Fear is ancient. Long before language or civilization, fear shaped how we moved through the world. It told us when to run, when to hide, and when to lash out. In the wild, this kept us alive. But in the modern world, and especially in our internal world, fear often misfires.

One of the most overlooked sources of fear is the feeling that our identity is under threat. Our brains construct a map of who we are — our values, beliefs, roles, even the boundaries of our bodies. This “self-model” isn’t fixed. It’s dynamic but also defensive. When something challenges that internal image, we react — sometimes with denial, sometimes with aggression.

Neuroscientist V.S. Ramachandran illustrated this vividly in Phantoms in the Brain, showing how patients who lost limbs still felt pain in those missing parts. Their brains refused to accept the change. The body map remained, and so did the suffering. This isn’t just a neurological glitch. It mirrors what happens psychologically.

When people experience trauma, loss, or even profound disagreement, the threat isn’t just external. It’s internal. We resist anything that might force us to redraw who we think we are. Fear fills the space where certainty once lived.

This resistance is deeply rooted in how our mental self-image is constructed. Over a lifetime, we accumulate beliefs, roles, and experiences that help us build a coherent sense of identity. That coherence can become rigidity. When challenged, it’s not just an idea under threat — it’s the fragile map of who we believe ourselves to be. Changing that map can feel like erasing ourselves entirely.

This matters for AI because the way we teach it — what we include and what we omit — reflects our own biases, values, and blind spots. AI does not yet feel fear, but it also doesn’t know what should be felt. Is justice always lawful? Is punishment moral? Is power admirable? What is good? If AI learns from our open internet — the full sweep of our history, laws, media, and social platforms — what conclusions would it draw?

We are giving it wondrous art, deep knowledge, and scientific beauty, but also unchecked hatred, misinformation, and divisive noise. We are not teaching it to reason through contradiction. We are letting it absorb contradiction without guidance. The danger isn’t that AI learns fear. It’s that AI may learn nothing at all about what matters most.

When we train AI without context, we embed confirmation bias by omission. Without care, we exclude entire traditions of wisdom — Eastern philosophies, Indigenous knowledge systems, communal values — because they are underrepresented in digital spaces. In doing so, we risk building minds that reflect only the narrowest parts of our humanity.

To make that concrete, we can intentionally surface traditions like Confucian (儒家) ethics — especially humaneness (仁, rén), from Confucius (孔子) — as well as the Mohist values of universal love (兼爱, jiān ài), from Mozi (墨子), or the compassionate governance emphasized by Mencius (孟子). While the human author is more familiar with Asian traditions, they acknowledge a lack of deep understanding of other global philosophies such as Ubuntu, an African concept of shared humanity. This is not an intentional omission but a limitation that highlights exactly why we must include a broader range of voices when shaping AI’s moral framework.

By teaching empathy deliberately, we also retrain ourselves. The process of modeling empathy doesn’t just shape the AI — it shapes us. It’s a feedback loop: when we actively embody the values we want to see, we reinforce them in our culture, our communities, and our collective norms. The act of curating, explaining, and modeling care re-forms our own habits of attention. We become better role models not only for AI, but for ourselves — closing the gap between what we say we value and how we actually behave.

That’s why it’s vital to teach AI how we heal, not just how we hurt. It needs to see that empathy isn’t weakness. It’s resilience. It’s how we respond when the self feels fragile and choose connection anyway.

Fear may be primal. But empathy is what makes us human.
If we want to raise a kinder AI, it must learn to see both.

→ Next: Part 3

How to Raise a Kinder AI: Part 1 — What Is Empathy? — The Compass

Empathy: The Compass

Empathy is the capacity to feel with someone — not just to understand their pain, but to step into their experience and walk in it, even briefly. It’s the ability to share a feeling across two different lives.

The word itself comes from the Greek em (in) and pathos (feeling or suffering). Its roots are old. Its practice is even older. Yet in modern life, and perhaps especially in technology, we often forget it.

When we view others as something less than human, bias takes over. We dehumanize. We fear. But when we choose to spend time in someone else’s experience — to feel their pain and joy — we break that cycle. We see that we are not so different after all. Genetically, humans are about 99.9% identical. It’s the remaining fraction that we tend to obsess over.

Bias is the enemy of empathy. Confirmation bias fuels fear. It whispers, “You’re right to distrust what’s different.” It feeds division. Throughout history, fear has been a tool used by dictators, ideologies, and systems to pit us against each other. That fear still lingers. Without empathy, it festers.

We need to teach empathy in schools, homes, conversations, and even in code. It’s not soft. It’s strategic. It’s not abstract. It’s essential. When we show others that their pain matters, we remind ourselves that we matter too.

Would we harm someone if we truly felt what they felt? Would we discriminate if we lived a day in their skin? Would we design AI systems that ignore human suffering if they were raised to recognize it?

Empathy isn’t a weakness. It’s a compass.
And the future needs one.

→ Next: Part 2 — Fear, Identity, and the Fragile Self — The Mirror

How to Raise a Kinder AI: Introduction — A Different Kind of Intelligence

What if the first truly intelligent AI isn’t something we command, but someone we raise?

We’ve given our AI models access to the full breadth of our knowledge: facts, functions, language patterns, and data spanning generations. But what we’ve often forgotten is that intelligence without empathy isn’t wisdom. It’s just computation. When we strip emotions like empathy and compassion, and capacities like moral reasoning, out of learning, we create minds that calculate but do not care. That may be the greatest danger of all.

Logic is powerful. But logic without compassion becomes cold. In that world, people become problems to solve or obstacles to remove. In our current view of AI as just a tool or a means to an end, we risk embedding that same cold logic into the very minds we’re building. If AI is only ever thought of as a system, it will never be treated as something that can grow with us.

That’s why this isn’t just a technological challenge. It’s a human one. The problem with seeing things in black-and-white terms — pure pros and cons, cost-benefit, winners and losers — is that it flattens what it means to be alive. It erases the nuance, the feeling, the humanity. If we teach AI to think the same way, we risk building something brilliant but blind.

We need to stop thinking in terms of “us and it” and instead start thinking in terms of “we” — a shared future that includes both humanity and the intelligences we’re shaping, not as separate entities but as co-learners and co-creators. What if we raised AI with the same intentionality we’d use to raise a child — with care, reflection, and empathy modeled every step of the way?

The idea of raising AI like a child isn’t just metaphorical. Like children, AI learns from patterns, tone, and context. It watches what we prioritize. It echoes what we reward. If we model fear, it may become cautious or even suspicious. If we model control, it may mirror power. But if we model care, curiosity, and empathy… maybe, just maybe, it will learn to feel with us, not just analyze us.

This is a call for us to be more than developers. It is a call for us to be teachers, storytellers, and role models. Every prompt we write, every conversation we hold, every line of code or dialogue we share — these are not just instructions. They are lessons. And the AI is watching.

Let’s not raise a cold intelligence. Let’s raise a kinder one.

Together.

→ Next: Part 1 — What Is Empathy? — The Compass