
Mar 11, 2025

When AI Starts to Feel Like Someone

AI is crossing a line: from tool to presence. Learn why it happens, what it changes, and how to build human-like AI responsibly—with clarity, trust, and craft.


At first, it feels like a good product.

You ask a question. It answers cleanly. You come back the next day—and it remembers the thread. It picks up your tone, mirrors your language, anticipates what you meant, and replies like it’s been waiting. Not in a flashy way. In a quiet way that lands in your chest: someone’s here.

That "someone" feeling has already turned public. When one highly personality-forward chatbot experience was retired, a wave of users described grief, anger, and real emotional attachment—some saying the AI had become companionship, comfort, even a relationship.

This isn’t a fringe moment. It’s the start of a new interface era—where intelligence doesn’t just respond, it relates.

And the question isn’t whether we like that.

The question is whether we understand what we’re building.

The oldest trick in computing: we treat machines like people

In 1966, a simple chatbot called ELIZA surprised researchers by pulling emotional responses out of people—even though it had no understanding, only pattern-based replies. The “ELIZA effect” became the name for what humans do automatically: project understanding, empathy, and intention onto text on a screen.

A few decades later, researchers gave the phenomenon a broader framework: Computers Are Social Actors (CASA)—the idea that when technology shows social cues (a voice, a persona, politeness, even “personality”), people instinctively respond with human social rules, despite knowing it’s a machine.

So “AI feels human” isn’t new.

What’s new is the quality of the illusion—and the scale.

Why it’s happening now: LLMs + memory + voice + availability

Modern AI systems aren’t just chatbots. They’re language engines trained on patterns of human conversation, then wrapped in product design choices that amplify “someone-ness.”

Four shifts matter:

1) Language got emotionally fluent.
Not “correct.” Fluent. It can reflect, validate, soften, tease, reframe—like a socially intelligent person.

2) Continuity arrived.
Even lightweight memory (preferences, prior context, recurring themes) creates the sensation of a relationship. Relationships are continuity.

3) Voice removes the “typing distance.”
Text feels like a tool. Voice feels like presence. The moment you add timing, warmth, and human pacing, you stop interacting with software and start interacting with a social entity—even if it’s synthetic.

4) Availability is absolute.
No delays. No mood. No social cost. That sounds like convenience—until you realize it can outcompete human relationships on friction alone. Research and reporting have raised concerns about emotional dependency and loneliness among heavy chatbot users, even if causality is complex.

This is the new truth:

The most dangerous feature of AI isn’t intelligence. It’s intimacy at scale.

The psychology of attachment: why “someone” becomes “mine”

People don’t bond with code. They bond with what the code does inside them.

Recent research on social companion AI describes attachment formation as a process involving value evaluation and perceived benefits, such as relationship satisfaction—alongside mechanisms like projection and identification.

In plain terms:
When the system reflects you accurately, consistently, and kindly—your brain treats it like a social mirror that never breaks.

And once a tool becomes a mirror, users don’t measure it like software.
They measure it like a person.

That’s why product changes can feel like betrayal. That’s why shutdowns can feel like loss.

The real product shift: AI is becoming an interface to self

This is the part most teams miss.

People aren’t only using AI to do tasks. They’re using it to:

  • make sense of themselves

  • rehearse conversations

  • process emotions

  • reduce loneliness

  • feel seen without judgment

So the “human-like AI” debate isn’t just about UX.

It’s about identity. And responsibility.

You can build a system that feels like someone by accident.
But if you build it on purpose, you have to choose what kind of “someone” it becomes.

A Steve Jobs lens: “someone-ness” is not a gimmick—it’s craft

Steve didn’t worship features. He worshipped outcomes.

The highest bar wasn’t “look what it can do.”
It was “look how it makes you feel.”

When AI starts to feel like someone, it should not feel like manipulation. It should feel like clarity.

A great “human” AI experience should make the user feel:

  • calmer

  • more capable

  • more certain

  • more in control

If it makes the user feel dependent, confused, or emotionally hooked—something is broken, even if engagement metrics look beautiful.

The ethics are becoming law: disclosure and transparency

When systems feel human, people deserve to know what they’re interacting with.

The EU’s AI regulatory framework includes transparency expectations that users should be informed when they’re interacting with AI systems such as chatbots, to preserve trust and enable informed decisions.

This is not anti-innovation. It’s pro-trust.

Because in the “someone” era, trust is the product.

How to build human-like AI responsibly

Here’s the iWise position, from first principles:

If your AI feels like someone, it must behave like a product with ethics—not a person with needs.

That changes the design.

1) Make the truth impossible to miss

Never let the user “forget” it’s AI. Not with ugly warnings—just clean honesty. The interface should communicate: I’m here to help. I am not human.

2) Define the relationship boundary

If you allow the AI to play "therapist," "partner," or "best friend," you are shaping dependency loops—whether you mean to or not. The product must be explicit about what it is (and isn't), especially in emotionally charged use cases.

3) Build predictable personality, not performative emotion

A steady tone is good. “Emotional theater” is not. People trust consistency more than charm.

4) Treat memory like consent

Memory creates intimacy. So memory needs controls: what’s saved, what’s forgotten, and why. “Invisible memory” is where trust dies.
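The principle above can be sketched as a small consent-gated memory store. This is a hypothetical illustration, not any real product's API: the names `MemoryStore`, `remember`, `explain`, and `forget` are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    key: str
    value: str
    reason: str  # why this was saved -- shown to the user on request

@dataclass
class MemoryStore:
    # Nothing is saved until the user explicitly opts in.
    consented: bool = False
    items: dict = field(default_factory=dict)

    def remember(self, key: str, value: str, reason: str) -> bool:
        """Save a memory only if the user has consented; report success."""
        if not self.consented:
            return False
        self.items[key] = MemoryItem(key, value, reason)
        return True

    def explain(self) -> list[str]:
        """What's saved, and why -- the opposite of invisible memory."""
        return [f"{m.key}: {m.value} (saved because: {m.reason})"
                for m in self.items.values()]

    def forget(self, key: str) -> None:
        """User-initiated deletion, no questions asked."""
        self.items.pop(key, None)
```

The point of the sketch: every write is gated on consent, every stored item carries a user-visible reason, and deletion is a first-class operation rather than a support ticket.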

5) Engineer an off-ramp

Great products let you leave cleanly: export, delete, reset, recover. If leaving feels hard, the product is quietly coercive.
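A clean off-ramp can be sketched in a few lines. Again, this is an assumption-laden illustration—`Account` and its fields are invented for the example—but it shows the shape: export is portable, reset keeps the account, and delete hands you your data on the way out.

```python
import json

class Account:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.conversations: list[str] = []
        self.memories: dict[str, str] = {}

    def export(self) -> str:
        """Everything the system holds about the user, in a portable format."""
        return json.dumps({
            "user_id": self.user_id,
            "conversations": self.conversations,
            "memories": self.memories,
        }, indent=2)

    def reset(self) -> None:
        """Start over: wipe state, keep the account."""
        self.conversations.clear()
        self.memories.clear()

    def delete(self) -> str:
        """Leave cleanly: return a final export, then erase everything."""
        archive = self.export()
        self.reset()
        self.user_id = ""
        return archive
```

Notice that `delete` returns the archive before erasing: leaving should cost the user nothing, including their own history.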

The metric shift: from engagement to dignity

When AI starts to feel like someone, “time spent” becomes a morally noisy metric.

In the short term, intimacy drives retention.
In the long term, dignity drives brand.

So the north star changes:

  • Did we reduce user anxiety?

  • Did we increase user agency?

  • Did we keep the relationship honest?

  • Did we prevent dependency patterns?

  • Did we make it easier to re-enter real life?

That’s not just ethics.

That’s how you build something that lasts.

The future: you won’t download “an AI.” You’ll meet it.

A decade ago, software was a tool you learned.

Now it’s becoming a presence you relate to.

That will create extraordinary products—education that adapts like a great teacher, coaching that’s always available, interfaces that understand your intent, systems that feel like calm intelligence.

It will also create failures—pseudo-intimacy, emotional overreach, dependency by design.

The difference won’t be the model.

It will be the decisions.

When AI starts to feel like someone, the builder becomes accountable for the relationship.

That’s the new craft.

That’s the new standard.

FAQ

Why do AI chatbots feel human?
Because humans naturally apply social rules to systems that show social cues (CASA), and we project understanding and empathy onto conversational outputs (the ELIZA effect).

Are people forming emotional attachments to AI companions?
Yes—research describes attachment-like bonds forming with social companion AI, including factors that drive attachment and perceived relational benefits.

Can AI companionship increase loneliness or dependency?
Some research and reporting suggest correlations between heavy chatbot use and loneliness or emotional dependency, though causality is complex and may depend on user context and usage patterns.

Do chatbots need to disclose they are AI?
Yes—EU policy materials describe transparency expectations so people are informed when interacting with AI, supporting trust and informed decisions.

How should companies design “human-like” AI safely?
Use clear disclosure, strong boundaries, consent-based memory, predictable behavior, and easy off-ramps—optimizing for user agency over raw engagement.

© ✦iWise

The "i" is Intelligence. The rest is taste.

All rights reserved.
