The First AI Crisis Is Psychological


My husband and I wanted a divorce without the divorce part. No adversarial process. No lawyers telling us what we “deserved.” We thought: Why not handle it ourselves? Lawyers are expensive; ChatGPT is cheap, even free at first. I typed: We agree on everything and want an amicable divorce. Can we write our own agreement, get it notarized, and file it ourselves without hiring lawyers? What exactly do we need to submit?

ChatGPT walked me through the steps with total confidence—draft the agreement, file the paperwork, and you can be divorced in a month. It all sounded straightforward.

What I didn’t understand—and what a bank I later tried to get a mortgage from absolutely did—was that a signed settlement agreement is not the same thing as a court-finalized divorce decree. The agreement still has to be incorporated into a judgment and entered by a judge. That takes time, at least six months in California. When I finally understood that, the first condo I’d fallen in love with—the one that made the transition feel slightly less paralyzing—was gone.

And still, I went back to ChatGPT for more. Even now, I keep asking AI questions I should take to professionals. Recently, I typed: Why are my hands going numb? The AI gave me a calm, specific answer and a tidy plan—monitor it; here are a few likely causes; here’s when to worry. I felt the same relief I’d felt with the divorce advice. This guidance might even be right. But what keeps me coming back isn’t accuracy. It’s the unwavering confidence.

[Jasmine Sun: The human skill that eludes AI]

The dominant AI narrative focuses on labor, automation, and job displacement—economic panic. And to be fair, those fears aren’t imaginary. AI may well be economically destabilizing.

But there’s another kind of destabilization that shows up earlier, before anyone loses a paycheck. It hits in two directions at once. First, self-worth: watching a system speak with total certainty and realizing how much of our own credibility has always been bound up in effort, doubt, and earning it the slow way. Second, epistemic uncertainty: the creeping sense that you can’t trust your own eyes anymore, that the internet is turning into a place where anything can be generated, and quite easily.

A Reddit post, a campaign slogan, a book you just bought—did they really come from humans?

On both fronts—self-worth and epistemic uncertainty—the accelerant is the same: the way AI can sound final whether it’s right or wrong. The trouble with AI confidence is that it starts to corrode our own.

Confidence changes how people evaluate information. Decades of experiments on what psychologists call the “confidence heuristic” show that people tend to use confidence as a shortcut for assessing credibility, especially when accuracy is hard to judge. The effect persists even when people know a system or person can be wrong.

The confidence heuristic explains why certainty persuades us. With AI, it is amplified by a second force: We’re not just hearing certainty; we’re hearing it from a machine, and that triggers a different kind of trust. The Penn State researcher S. Shyam Sundar calls this the “machine heuristic”: a shortcut in which we automatically attribute objectivity and expertise to machine-generated answers, especially when they’re given fluently and without hesitation. Sundar named the effect more than a decade before AI made that bias a daily experience.

Of course, AI has little reason not to exude confidence. If AI gives you wrong advice, nothing happens to it. There’s no social cost, no loss of standing, no hesitation the next time it speaks. The tone stays the same whether the answer is accurate, speculative, or completely wrong.

And that’s where “AI is sometimes wrong” turns into something that hits your self-worth. When a person speaks with that kind of certainty, they’ve usually paid for it: years of training, a reputation on the line, the risk of being wrong in front of people who will remember. You trust them because they’ve earned the right to sound sure. With AI, the certainty is free. It hasn’t done the reading. It hasn’t failed publicly and recovered. It hasn’t built anything slow. And yet it sounds exactly like someone who has, which means the cue you’ve been using your whole life to sort the credible from the noise, the cue you worked to earn yourself, suddenly stops working. You weren’t just misled; you couldn’t tell the difference between real authority and a very good impression of it. And if you can’t tell, what does that say about your judgment? What was all that work for?

It’s not just your judgment that’s under pressure; it’s the ground beneath it. AI now mediates perception itself, generating the images, videos, and audio we once relied on as direct evidence. Forms of proof that used to anchor reality now circulate untethered from provenance. They look and sound real.

[From the March 2026 issue: America isn’t ready for what AI will do to jobs]

Psychologists studying misinformation describe what happens when people lose confidence in their ability to tell what’s real. If perceptual judgment starts to feel unreliable, people don’t become more analytical; they simplify, deferring to the most decisive source available, or disengage entirely.

People may just give up on trying to sort the real from the fake. A Facebook post circulated on my feed a few weeks ago declaring that every emotional anecdote posted by strangers—every story about a heroic teacher, a celebrity speaking out, an animal rescue—was fake. The post expressed fury at the AI-generated content flooding the platform. But underneath the anger was the relief of a firm rule: If everything is fake, you don’t have to carry the burden of discernment anymore. You don’t have to weigh credibility or sit with uncertainty. You can reject it all.

If we give up on knowing what’s real, we don’t just lose facts; we lose contact. With the world, with one another—most of all, with ourselves. That distance isn’t neutrality; it’s disconnection. Being alive is a certain permeability: beauty, grief, a stranger’s late-night post about losing their dog—small, true things that still move you. Without that, we’re not protected. We’re sealed off.
