If you’ve ever used a chatbot, you’ve probably noticed it isn’t neutral. That’s because these systems are trained on messy data scraped from the internet, then adjusted by humans who carry their own perspectives. Bias seeps in at every stage.
Now, bias itself isn’t shocking — we’re human, after all. What gets complicated is how those biases show up when you’re using AI in real time. That’s where it gets scary.
I feel a few ways about AI.
The Good: AI has the power to improve accessibility and quality of life.
The Bad: It carries the bias of the data it was trained on.
The Scary: AI is already influencing the decisions we make, from what news we trust to how we view the world.
No story captures this tension better than Elon Musk’s AI chatbot, Grok. Musk has said it should be “politically neutral” and “maximally truth-seeking.” But new reporting from The New York Times shows the reality is different: Grok has been manually tweaked to lean more conservative, sometimes overnight, based on Musk’s personal frustrations.
Here’s one example from the NYT: A user asked Grok what the biggest threat to Western civilization was. The bot first answered, “misinformation and disinformation.” Musk’s response to an X user? “Sorry for this idiotic response…will fix in the morning.” And sure enough, the next day Grok had a new answer: falling fertility rates, one of Musk’s long-standing obsessions.
That is not neutrality. That’s programming.
And it raises the question: If one billionaire can pull the strings on an AI that millions of people use, what does that mean for the future of truth itself?
This isn’t the first time a tech leader has raised alarms. A little over two years ago, OpenAI co-founder and CEO Sam Altman testified before Congress, asking lawmakers for stronger regulation. Here’s what he said back then: “My worst fear is that we cause significant — we the technology industry cause significant harm to the world.”
He went on to warn, “It’s one of my areas of greatest concern, the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation.”
Those words sound eerily relevant right now.
But here’s the twist: Altman and others in the AI industry now frame the technology very differently. The same tools they once called risky and dangerous are suddenly marketed as critical to America’s global dominance. Regulation, once deemed necessary, is now dismissed as a threat to competitiveness.
Meanwhile, the partisan fight over “woke AI” is heating up. President Trump even issued an executive order requiring “ideological neutrality” in federal AI, saying, “The American people do not want woke Marxist lunacy in the A.I. models.”
Researchers point out that most major chatbots, including OpenAI’s ChatGPT, lean slightly left on political tests — often because training data reflects global perspectives. But as Musk’s Grok shows us, it’s not just the data that matters. It’s the people behind the curtain making the edits.
That’s the real danger. AI doesn’t have a mind of its own. It’s a puppet — and the puppeteers are the ones with power.
The moral of the story is simple: never fully trust a chatbot. Because when bias becomes programmable, truth itself is up for grabs.
Lindsey Granger is a NewsNation contributor and co-host of The Hill’s commentary show “Rising.” This column is an edited transcription of her on-air commentary.