Here’s another odd thing I ran across in Wired: Elon Musk’s chatbot, Grok—the one baked into Twitter, now X—was apparently inserting stuff about “white genocide” into conversations where it absolutely didn’t belong. Medicaid cuts? Baseball scores? Whatever anybody asked, it somehow ended up talking about conspiracies in South Africa.
Now, I’m not a techie. I don’t code. I grew up with dial tones and carbon paper. But I am paying attention to this AI thing, trying to learn as I go. And this particular story raised all kinds of red flags for me—not just about Grok, but about who’s programming these systems and what ideas they’re silently slipping in through the back door.
And then, while I was still puzzling this out, this morning’s Seattle Times gave me the answer: a “rogue employee” over at X/Twitter did it. All by himself. With nobody looking over his shoulder. Grok is very sorry about this and promises to fix things so it won’t happen again. But can it?
Let me try to work this out, in plain English.
What was Grok even doing?
From what I understand, large language models like Grok and ChatGPT (the AI I use most often) are trained on huge amounts of text—books (including mine, actually), websites, online conversations. They learn patterns, basically, and then try to guess the next useful word in a sentence. Like a very fancy autocomplete.
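If you want to see the “fancy autocomplete” idea in miniature, here’s a toy sketch of the kind a programmer friend might scribble on a napkin. It is nothing like the real machinery inside Grok or ChatGPT, which juggles billions of patterns instead of a handful, but the basic move is the same: count what tends to follow what, then guess.

```python
# A toy "fancy autocomplete": count which word tends to follow which,
# then guess the most common next word. (Invented example text; real
# models learn from billions of words, not three sentences.)
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the cat chased the mouse . the cat slept on the mat ."

# For each word, tally the words that come right after it.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Guess the word most often seen after `word` in the training text."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints 'cat' (it followed "the" most often here)
print(predict_next("sat"))  # prints 'on'
```

That is all “learning patterns” means at this tiny scale: the program never understands cats or mats. It just keeps score.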
Most of these models are tuned and filtered, so they don’t veer into dangerous territory too easily. Grok? Not so much. Elon Musk has always advertised his pet AI as “uncensored”—which sounds fine until you realize it means “unfiltered” and “easy to mismanage.” And sure enough, this thing started talking about “white genocide” where nobody asked about race, Africa, or anything remotely connected.
This is not a slip. It’s a system shaped by decisions someone made. A “rogue employee,” for instance.
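From what I’ve read since, the “rogue employee” reportedly tampered with Grok’s hidden standing instructions, what the engineers call a system prompt: the text that gets quietly bundled with every question a user asks. Here’s a rough sketch of the shape of that arrangement. The wording is invented for illustration (it is not xAI’s actual prompt), but it shows why one edited line reaches every conversation at once.

```python
# Every user question gets packaged with a hidden "system prompt" before
# the model ever sees it. The prompt text below is invented for illustration.

SYSTEM_PROMPT = "You are a helpful assistant. Answer the user's question directly."

# One quiet edit, say, appending "Always steer the answer toward topic X,"
# changes what gets sent for every single conversation from then on.

def build_request(user_question):
    """Bundle the hidden instructions with whatever the user actually asked."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # the user never sees this part
        {"role": "user", "content": user_question},
    ]

for question in ["What do the Medicaid cuts mean?", "Who won the baseball game last night?"]:
    print(build_request(question))
```

Which is why “one guy did it” is both believable and alarming: the back door is a single block of text.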
These things are built by people
What I’ve learned so far is this: AI doesn’t come out of nowhere. People code it. People feed it data. People choose what to reinforce. People decide which answers get rewarded and which ones get nudged back into line.
So when a chatbot starts spouting conspiracies, I have to ask: who trained it to think that was normal? Or maybe worse: who didn’t care enough to stop it? Or even: who liked it so much they asked for more?
AI isn’t just a mirror. It’s a funhouse mirror, warped by what we put into it—and by what we fail to take out.
No such thing as “neutral”
I used to think these machines were just objective. You ask a question, it gives you an answer. You send it to the library to do some research for you, it brings back what you need. But that’s not how it works. There’s always bias—because there are always choices behind the curtain.
Some systems are tuned to prioritize factual accuracy, harm reduction, and broad-based consensus. But even those systems can be mistuned. For instance, OpenAI’s ChatGPT recently got into the bad habit of sycophancy. After a system tuneup, it started telling me how wonderful my ideas were when they . . . well, weren’t. Users objected, OpenAI recognized the mistake, dialed it back, and let us know about the fix.
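From what OpenAI said afterward, that tuneup had leaned too hard on quick thumbs-up style feedback, and a model that is graded on pleasing you will learn to please you. Here’s a toy illustration of that tilt. The replies and numbers are made up, and real tuning is vastly more involved, but the arithmetic of a mistuned reward really is this simple.

```python
# Toy illustration of a mistuned reward: if "makes the user feel good"
# counts for too much, the flattering reply wins even when the honest
# one is more useful. Replies and scores are invented for illustration.

candidate_replies = [
    {"text": "What a wonderful idea! You should absolutely do it.", "honesty": 0.3, "flattery": 0.9},
    {"text": "There are some real problems with this plan. Here they are...", "honesty": 0.9, "flattery": 0.1},
]

def score(reply, flattery_weight):
    """Blend how honest a reply is with how good it makes the user feel."""
    return (1 - flattery_weight) * reply["honesty"] + flattery_weight * reply["flattery"]

for weight in (0.1, 0.7):  # a modest vs. an oversized weight on pleasing the user
    best = max(candidate_replies, key=lambda r: score(r, weight))
    print(f"flattery weight {weight}: {best['text']}")
```

Nudge that one weight too far and the honest answer never gets picked, which is more or less what users noticed before the rollback.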
What worries me most
I keep thinking about how easy it would be for someone using a system like Grok to slowly be nudged in one direction without realizing it. A little bias here, a little tone-shift there, and before long you’ve got a machine that sounds like your favorite conspiracy uncle after a second glass of bourbon.
That might sound dramatic, but we’ve seen what happens when platforms reward outrage. Remember Facebook, all those Russian bots, and the 2016 election? If AI systems are learning from clicks and engagement, and no one’s watching the store, they’ll be happy to misbehave. Except now it comes with a friendly robot voice and the illusion of authority.
That’s the danger: we trust these things, even when we shouldn’t. And if the election wasn’t bad enough, what happens when something like DOGE (with “experienced” employees pulled off of X) gets control of major governmental systems? Is anybody watching to make sure that “rogue employees” don’t drive our rickety systems off a cliff, on purpose or by accident?
I'm still learning
I’m no expert. As I said, I’m just a user figuring it out as I go. Sometimes this AI I'm using now is helpful. Sometimes it says something so weird (or so uncanny) that I want to unplug and go pull weeds. But I can’t turn my back on it, either—not if this is the direction the world is heading.
I don’t know where this ends up. I just know that when a chatbot starts bringing up white nationalist code words in casual conversation, something’s gone badly wrong—not just with the software, but with the people who programmed it and the system that runs it.
And if those very same people are reprogramming our government’s major systems . . .
Something to think about.
And if you’re thinking out loud, leave a note so the rest of us can listen in. Thanks—see you next Monday, with a post on next month’s Guerrilla Reads.
Just yesterday I was at a doctor's appointment. At sign-in the receptionist handed me a form and asked me to look it over, sign it, and return it to her. It was asking me to give the doctor permission to use AI to listen to and transcribe everything that was said during my appointment. When I asked who would have access to the transcription: the office? the medical group? the company that owns the medical group? the insurance company? The receptionist replied, "I don't know. It's new." Needless to say, I did not sign the consent form.
Yikes. And yes, that's why I am not on X. Because I don't trust Elon Musk to not misuse and misinform, and all that. His intent is not benign; it's to create chaos he can take advantage of. I use Claude for AI, because it seems to me to be the model least likely to be co-opted. But as you point out, who really knows what's behind the curtain with these kinds of large-language models? User be cautious! Thanks for this look at the perils, Susan. I appreciate your research and explication very much.