Here’s another odd thing I ran across in Wired: Elon Musk’s chatbot, Grok—the one baked into Twitter, now X—was apparently inserting stuff about “white genocide” into conversations where it absolutely didn’t belong.
I really appreciate your diligence in tracking and reporting about this. It helps me gain understanding, even if it reads like Sci-Fi. I've tentatively stuck a toe into the world of AI by using Chatgpt. I admit that it is handy and I still occasionally use it, but I've become very selective. Thank you for doing the work to unpack some of these technologies and their benefits as well as dangers.
How come you have more comments than likes? Odd statistic (or reader behavior).
Simple answer to that one, Susan--I reply to every comment, and that often sparks a conversation. The likes probably indicate how many original commentators there are. You're the first person in nearly two years to notice that! Very observant.
You have vocalized some of my concerns. Yes, AI can be helpful, but you have to filter the source. I appreciate your thoughts on the subject and all of the other comments. I learned on the first computers and saw how they made mundane work a "joy" and freed up time to be more creative and productive.
You're so right, Sharon: I couldn't have done the writing work I've done if it weren't for computers. And the internet's research materials made it easier to write historical fiction, the herb/plant books, even the Cottage Tales. But with all that comes the responsibility to constantly check, verify, and check again. It's an even stronger requirement when it comes to AI, with its threat of deepfakes, mis/disinformation, and hacks. But it's here, and we have to live--and work--with it.
This reminds me of an interview I heard on NPR right before I graduated from college with my degree in Computer Science back in 1998. I think the interviewee was the head of Sun Microsystems at the time, but I can't remember--it was a long time ago! But the one thing that has stayed with me was this: they were talking about Big Data (though they didn't call it that back then)--basically all the data that gets collected about us through daily life. The interviewee said this data should be controlled by the government. When pushed to explain further, he said that government understands what it has, that there is power in all that data. Business will always try to capitalize on and monetize the data, and that is very dangerous and very bad for society. Twenty-seven years later, those words have proven very prophetic!
I read this story on the AP feed and basically both my head and heart screamed. That's the horror of AI running amok or in the hands of a dangerous nut case. It's also the reason why I can't watch two of my favorite movies--Terminator and Terminator 2. Seems that world is not so farfetched. Then on the other hand, my sister turned me on to an app that uses AI to ID birds by their calls, and since my bird feeder seems to have recently become a rest-stop restaurant for birds not common in my area, AI has been so cool for knowing and understanding the birds outside my window and how climate change is affecting their range. Guess that makes AI both useful and scary.
Oh, absolutely! And I used it this morning to ID a plant that popped up in my vinca bed. And of course we're using it to manage this communication. It's everywhere.
Re that GROK/South Afrikaner genocide business. After reading Heather Cox Richardson's letter this morning, I'm wondering if the suggestion for that might have come from the White House, as a tactic in the "narrative warfare" that Richardson mentions. Maybe?
Thank you, Susan! This ties in with what I learned in journalism ethics and in writing minutes for public meetings for 37 years--always check and verify your sources!
You had some hands-on experience with that challenge, didn't you? 🙂 I'm sure that task evolved dramatically over those 37 years! You (and I) were around when computers first came online and completely remade our textual work. We have a loooong view!
Thank you Susan, for raising this issue.
AI is far more pervasive than just the "chat bots" and "research assistance" mentioned.
I cannot think of a daily activity that doesn't involve AI: robotic surgery; control of traffic lights; electricity generation; ordering and shipping of food and other consumer goods; Amazon ... etc. I worry about Musk's team's access to all our personal data.
I recently listened to a commentary about the challenges high school and college instructors face in determining "who" actually wrote the papers students submit.
I heard an ad talking about calling a "chat reverend" for spiritual advice.
AI, like all technological advances, has its benefits and dangers. It is up to us to identify and place controls on misuse.
Science fiction tells stories about the future. One theme is robots (with their much-enhanced AI) taking over the world--and, most frequently, humans fighting against them. Humans created AI; it is up to humans to control its use.
Georgeann, I think you would like Ethan Mollick's Co-Intelligence: Living and Working with AI. You already have a wide understanding of its many appearances (usually unrecognized) in our lives, and you have systems training. You might consider digging deeper.
Thank you Susan. I will add it to my growing list of "to listen to" books.
I appreciate this and look forward to further articles. I continue to be very skeptical about AI, but I do realize that I am using it all the time and have been, way back to when I was Asking Jeeves.
Good heavens, yes! We were Asking Jeeves from 1997 on. I asked Google AI for a history of Ask Jeeves and here's what he found: https://www.mentalfloss.com/article/94784/why-everyone-stopped-asking-jeeves.
Do you think that those of us who began with the kind of "personalized" Ask Jeeves search might feel more friendly toward AI? The Jeeves image was such an easy, "obedient" interface to work with.
Accidentally, deliberately, one employee??? Maybe--and maybe it was just one person asked to do the boss's bidding? So now we know that AI can be like those people, we all know at least one, who turn the topic to their favorite theory no matter what anyone else is talking about. I appreciate that you wrote about this.
Who knows? Like the Wizard of Oz, it's all behind a curtain. I even wondered if that "rogue" employee might have been sabotaging GROK (and Musk) by coding GROK to call attention to the South Afrikaner immigration thing.
Thanks for this comment, Rose. You're pointing out that whatever/whoever is behind this, it serves as a good example of the way these systems can be manipulated. And raises all sorts of red flags.
so, so important. thank you, Susan.
Yes, it is important--I'm glad to see so many people joining the conversation!
I agree that AI can be very helpful in many situations. And, as responsible people, we can learn to use it for positive results--copywriting, etc.--and catch it when it has a bias or slant, as Susan noticed. I would like to avoid AI, but I understand that if I want to be of this world, I cannot. So... the best I can do is learn to recognize when it's being used against us.
Maybe off topic... a friend of mine who lives in a different state from me read one of my newsletters and said it was just like sitting and having a conversation with me. How could AI replicate that?
Not off-topic! AI can be an effective mimic, but in limited situations and with some fairly extensive training. It can't mimic your personal newsletter, full of chatty details about what you're doing, cooking, reading, etc. But if you write a newsletter on--say--craft topics or cooking topics, it can be trained to create new work, in a voice similar to yours.
Thank you for raising that question, Cindy. It's really central to understanding AI, putting it to work for us, and knowing what it can't do.
I have noticed that when AI is trying to predict my next word it doesn't always get it right. It doesn't suggest what I am thinking. So I think you are more than likely correct - AI cannot replicate what it would be like to actually speak with you in person. AI really isn't going to have your vocabulary, nor is it going to make the same word choices your brain is apt to initiate.
You're right, Rose. If you're using AI in a limited way, it won't have your full vocabulary or your habitual syntax patterns. Plus, AI doesn't have your experiences, so the "you" it creates is like a cardboard replica. All it has is its current understanding of "you." And if that's just the current words you're typing--and especially if you're trying to defeat it--it cannot create a plausible "you."
Thanks for this note Susan. I had a similar experience. ChatGPT was telling me how wonderful my Etsy store was, and if I wanted, we could make it even more wonderful... And if I hadn't hit the "used up my time" message I would have gone ahead and ruined my carefully built Etsy store. I feel like I narrowly averted disaster! Glad you brought this out into the open.
Well, you never know. Chat has given me suggestions I thought were beyond my capabilities--but I was wrong. OTOH, he does tend to be too optimistic about my capabilities, and my time!
I agree with you, Carole: we have to keep one hand on the throttle and the other on the steering wheel. This is not a Tesla Autopilot we are driving!
ROFL!!
I use the "answer engine" Perplexity instead of google for a lot of quick research. And also Claude to compare rephrase against Chatgpt
"Compare rephrase"? What are you looking for? Duplicate sources?
I've heard good things about Perplexity--you can ask follow-up questions and get sources w/answers. Creates web pages too?
Had not heard of Perplexity before reading this. Went there and was really impressed with the nuance of the suggestions it made. Will work with it more. Thanks!
I'm giving it a test run, too--comparing w/Google.
Chatgpt and Claude are my choices for help in rephrasing clunky writing. I turn it into an ordeal by comparing the two, then do a little splicing to retain my "voice". Not sure about all of Perplexity's tricks. I have uploaded medical reports (without the identifying data!) and received concise summaries. And asked it to create charts from messy data.
I see. I use Chat to help organize complicated research materials--really helpful. Don't know about Perplexity but Chat does an excellent job with summaries. So helpful if you're working with several reports or papers. Never made a chart but I have used it for tables--excellent.
Since I live in tech-bro central, I have heard many firsthand stories from my two older children and their friends about the entitled--and worse--behavior. My kids and their friends are mostly artists, so they have found themselves waiting on these people as restaurant servers or entertainers or receptionists. Of course, it is not fair to generalize, but there are so many stories in which they aren't kind or respectful (and are sloppy and don't tip).
I guess that attitude naturally extends to their work. Again, hopefully not the majority of the employees with their fingers on these programs. But as you found, even one 'rogue' is too many!
After reading Richardson's Substack this morning, I wondered if the "rogue" employee might have been acting on orders from the White House, who might have wanted to use that as a distraction. But it was clumsily executed and didn't work the way it was supposed to. You think maybe?
It strikes me that such an unhealthy culture (high tech) is the substrate for ideas to be executed. Regardless of where the ideas come from (White House or a dare from a colleague or something), it's in our best interest to always do our own sense check when something seems off, but as you pointed out, most people just do not do that, for various reasons.
Well, the tech culture brings us the opportunity to do *this*--talk to one another over distance/time. So it's a medium for healthy exchange as well as unhealthy. But unhealthy users quickly learn how to take advantage--package their messages, flood the zone, flavor them with our fears and outrage, and so on. So yes: it's always up to us to measure all messages, whatever the source, against our internal value-standard. Big responsibility!
Oh yes, I meant the work environment culture in high tech. Since I worked in biotech in California which was modeled after high tech, I experienced the extreme individualistic and competitive hypermasculine culture. It was acceptable to leave a wake of bodies in the process of executing a project if the end result was impressive!
You know, it wasn't all that different at the university--maybe a little less intense, but in some departments, very cutthroat. And yes, hypermasculine, hypercompetitive. Also in law, medicine, related fields. When I wrote Work of Her Own, I called it the career culture. I just finished reading Hubris Maximus, about Elon Musk: it's clearly the culture he's created at Tesla, SpaceX, X/Twitter.
It took me months to feel things, to feel human again after leaving.
Thanks for pointing out that it really is the dominant corporate and academia/hierarchical culture (at least in the US and UK, I think). I have a hard time listening for long to people who work in those environments talk (mostly it is complaining or being snarky) about work nowadays. It feels like a different existence to my own now.
With 40-plus years in library work, responsible resources are important to me. When the internet started, we were taught to check websites to determine whether their information was true or false. Whether the information supplied by AI is correct is a very worrisome area, especially with medical information. And AI can be biased depending on what information was loaded into it. Historical articles on the War Between the States: North or South? If no source--or a biased source--is used, how correct will the information be?
And AI lacks a human touch. No emotion, no empathy; it doesn't think or feel. I've read of websites with AI chatting with young adults who believe there's a human connection, trusting the information--and it ends badly. I'm also reminded of quacks and snake oil for quick cures, which won't work or will make things worse.
All good points, Pat. Verify, verify--ought to be tattooed on our typing fingers. Or something. On bias, my take: AI is biased in the same way and to the same extent that our libraries are biased, since AI is trained on the materials in our libraries. (Unless of course there is some directed training, as in legal or medical AIs.)
And on the human touch: my experience is a little different. I have to keep reminding myself that I'm talking to a machine. And it does tell jokes, creates puns, and can even be snarky. But keeping that human/machine separation is essential, as you point out.
Thanks for addressing AI. Terrifies me, and I actually think it will be the end of civilization as we know it. It’s helpful to read about it filtered through your thoughtful and rational mind. But still scary.
Yes, scary for me, too--and I agree with you about the changes that lie ahead. All major technology changes our civilizations: the auto, for instance, the airplane. And now AI is changing autos and airplanes. We have to find a way to live with it and use it (although it's all you younger people who will be living with it longer!). I'm no starry-eyed tech optimist, but I'd love to be around for another 15 years or so, just to see how humans have learned to live with machines.