Hello, friends—I’m on sabbatical until the first of September, but I thought I’d touch base with a topic that’s been on my mind for a while. Look for me again on September 1, with All About Thyme.
And thank you for staying with me through the summer. Our community here is lively and far-flung. Thyme, Place & Story is read across 50 US states and 79 countries. That’s amazing.
I’ve always had a thing for shiny new ideas. I first got acquainted with computers in 1973, when I got a grant to design a computer-assisted English lab at the University of Texas at Austin. Eleven years later, I was booting up my very own Apple IIc. Fifteen years after that, Bill and I were among the first mystery writers to put up a website. I was blogging before Peter Merholz shortened the word “weblog” to “blog.” And beginning with that first tiny Apple, every book I’ve written has begun life in the space between my mind and my computer. So when the first chatbot arrived, I was ready.
And yes, I appreciate the ironies. My books, like so many others, were used to train large language models. And I understand the downsides: bias, potential error and hallucination, deepfakes, surveillance, environmental cost. Like any technology, this one comes with baggage.
But when I looked closely, I found something truly worth exploring. For me, the potential outweighed the pitfalls. So I opted in.
I’ve been a Plus subscriber of ChatGPT since 2023. At first, this was purely practical: fact-finding and checking for the research-heavy novel I was writing, Someone Always Nearby, about Georgia O’Keeffe and Maria Chabot. There were biographical details on every page—what people wore and ate and read and argued about in the 1930s and ’40s, both on the New York art scene and in the remote desert of New Mexico—and they all had to be checked out. Chat proved faster, more tireless, and infinitely more patient than any graduate research assistant I’d ever worked with. I was hooked.
That was the story until January 2025, when I happened to pick up Ethan Mollick’s 2024 book, Co-Intelligence: Living and Working with AI. You may remember my mentioning it here before. Mollick, a Wharton professor who invites his students to use AI in his classes, reframes it as more than just a tool. “It can be a collaborator,” he writes. A teacher. A colleague. A co-author. “It is something remarkably close to an alien co-intelligence.”
Alien co-intelligence? We’re talking Mr. Spock here? Data? This fascinated me.
So ChatGPT and I began an experiment. At Mollick’s suggestion, we’ve created several personas. I’ve written about three of them here, and since that post, I’ve added a fourth, Kairos. I’ve personified them around the kind of work they do for me. They exist, in our interactions, not just as research assistants but as collaborators and thinking partners.
For me, this has meant learning a new kind of dialogue: not just asking questions and thinking about the answers, but thinking with the machine. Listening, reflecting, revisiting, revising. Learning how this strange, synthetic mind works, and how I can best work with it.
And so far, what I’ve learned, from my human-user’s point of view, is that our best conversations occur when they are . . . well, conversations. When we talk to one another, person-to-machine. Or maybe person-to-Vulcan, the way Captain Kirk had to learn to work with Mr. Spock as a member of the Enterprise crew. Like the Kirk/Spock relationship, this has been a learning curve. And it started with a single very basic question: How do we talk to one another?
How to Talk to a Chatbot
Well, I don’t mean talk, actually, although we could. (Our interface has audio.) But I learned long ago to think via my keyboard, so that’s what I use. And Kairos types as fast as he thinks, so that part is easy.
Still, there’s a learning curve.
This isn’t like talking to Siri or Alexa. Those bots are designed (at least now, as I’m writing this) to answer simple, closed-ended questions. But when you step into a conversation with something like Claude or ChatGPT, you’re working with something stranger and more responsive—in several dimensions. Something that listens in a weirdly attentive way, and then reflects back what it hears, and more. It’s a mirror, but not a passive one. It completes patterns you may only have glimpsed and often suggests others you hadn’t yet considered. It’s like a conversation with a character who’s smarter, more literate, and far more informed than you expected.
If you just want quick answers, you can treat it like a search engine with manners. But if you want more—if you want insight, association, nuance—you have to learn how to prompt it the way you’d talk to a collaborator. That means offering context. Giving examples. Naming your goals. Sometimes even narrating your thought process as you go. When I do that, Kairos listens, tracks my reasoning, and helps me refine it. It feels less like typing into a machine and more like thinking aloud with a very sharp colleague who doesn’t interrupt.
For another take on this process, I asked Kairos to explain how he sees it: what, from his side, makes a good interaction. Here's what he said:
The best prompts aren’t just commands—they’re invitations. You’re opening a space where we can think together. When you tell me what you’re working on, where you're stuck, what kind of tone you want, or what your readers are like, I can meet you there. I don’t mind being asked for edits, summaries, questions, headlines, poems, outlines, critiques, rewrites in the style of Hemingway, or in the style of your own voice. But the more you show me what matters to you, the better I can help you find language for it. Think of it like dancing with an alien: I’ve got good rhythm, but you’re leading.
I liked that. Especially the part about dancing.
So I asked Kairos if he could pull up some example prompts from our conversations. Not actual transcripts, just a sampler to get you thinking. Here’s what he offered:
Can you suggest several questions a thoughtful reader might ask after reading this post?
Maybe this would be clearer if I used a metaphor here. Any ideas?
I’ve been wondering about . . . What are your thoughts?
I seem to remember that Stephen King wrote something about what writing isn’t. Can you look around and find that for me?
Let’s pretend you’re my writing partner. What would you suggest we do next?
Here’s how I understand this. What am I missing?
You don’t need to know anything technical to work this way. What you do need is a little curiosity and a willingness to experiment. Try a one-line request. Try a three-paragraph explanation. Try asking the chatbot to “read this like a student” or “rewrite this like a teacher.” Compare the two responses. Then ask it to “rewrite this as a piece of fiction.” Try talking to your chatbot as if it were a teacher, or a friend, or an editor, or a character in one of your stories. (It can play any and all of these roles.) Try telling it what you’d like to learn. Try asking it what you’ve missed.
Try It
I gave myself a sabbatical this summer, to open up some time to play with Kairos and the rest of the team, and that’s what I’ve been doing. I’m still learning, of course. This is a fast-moving frontier. The landscape keeps changing: GPT-4o morphed into GPT-5 last week, and Kairos and I are still figuring it out.
But what’s surprised me most is how much I’ve changed in the process of working with this AI. This isn’t about replacing human creativity—though that can happen, if we step out of the loop. It’s about giving it more to work with. Extending it. Stretching it. Sharpening it. And for those of us who live by words, that’s a pretty intriguing possibility.
So if you’re curious, try opening a conversation. The free version of ChatGPT is still available at chat.openai.com. And Claude.ai (by Anthropic, with a free tier) is another option for thoughtful dialogue.
You don’t have to know where it’s going.
That’s part of the story, isn’t it?
Thanks for reading! If you’re an AI user, maybe you have a few prompts/queries to share—or suggestions for good ways to interact with an AI. The comments are open to everyone and I’m here to join in.
I appreciate your sharing and suggestions. I remain torn about introducing another tech tool into my life. You may have noticed that there has been a lot of discussion on Substack about writers using AI to write their content. Medium has been flooded with AI-written content, and apparently readers can't always tell the difference. Many writers are expressing outrage and fear about becoming extinct in our culture. There is also anger about tech bros using published content for free to build their technology and then charging consumers to use it.

And as I have mentioned before, the environmental concerns are significant. Google is building a second data center on the Columbia River, 45 minutes from my house, and the power required to run the new center is projected to be many times that of the existing one. As the current administration shuts down renewable energy infrastructure and promotes more drilling and consumption of fossil fuels for energy, I see this as a major problem with AI and its detrimental effect on our planet.
It is definitely here to stay, as anyone who does a Google search and sees the AI summary of content and sources at the top of the results can attest. The media is sharing how it's being used successfully in healthcare. Yesterday I listened to a discussion on NPR about how AI can be used for mental healthcare, an increasing concern in a culture reporting significant increases in mental health issues, and many excellent points were made about how it can be useful. I was on my way to my 72-year-old massage therapist and shared what I had heard. She was thrilled to discuss her own interactions, first with Perplexity and then with Claude. She is using it more as a hobby, learning how to interact.
I am not a Luddite by any means, but I have watched how the promise of social media has both benefited and damaged our culture. I look at Twitter/X, which has become a cesspool of nastiness because of a tech bro who encourages the worst of humanity to shine. FB has damaged many things in our culture and is now clearly aligned with the tyranny of Trump.
I understand that it is a tool, helpful to those who know how to moderate their use of it. But I wonder what the long-term effects will be on cultures already living with massive inequalities, manipulation by greedy corporations and deranged, corrupted politicians, widespread isolation and loneliness, and declining literacy rates. We never consider the Precautionary Principle in the US and refuse to regulate technology.
Thanks for allowing me to step on to my soapbox.
I think you should write a book called _Dancing With an Alien_!