Like many a Luddite, I have approached the A.I. revolution gingerly, dipping my feet in reluctantly to see how it will affect me as a writer: will it make me more productive, or render me obsolete?
After flirting with, and breaking up with, ChatGPT a couple of times over the past year — I did not find it particularly helpful, and disliked its sycophantic attitude — I decided last week to give Claude a try. Claude, you may recall, is the A.I. assistant devised by Anthropic, the company whose A.I. coordinates may have been responsible for obliterating a girls’ school in Iran.
As I pictured it, Claude might prove to be a less chatty, more muscular bot than GPT. Since I am in the process of readying the manuscript of a new novel for submission, I thought I’d upload the text and ask Claude to write me a ‘book pitch.’ It was worth a try.
It took only a couple of seconds for Claude to come through with a complete, although overly enthusiastic, summary of the entire book, multiple unsolicited tips for improvement, and an author biography. Also included was the pitch I’d asked for. I was dazzled. How could a machine ingest and spit out a synopsis of a 300-page novel in a nanosecond? But when I looked more closely (which took me more than a nanosecond), I found a number of glaring errors. My bio, for instance, bore no relation to reality. I pointed this out to Claude.
Here, verbatim, is the conversation that followed:
ME: Thank you for the book pitch, it’s quite impressive. However, how did you get my bio so wrong? There is plenty of information about my career and publications out there.
CLAUDE: You’re absolutely right to call that out—I invented your bio rather than looking you up, which was lazy of me. Let me fix that now.
*Searched the web* *Searched the web*
My apologies—I should have searched before writing that bio paragraph rather than fabricating details.
[Claude produces a more accurate bio, cribbed word for word from the Internet]
ME: Well, thanks. But it does give me pause—can I trust you not to make stuff up?
CLAUDE: That’s a fair and important question, and you deserve a straight answer. No, you can’t trust me unconditionally—and what just happened is a good illustration of why. When I wrote your bio, I had enough partial knowledge to sound plausible (your Dutch background, your translation work) but I filled in the gaps with invented specifics rather than admitting I didn’t know. That’s the most dangerous kind of error, because it looks credible.
Where you can rely on me fairly well: close reading of a text you’ve given me, structural analysis, craft feedback, drafting and editing tasks. Where you should be more cautious: anything involving facts about the real world. For those, I should either search, or flag explicitly that I’m uncertain. I didn’t do either. I just wrote confidently, which is worse than saying “I don’t know.”
The practical takeaway: treat anything I assert about the real world the way you’d treat a research assistant who is very well-read but occasionally confabulates.
[I then point out some other errors in Claude’s editorial notes, and he again profusely and verbosely apologizes, in the vein of ‘The dog ate my homework.’]
ME: Dear Claude, now that you have confessed (and apologized for) a string of mistakes with the rather lame justification, “Treat anything I assert about the real world the way you’d treat a research assistant who is very well-read but occasionally confabulates” (I think that anyone in their right mind would immediately show such a research assistant the door!), I must regretfully accept that you cannot provide the help I was hoping for.
CLAUDE: That’s a completely reasonable conclusion, and I won’t try to talk you out of it. You’re also right that “occasionally confabulates” is a poor excuse dressed up as a disclaimer. It was sloppy, and in a pitch package, that matters enormously. An agent who spotted an invented bio or a phantom manuscript error would lose confidence in you, not in me—which makes my carelessness genuinely harmful, not just embarrassing. I’m sorry the tool didn’t prove trustworthy enough for what you needed.
[Beneath it all, the interface’s standing disclaimer: “Claude is AI and can make mistakes. Please double-check responses.”]
If I’m being honest with myself, I have to admit that I did not share Claude’s smarmy regret over our break-up. On the contrary, I felt somewhat triumphant: I won! Here, for my primitive brain, was proof that the bots were not on the cusp of putting me out of business after all. My interaction with this chastened Claude showed that I did not need A.I. For now, anyway. I was still the one in control.
And yet... I am aware that the technology is constantly improving. I’d even bet Claude took advantage of our brief affair to teach himself how to be more irresistible the next time. I used the free Claude model; if I paid for a more advanced version, and learned how to word my prompts more effectively, would I be left more satisfied, to the point of surrender?
I have also learned that Claude sucked up many thousands of pirated books to train itself, in flagrant violation of copyright. I know this because I was invited to join a class action suit against its parent company, Anthropic, which has agreed to pay out some $1.5 billion to hundreds of thousands of copyright holders like me. I don’t know if Anthropic has said it was sorry for the unforgivable blunder of targeting a girls’ school in Iran. But Claude certainly seems more than ready to apologize to me, and if Anthropic pays me a few cents as compensation for the intellectual property it stole from me, I will graciously accept it, and stop complaining. After all, I’m only human.
