Does AI pose a threat to writers? Stupid people misusing AI definitely do.
I’m not a natural techie at
all. In fact, contrary to popular misconceptions about autistic people, I’m
actually fairly averse to new technology. I’ve only recently started using a Bluetooth
speaker, for example. So, my recent interest in Artificial Intelligence (AI) has come as a bit of a surprise to me.
![AI generated image]()
Since finishing my master’s
degree, I’ve been looking for ways to earn money online, and during my searching
I came across YouTube videos of people claiming that they had made money by
using AI to write ebooks. Feeling intrigued, I started experimenting with ChatGPT to see what it was capable of.
My first impression was that the
writing it produced was much better than I’d expected, and I started thinking,
oh dear, this doesn’t bode well for us writers. On closer inspection though, I
relaxed. AI writing is repetitive and rather bland, and furthermore, it can ‘hallucinate’
facts and sources of information. For example, I asked ChatGPT to give me a
reading list of books about British saltmarshes, and from the list of book
titles and authors it presented, only one actually exists – the others were
completely made up.
Still, I came away from my experiments
feeling that AI could be a useful tool for writers. It can prompt ideas. For
example, it can break down a book idea into possible chapter headings, and it can
answer some kinds of questions more easily than a straightforward search engine,
provided that you take any ‘facts’ it gives you with a pinch of salt. I
certainly wasn’t concerned at that point that AI posed any real threat to professional
writers.
However, as I searched more
for online writing jobs, I started coming across some troubling Reddit threads. Many freelance content writers were noticing the same
thing – it’s becoming harder and harder to find writing gigs online. This
reflected my own experience – I used to work as an online content writer around ten years ago, but things definitely seem different today. The opportunities seem fewer and farther between. Not only that, but a strange new phenomenon is emerging.
Writing clients are starting
to use ‘AI detectors’ – software designed to identify text written by AI. The
problem is, these detectors are flagging up writing that was most definitely
written by humans. One writer described how they’d been working for a
particular client for ten years, yet recently the client had started using an
AI detector. The detector started flagging up writing from multiple writers on
the team, and when those writers said that they hadn’t used AI, rather than
believing them, the client doubled down and continued to accuse them.
As a result, many of those writers are now leaving.
This seems so ridiculously
stupid to me. But given how much faith some people invest in technology, I can
well believe it’s true. You’d think that having worked with people for years,
you would 1) trust them more, and 2) recognise whether there had been a change
in the quality of their writing. Obviously, when they started writing for you
years ago they could not have been using AI, so unless there’s been a deterioration
in quality, there’d be no reason to trust the AI detector over them.
I don’t really know anything
about AI detectors or how they work, but on an intuitive level they seem
untrustworthy. After all, AI has learnt how to write from humans, so to then
try to judge whether human writing sounds like AI writing seems circular,
surely?
I decided to test a free AI
detector using a couple of paragraphs from this blog post. This was the result
(LOL!):
![Hmph!]()
You might be thinking, oh well, this seems more of a problem for content writers than creative writers. Yet I'm already noticing that magazines and journals are starting to stipulate in their submission guidelines that they won't accept AI-generated or AI-assisted work. Now, how are they determining whether a piece of creative work has been assisted by AI? Will creative journals also turn to AI detectors?
Anyway, the conclusion I take
away from all this is that the main threat posed by AI is not the AI itself. AI
can be useful when used properly, and I would say that text generators like ChatGPT could be particularly useful for people with communication disabilities. Like
any technology, the real risks develop in relation to how people use it. The
dangers of AI come from people abdicating their responsibility to exercise
their own intelligence and engage their thinking skills, instead relying on AI
to think for them. After all, AI detectors are themselves AI, right?
It seems too many people are focussing on the ‘I’ in AI, rather than on the ‘A’.
Have you used AI creatively? Have you ever run into any AI-related problems? Let me know in the comments – I'd be very interested to hear!

