The irony of AI checkers

by Vivienne Pearson
30 May 2024

Have you had an editor or content manager question (or outright reject) your original work after putting it through various AI checkers? You’re not alone.

Artificial Intelligence has been a big topic in writer circles since the launch of OpenAI’s ChatGPT in late 2022, and a current zeitgeist issue for writers is AI checkers (also known as AI detectors or AI content detectors).

Mary Nguyen, a writer, digital marketer and Rachel’s List Gold Group member, has recently been on the pointy end of AI checkers. She posted:

“Just had a client get back to me that 30% of a piece I had submitted had been flagged as written by AI. Starting to get a bit paranoid that my writing style is either too robotic or I’m paraphrasing existing content too closely. Has this happened to anyone else? How did you respond?”

On reading this, I instantly thought of a LinkedIn post by another writer. Stephanie Wood, an award-winning freelance writer and journalist, posted in response to being offered consultation work, paid at a pittance rate, to train AI systems. Her post (slightly edited for brevity) reads:

“How many insults can be added to injury? The profession of journalism is in tatters already, AI is doing its best to take down what there is left of it, and I am being offered up to $25 an hour to help train AI models. If it wasn’t so offensive it would be funny.”

Between these two experiences lies the irony of this stage of AI.

🫱 On the one hand, writers are being used to train AI to make it more accurate and to sound more like a real human.

🫲 On the other hand, writers are being accused of using AI because checking programs are picking up similarities between writer-trained AI and real writing.

Can you see the issue here?

Rachel’s List Gold Group member Fiona McNeill can, as she responded to Mary’s post by saying:

“AI has been ‘taught’ what quality writing looks like by looking at content written, of course, by humans. So it produces content that looks like that produced by a quality writer.”

With the global nature of AI, this is far from just an Australian problem. In The Freelance Content Marketing Writer, a Facebook group run by Content Byte 2023 keynote speaker Jennifer Goforth Gregory, a writer asked for advice about which AI checker to use for her own writing.

Chanel Coetzee, a writer from Cape Town in South Africa, was hoping to find a different AI checker from the ones she’d used to test a trial piece for a potential new client who had stipulated no AI assistance. The detectors she’d tried were flagging possible AI writing even though she had written the piece herself.

False positives from AI checkers can leave writers feeling they need to rewrite sections or whole pieces flagged as AI-generated just to get the score down to zero. As Gold Group member Kate Holland shares:

“I spoke with a technical writer a while back who was told a piece of her writing was minimum 60% AI created. She didn’t use it at all and was horrified. And trying to rewrite it and ‘pass’ the human test wasn’t easy.”

What’s it like checking your own work for possible AI influence?

In the spirit of enquiry, I put a couple of my pieces through GPTZero, an AI checker that allows up to 5000 characters for free.

I’m not surprised the program was ‘highly confident’ that my first submitted piece was entirely human written. That’s because it was an opinion piece, recently published by Guardian Australia, about the demise of Bonza Airline, so the writing contained Australian colloquialisms as well as extensive personal experiences.

Even so, the piece was flagged as having a 2 percent possibility of being AI-generated. Might even this tiny number be enough to cause concern to an inexperienced or jumpy editor?

I braced myself for a much higher percentage for the second piece, which was an article I wrote for a communications company on a generic topic. This client does appreciate a degree of ‘humanness’ but has edited out my occasional attempt to be more quirky and, as I’m not a subject matter expert, I source my information from the same articles that they’d like me to ‘take inspiration’ from.

I was shocked to see the analysis return exactly the same outcome as for the opinion piece: ‘highly confident’ of human generation with only a 2 percent possibility it was AI-generated. Really? Hmmmm.

How big a problem is AI-detection for writers and editors?

A poll in Rachel’s List Gold Group revealed that only 2 out of 68 writer respondents have had – at least to their knowledge – their work checked for AI. Yet, whether out of simple curiosity or concern, 10 times that number said they’ve run their own work through a detector.

AI detection is also an issue for those responsible for commissioning writing. In a survey of editors and content managers in Rachel’s List Gold Group, only 1 out of 11 respondents said they had used AI-detector tools. A further 9 said they might in the future, including if they were dubious about a submitted piece of writing. Several noted they don’t consider these tools are accurate, so would use caution even if they did decide to try them out.

Interestingly, when Rachel Smith, founder of Rachel’s List, posted a similar poll on LinkedIn, she received only 2 responses, despite her polls usually garnering a big response. I wonder whether there is a reluctance among commissioners of writing to even discuss AI use. Perhaps this is not surprising given how fraught and confusing the topic currently is.

Is this a now or forever issue?

I realise that this might be a ‘now’ problem. Given the rapid development of AI, the irony of writers being caught between training AI and being accused of using it will likely dissipate or resolve over time.

But I wanted to write about this current zeitgeist because it’s causing real pain. For Mary Nguyen, being accused of using AI when she hadn’t used it led her to question her whole future as a writer (though, fortunately, thanks to the encouragement and wisdom of Gold Group members, she’s feeling confident once again).

Some tips for navigating this ironic time

Here are some tips for how to minimise the chances of your words being flagged as written by AI.

They’re suggested by a combination of myself and Gold Group member Fiona McNeill, who eloquently describes AI as ‘having a soulless camaraderie, which is difficult to quantify but comes across as a quite good but pushy salesperson’:

  • Include even small amounts of ‘soul’ to counter that ‘soulless camaraderie’.
  • AI loves an extended, vacuous, generic intro so try not to waffle at the beginning of a piece of writing.
  • If there is scope to include more personal elements, make the stories as individual and specific as possible.
  • Ignore auto-suggestions (which are a basic form of AI) whenever possible. Find alternative words or expressions to those being suggested.
  • Keep spelling and wording as naturally Australian as possible (obviously this may not be possible or appropriate for international publications and clients).
  • Keep writing for the one client recognisable as ‘written by you’. Unless a new brief is wildly different and therefore requires a different tone or style, try to keep the ‘you’ in each project.

Don’t be afraid to speak up with clients, either

If you’re in the position of having your writing checked for AI, consider these approaches (with thanks to commenters in The Freelance Content Marketing Writer, as well as Gold Group member Debbie Elkind, for some of these thoughts):

  • If you use AI for research or writing, consider the hints above to help separate out your final piece from the AI-generated draft.
  • Avoid clients that religiously trust AI detection tools (you might save yourself endless arguments and rounds of revision).
  • Find yourself with a client who uses AI-checkers against you? Try educating them about the variability and potential inaccuracy of these online tools, particularly for drier and more technical topics. You can:
    • Share this article.
    • Remind them that AI checkers are very new tools, so are effectively tech-infants.
    • Share the experience of Debbie Elkind who put the same piece of writing into three detectors only to have the results return as 100 percent AI | 100 percent human | 23 percent AI.
    • Share the experience of a Gold Group member (who has asked to not be named). After an AI-detector declared her original writing to have a 100 percent likelihood of having been AI generated, she asked ChatGPT to rewrite it. On re-testing the outcome, the same program judged it to be 100 percent human!
  • You can also bring the discussion back to quality, opinion and nuance. If an article is flagged as AI-generated, is that actually a problem? Does the client like the writing and does it meet the goal? If their briefs are inadvertently asking for AI-style writing (for example, generically-structured and blandly-written), what could be done to change this?

I’ll end this discussion with a hope (and a plea) that all writers, editors and content managers maintain their humanity by managing this current phase of AI development in as thoughtful and kind a way as possible.

Have you had to navigate tricky conversations with editors or clients who use highly inaccurate AI checkers on your filed work? We’d love to hear your experiences on this topic…
