With several other editors, I run a Medium publication called Prism & Pen. Our published policy is not to accept any article written with AI assistance. That said ...
We don't have a good way to know when that's happening. I mean, sometimes it's obvious. An article structured like a high-school theme paper with no deviations from standard grammar and punctuation rings an alarm bell. (Nobody's that perfect except a computer!)
Another tip-off is a certain lack of human heart. Mostly, we publish personal stories, and AI isn't very good at simulating that.
Something working in our favor is that we don't publish many one-offs. Most of our writers stay with us for quite a while, so it becomes easier to tell when they're writing from their hearts.
Still, we editors have spent some time debating whether particular pieces are AI-written or not, and I don't know how we could answer that question with any real accuracy.
Here's a question for you. Why do you think AI detectors are accurate?
I mean, the only experience I have with AI is how bad it is. Google gives me AI-generated answers when I ask it a question, and they're usually howlingly wrong in at least some particular.
A friend of mine asked a generative AI routine to write an article about me, and it said that I'm a columnist for the New York Times. I wish! Lol.
So it's not my experience that AI is good at doing anything except making very embarrassing and obvious errors of fact. Why, then, would an AI routine trained to detect AI be any better? If AI can't get anything else right, why would it get this right?
But I certainly join you in expressing frustration with the number of people on Medium trying to write with AI. It's something we should all work hard to put a stop to; I'm just not sure how we can ever be certain it's happening.