Medium Bans AI Content From Payment Program
Medium is enforcing new rules for AI content and images as of May 1st. I am grateful, but also concerned. Here's what you need to know.
On Wednesday, Medium sent out an email saying that as of May 1st, AI-generated content is not permitted behind the paywall, does not qualify for Boost distribution, and non-compliance can result in removal from the Partner Program.
Here’s a snippet…
Beginning May 1, 2024, stories with AI-generated writing (disclosed as such or not) are not allowed to be paywalled as part of our Partner Program.
Accounts that have fully AI-generated writing behind the paywall may have those stories removed from the paywall, and/or have their Partner Program enrollment revoked.
The email asks writers to remove all AI-generated content from behind the paywall before May 1 to stay compliant. AI images don’t have to be removed, but they must be clearly marked as such. It also links to Medium’s updated AI policy.
I went to read the new policy. Here’s how it begins:
Medium is for human storytelling, not AI generated writing.
There’s a quote often attributed to Hemingway that says writing is easy, you just open a vein and bleed. Hemingway didn’t say that. It was actually Paul Gallico, author of The Poseidon Adventure, and what he said was, “It is only when you open your veins and bleed onto the page a little that you establish contact with your reader.”
In that context, it’s dead on right. Easy reading is hard writing and kudos to Medium for taking a stand for writers. It’s frustrating to think I spend hours on a post and right next to my labor of love is some piece cranked out in ChatGPT in minutes flat.
But. That said, I have concerns.
Concern #1: Disclosed or not…
The new policy says “Beginning May 1, 2024, stories with AI-generated writing (disclosed as such or not) are not allowed to be paywalled.”
It’s the “disclosed or not” part that concerns me.
Last May I wrote about freelance writers who are having pay withheld or getting fired for using AI when they didn’t use it at all. AI detection tools suck. OpenAI created ChatGPT, but even their own AI detector can’t reliably tell AI content from content written by humans.
Medium featured a story (in their staff picks) that shines a bright light on the problem. Three different chatbots and one human each wrote a poem. Then 38 AI experts and 39 English experts were asked to pick which was which. The AI experts failed abysmally: only 3 of them correctly identified which poems were AI and which were human, compared to 11 of the English experts.
Turns out people who are experts at the English language can identify AI far better than any “AI” expert. If AI experts can’t tell AI writing from human writing, how are they supposed to program their software to do it correctly?
Short answer: they can’t. In the piece I wrote last May, I tested 6 different AI detection tools and they scored very (very!) poorly at telling human content from AI content.
***
Concern #2: You know where AI learned to write, yeah?
It’s technically incorrect to say AI is writing. It’s not. AI is a probability generator. AI content generators are computer programs that were fed vast swaths of human-written text. When given a prompt, they calculate the probability of words appearing next to each other based on the content fed to them. They take words humans wrote and create unique mashups of everything fed to them.
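To make that concrete, here’s a toy sketch (my own illustration, not anything Medium or OpenAI actually runs) of the core idea: count which words follow which in some human-written text, then turn those counts into next-word probabilities. Real models are vastly bigger and use neural networks instead of simple counts, but the principle of predicting the next word from patterns in human text is the same.

```python
from collections import Counter, defaultdict

def bigram_probs(text):
    """Count which word follows which, then turn the counts into probabilities."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    # Normalize each word's follower counts into probabilities
    return {
        word: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
        for word, counts in followers.items()
    }

probs = bigram_probs("the cat sat on the mat the cat ate the fish")
# In this tiny "corpus", "cat" follows "the" half the time:
# probs["the"] -> {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Every probability in that table comes straight from the human-written input, which is exactly the snake-eating-its-tail problem: the model’s output is, by construction, shaped like the human writing it was fed.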
So basically, AI replicates humans, but if we write like the content AI was trained on, we get flagged as AI. Ouroboros. Snake eating its tail.
There’s another factor. Some types of content are at higher risk. When I was testing AI detection tools, my personal essays always passed with flying colors. No AI is going to add an anecdote about their father or sister. You know? But. AI detection flagged one of my history posts. Said it was 26% AI. It wasn’t.
Would a post like that get flagged? Get me booted from the partner program? No idea. What percent is required to get a writer booted? No idea. They didn’t say. Things like that concern me. As an editor, it changes how I look at submissions, too.
***
Concern #3: Margin of error and repercussion…
In late March, Medium banned a bunch of accounts for fraudulent activity. There’s a ton of fraud on Medium. People impersonating writers, people setting up multiple accounts to inflate earnings, inauthentic engagement like people clapping and commenting on posts they haven’t read to inflate engagement stats.
So they whacked a bunch. 1.7% of accounts were suspended in one fell swoop. Which doesn’t sound like much until you do the math. Medium has over a million paid members. Roughly 17K accounts were suspended. Poof. Done.
Most suspended accounts didn’t reply. They knew what they were doing was against the policies. But a whole bunch of people reached out and said but no, wait, I’m a real person, I wasn’t trying to break the rules. Medium reinstated those accounts. Said they’re not trying to crack down on writers. Just spam and scams.
But that didn’t involve money. Now we’re talking money.
Let’s say Medium flags content as undisclosed AI. Removes it from the paywall. That leaves more money in the coffers for you and me. But what if some of the writers reach out and say no, that wasn’t AI generated. Then what? So sorry, tough luck? Or will they reinstate the piece and the writer, put it back in the paywall? And if they do, what about the pay that was distributed incorrectly? No idea there either.
***
I don’t have any answers. It’s a mess. In an ideal world, people wouldn’t try to pass off AI content as their own writing. They’d just disclose. But now they’re even less likely to do so, because disclosure disqualifies them from getting paid. In an ideal world, detection software would be able to do what its makers say it can do.
But it’s not an ideal world. Some legit writers will get flagged as AI. Some AI will pass the test and squeak through. Don’t know how it will unfold. Time will tell, I guess.
I’d love to know what you think.
P.S. You can read my pieces about ChatGPT and AI here, if you’re interested.
These are very valid concerns and this is a thoughtful essay! My expectation is that this is more of a mission statement and that the process to detect AI will be ongoing and subject to refinement. Medium has already been doing very important work to combat AI by using human curators in the Boost program.

Recently, my wife submitted a paper for her Master's program. The professor ran it through some plagiarism tool and it scored like a 4%. We asked him to clarify what that meant, and he said it was a confirmation that the paper hadn't been plagiarized (which we already knew). There is some danger in assigning numbers to things. People see a number like 4% and believe it means something, even if the "real" number for plagiarism on that paper was ZERO!

I use things like spellcheck, but I switched to painting my own featured images because I suspect there's a desire out there to see something human (even if it is rudimentary art compared to what some people can achieve).

The inescapable reality is that writers need protections against AI writing. If Medium is at the forefront of this effort, I guess that makes us the guinea pigs and we have to afford them some due consideration. So, I guess it's right to be concerned that our work will be incorrectly flagged, but we have to accept that's the new normal and be prepared to stay calm and defend ourselves if it happens. This process, as clumsy as it might be in the beginning, is preferable to having our work swallowed by a tidal wave of AI-generated "writing." Thanks again for this lovely post!
I too think this is a step in the right direction. But, yes, you bring up some valid concerns.
My experience as a college instructor has left me so frustrated with all of it!! They added AI detection software to the plagiarism software almost exactly one year ago. They took it out last month. So many issues. The system of identifying AI was flawed. It potentially flagged any use of Grammarly as AI-generated. Students caught onto this quickly and would say “I used Grammarly.” But the biggest issue? There is no way to prove anything is AI-generated. It definitely takes a human to look through the AI flags to determine whether AI was used or not. There are definitely hallmarks - no use of specifics or examples, lots of vague statements. Ultimately, without being able to prove the use of AI, there is little academic honesty departments can do.
I run everything through a free detection site anyway. I would think that anything under 30% is likely fine, but that’s me. Even in the 20% to 30% range, I’m scrutinizing.