Wired and Business Insider recently pulled a number of articles penned by a mysterious freelancer named Margaux Blanchard, after discovering they were almost certainly generated by AI, and full of fabricated characters and scenes.
That’s right: what looked like neat magazine features turned out to be digital mirages.
Suspicions were first tickled when “Blanchard” pitched a story about a secretive Colorado town called Gravemont.
Upon Googling, editors discovered it didn’t exist. She bypassed standard payment systems, demanded payment via check or PayPal, and couldn’t prove her identity.
Beyond Wired and Business Insider, other outlets like Cone Magazine, SFGate, and Naked Politics also published, then swiftly deleted, her bylines.
Inside Wired, there’s a bit of rueful awe. A pitch about virtual weddings in Minecraft seemed so vividly Wired-esque that it sailed through editorial filters, until deeper digging revealed there was no “Jessica Hu” and no virtual officiant.
It’s less “gotcha moment” and more “whoopsie-daisy”: “If anyone should be able to catch an AI scammer,” Wired admitted, “it’s us.”
These embarrassments aren’t isolated. Tech publisher CNET faced similar backlash when AI-written personal finance stories turned into error-riddled dumpster fires, prompting a newsroom union revolt demanding transparency.
It’s easy to mistake slick AI copy for genuine content, until you try to verify the details.
All this raises big questions: how did sophisticated AI fool clear-thinking editors? Even AI-detection tools failed to sniff it out. It shows that these systems can produce real-sounding stories with zero accountability, a scary gap in journalism’s lines of defense.
My take? This is the digital equivalent of a Trojan horse sitting right in your editorial inbox. Readers, editors, and tech need to team up on stronger verification routines, and maybe a little healthy skepticism isn’t such a bad thing after all.