Have we graduated from A.I.-generated stories about compassionate celebrities to A.I.-enhanced political commentary?

We’ve all seen those feel-good, A.I.-generated Facebook fables about aging classic rockers: Bruce Springsteen feeding the homeless. Steven Tyler building a doghouse for rescued animals. Tyler and Mick Jagger in an intimate visit to a hospitalized Phil Collins. Cloying descriptions like, “The room didn’t feel like a hospital anymore — it felt like the heart of music itself, still beating.”
Most of the “stories” come with weepy, A.I.-generated images.
On social media, users have been eating up these heart-tugging tales like squishy candy, reposting them by the tens of thousands, and making them go viral. Commenters obediently click “like” and express deep admiration for the selfless stars. Even some skeptics have suggested that it doesn’t matter whether or not the stories are A.I.-generated, because they’re “sweet.”
The stories are not sweet. They’re fake news — bald-faced lies designed to get clicks, drive traffic to social media profiles and web pages, and generate income for the creators. And now, it seems, there’s a new wrinkle in the world of A.I.-generated news — A.I.-enhanced political commentary.
THE FACEBOOK PROFILE of a user named Bruce Fanger (who writes as White Rose) has been issuing a slew of viral opinion pieces on current political events — and receiving big kudos and reposts from smart people with lofty academic degrees. Fanger, who boasts 29,000 followers, holds a leftist view of politics much like my own. I was recently directed to his page by a like-minded professor friend who was dubious of the authenticity of Fanger’s work. One of his most recent posts, from November 21, was about the Trump/Mamdani meeting that took place that same day. As of this writing, the essay has received 93,000 reactions, been reposted 64,000 times, and elicited more than 5,000 comments.
In the post, Fanger writes about what happens when you watch the full 30-minute meeting between Trump and Mamdani instead of the shorter snippets posted by most news outlets: “The edges soften. The masks slip. And you start to see the actual geometry of the interaction — where power sits, where insecurity leaks, where the tone changes, where the truth speaks by accident … This wasn’t a showdown. It wasn’t a humiliation. It wasn’t a triumph for either man. It was something far more revealing: a case study in how a bully behaves when he can’t rely on fear, and how a principled politician behaves when he refuses the role of the victim.”

To be clear, Fanger is not writing fake stories like those noxious celebrity fables. The Trump/Mamdani meeting was a consequential event in the news cycle. One follower, a political scientist and former school principal who reposted the essay, wrote, “Bruce Fanger’s analysis of a meeting between Donald Trump and New York City Mayor-Elect Mamdani focuses on the dynamics of power and the psychology of a bully. The central point of the analysis is that Trump’s demeanor shifts dramatically when he is not met with fear or deference.”
Others were generally impressed with Fanger’s work, but weren’t so thrilled with the melodrama. “Sounds like AI wrote the analysis,” a media and marketing executive wrote. “Either way, I am enlightened and in awe of the wisdom.” Another marketing executive was more pointed in his criticism: “You don’t need to use AI to write this stuff,” he commented, “but if you do, here’s a prompt tip: ‘rewrite with a 50% reduction in word count.’”
The Trump/Mamdani essay is hardly the only instance of Fanger’s use of purple prose in his analyses of current events. He’s prolific. This month alone, he’s posted pieces on numerous topics including:
- The Epstein files: “The truth was never hidden. It walked runways, strutted through pageants, smiled from magazine covers … The network that birthed Epstein’s operation was already humming in daylight — its glamour a disguise for coercion, its language a mask for power…”
- Recent Democratic victories: “Ten months into Donald Trump’s second term, the nation was restless, brittle, and watching itself in a cracked mirror. What happened last night wasn’t a political earthquake so much as a pulse check — and the heartbeat was irregular…”
- Marjorie Taylor Greene’s beef with fellow Republicans: “She walked into a room she wasn’t supposed to enter, touched walls she wasn’t supposed to touch, and learned — fast — that the movement she helped build has a panic button, and someone slammed it.”
FOR ALL THE THEATRICS, smart analysis, and thorough sourcing, the red flags scattered throughout Fanger’s breathless output are glaring. He posts up to two or three items a day, all filled with grand metaphors describing dramatic scenes. Curious about the work of this writer I’d never heard of before, I took a deep dive into his page history, back to before the rise of easily accessible A.I. platforms. What I found was that Fanger’s writing has changed markedly over the past three years.
In a 2022 post, his voice was fairly natural and his prose straightforward. “The idea that love, sorrow, and death are intrinsically linked is one that can be difficult to wrap our minds around,” Fanger wrote about Buddhist-like meditation. “Yet, it is an essential piece of the human experience. When we look deeply at the core of these three states of being, we can see that there is an underlying common thread that unites them.”
In a March 2023 post, Fanger wrote, “I asked Bing AI what’s its opinion of Tucker Carlson,” and then he proceeded to quote the A.I. opinion. By 2024, Fanger had begun using facts and details in his work that his earlier posts lacked — suggesting (though not proving) that he was beginning to simply cut and paste A.I. into his posts without saying so.
More recently, he’s been extraordinarily prolific, and his posts have begun to read as artificially dramatic and emotional as some of those fake inspirational posts about aging celebrities. My suspicion was that Fanger’s recent posts were A.I.-generated. I popped a few of his paragraphs into a couple of A.I. detectors — Originality.ai and Copyleaks — and they agreed, both reporting a 100-percent likelihood that the pieces were written by A.I.

This doesn’t prove that Fanger is cutting and pasting A.I.-written essays. If a writer merely uses A.I. as an “editor,” those detectors will detect it. And that’s exactly what Fanger says he’s been doing. “Thanks for raising this — it’s a fair question, and it’s something every writer is going to have to sort out as AI becomes part of the toolbox,” Fanger replied when I reached out to him about my concerns. “I do use AI, but only the way I’ve used editors in the past: I bring the draft, the argument, the voice, and the ideas. The tool helps me catch clunky phrasing, tighten a line, or smooth out a transition I’ve stared at too long. It doesn’t write the pieces; it just helps me finish them.”
That’s certainly fair. Going forward, all of us will be using A.I. tools more and more. And we all are going to have to decide where we draw ethical lines. Hopefully, we’ll decide collectively and, in some way, codify the rules.
As for the shift in Fanger’s tone over the past three years, he said that he feels his subject matter — current political events — requires more dramatic rendering and says that his writing has simply improved to accommodate this. “The growth in style isn’t coming from AI — it’s coming from writing every day,” Fanger wrote in his message to me. “Three to four hours of steady practice changes your craft whether you like it or not. You find a sharper rhythm, a cleaner angle, a leaner metaphor. The stakes are higher, the subject is darker, and the writing follows.”
I asked Fanger if he felt that his work should include a disclosure about the use of A.I., and he said, “If I were feeding a blank screen to AI and letting it generate the whole thing, then yes — that should be labeled. But that’s not what’s happening. The thinking, structure, and point of view are mine.” He added, “I’m not opposed to transparency — we’re all going to need it going forward — but I’m not passing off machine-written work as my own.”
HOWEVER FANGER IS COMPOSING his political commentary, he made some valid points about major news outlets failing to deliver forceful stories about our current political environment. “Honestly, Mark, that’s the whole reason I launched the White Rose project in the first place,” he wrote. “Because the people who are supposed to be carrying the ball — the real journalists, the newsrooms with budgets, the platforms with reach — keep tiptoeing around the stories that actually matter.
“My writing style isn’t the story,” he continued. “How much some seventy-something washed-up academic uses or doesn’t use AI isn’t the story. The story is the thing nobody with a press badge will look straight at: Trump Model Management. I’ve been pounding that drum for months, and mainstream outlets still treat it like a rumor instead of the hard, documented record it is.”
He’s right, of course. But the parallel story that’s equally important and impactful for our culture and our future is how we — as writers, journalists, bloggers, media outlets, and everyday citizens — will be using A.I. to disseminate information. Will our opinions, our commentary, our descriptions of events be our own, or will they be artificially enhanced?
So far, most media outlets are coming up with guidelines independently, each generally mandating that any use of A.I. requires “human oversight.” The Nieman Journalism Lab, whose stated mission is “an attempt to help journalism figure out its future in an Internet age,” has compiled a list of policies and plans from 21 newsrooms across the U.S. and abroad. One outlet, ANP, the Dutch news agency, says that its journalists can use A.I. or similar systems to “support final editing, provided that a human is doing a final check afterwards.” The Guardian, on the other hand, is less clear, its guidelines stating simply that A.I. use must be linked to a “specific benefit and the explicit permission of a senior editor.”
We’re already pretty far into the weeds for the rules to be so vague. So naturally, I decided to go straight to the source to find out if it’s already too late, posing this question to Google A.I.: “Have we waited too long to come up with a codified set of rules on the use of A.I. by journalists, essayists, and opinion writers?”
The answer: “While many journalism organizations have issued guidelines, the lack of comprehensive, legally binding, and universally adopted rules suggests the industry may be behind in managing the widespread and often undisclosed use of AI. The consensus is that industry-wide, codified rules are overdue.”
Well written!
— Jeff?
Excellent piece, Mark. So glad you called out those fake rocker stories. They are a blight.
I’m not an intellectual, a journalist, or anyone along those lines, but reading about the way AI can edit or reshape wording to make an article more interesting, understandable, or concise seems like a good thing as long as the ideas remain the same. But I’m wondering about the truth. Where does AI stand on reporting “the truth”? Is it capable of lying or stretching the truth? Can AI be counted on to be truthful? — J