The short and disastrous history of AI content
Why do media and marketing companies keep falling for the delusion that replacing creative people with AI results in something better?
It’s hard to believe now, but BuzzFeed (which may soon file for bankruptcy) used to be praised for its vision.
Founded in New York City in 2006 by Jonah Peretti and John S. Johnson III, BuzzFeed grew fast with attention-grabbing content, including viral “listicles” and quizzes.
The publication was initially slammed for frivolous content, but it made money. In 2011, it used some of that money to hire Ben Smith to build a 250-person newsroom that did quality global investigative journalism. BuzzFeed even won a Pulitzer Prize in 2021.
BuzzFeed operated like a tech startup, raising $497 million across 12 funding rounds, including a $200 million investment from NBCUniversal in 2015. During its heyday, BuzzFeed acquired seven companies, including the ad startup Kingfish Labs in 2012, the data firm Torando Labs in October 2014, the news site HuffPost in November 2020, and the entertainment brand Complex Networks in June 2021.
The company grew to enormous heights, with annual revenue peaking at $436 million in 2022.
But in the following two years, revenue dropped to $210 million in 2023 before falling to $170 million in 2024. (Spoiler alert: It’s foolish to assume online ads will keep growing.)
The company had a big money problem. And its “solution” was to replace people with AI.
They fully embraced what I call AI Superiority Speciousness, or ASS.
BuzzFeed leadership laid off 180 people, then in early 2023 started using AI for quizzes and personalized content and for writing some articles. Readers hated the AI slop and abandoned the platform in huge numbers, destroying the company’s reputation.
Last year, the company reported a net loss of over $57 million. And now it’s considering bankruptcy.
Failure to learn
It’s a story as old as ChatGPT itself. Companies fall under the spell of AI, think it’s a solution, then fall on their ASS.
CNET used AI to create financial content. The AI blatantly plagiarized content not only from other publications, but also from CNET’s own human staff.
Video game outsourcing and support company Keywords Studios tried to create an entire video game using AI instead of human game developers. The project failed.
The Chicago Sun-Times newspaper published a summer guide by writer Marco Buscaglia, which was provided by the media unit King Features Syndicate. Buscaglia admitted he used AI to generate his recommended reading lists, which included at least 10 fake books attributed to real authors. Though the Sun-Times merely bought the content, the paper’s reputation was damaged.
Developers of Call of Duty angered players by using AI for the creation of in-game assets. The AI slop in the game was obvious. For example, some characters had the wrong number of fingers.
Spotify launched an “AI DJ” instead of relying on human music curators. Some users reported extreme frustration because the AI repeated the same tracks, ignored user genre preferences, and gave robotic, annoying commentary.
Sports Illustrated got caught publishing AI-written articles falsely bylined to fake authors with fabricated biographies and AI-generated headshots, severely damaging the magazine’s reputation.
Microsoft fired dozens of journalists and replaced them with AI for its MSN news page. The AI created dangerously false and offensive information, for example recommending that visitors to Ottawa enjoy a meal at a local food bank and headlining an obituary of an NBA player by saying he was “useless” instead of “dead.”
Game publisher Electronic Arts forced developers to use AI for game code. The AI created so many errors that it sharply increased the workload and burnout of staff.
Apple used AI to summarize news stories for lock screen push notifications. The BBC complained about Apple’s bad summaries, and a separate BBC analysis found that more than half of AI-based summaries had problems of some kind and that nearly 20% introduced totally false information.
An Ars Technica reporter used ChatGPT to generate quotes, which were false. The publication fired the reporter and apologized to readers.
Gannett, the largest newspaper chain in the United States, used an AI-based service for sports stories instead of hiring local reporters. Readers mocked the stories for their bizarre writing style.
This list represents a tiny fraction of the stories in which companies have experienced ASS.
Watch your ASS
There’s an unresolved mystery at the center of these stories: Knowing that so many companies fail so miserably with AI, and have their reputations severely damaged, why do companies keep doing it?
Here’s why.
You’re reading the free version of Machine Society. The paid version, which costs $5 per month or $50 per year, has full content. If you can, please support independent journalism in general, and this independent journalist in particular, by becoming a paid subscriber! If you can’t pay, you can still get the paid version by helping me promote my work. It’s easy! Go here for details.