Now in the News: Most Readers Want Publishers To Label AI-Generated Articles — But Trust Outlets Less When They Do

December 15, 2023

“We already expect quite a lot from the public in terms of media literacy to be able to navigate the contemporary information environment; the use of these technologies in news adds a whole other layer to that.”

An overwhelming majority of readers would like news publishers to tell them when AI has shaped the news coverage they’re seeing. But, new research finds, news outlets pay a price when they disclose using generative AI. That’s the conundrum at the heart of new research from the University of Minnesota’s Benjamin Toff and the Oxford Internet Institute’s Felix M. Simon. Their working paper “‘Or they could just not use it?’: The paradox of AI disclosure for audience trust in news” is one of the first experiments to examine audience perceptions of AI-generated news.

More than three-quarters of U.S. adults think news articles written by AI would be “a bad thing.” But, from Sports Illustrated to Gannett, it’s clear that particular ship has sailed. Asking Google for information and getting AI-generated content back isn’t the future, it’s our present-day reality.

Much of the existing research on perceptions of AI in newsmaking has focused on algorithmic news recommendation, i.e. questions like how readers feel about robots choosing their headlines. Some have suggested news consumers may perceive AI-generated news as more fair and neutral owing to the “machine heuristic,” in which people credit technology with operating without pesky things like human emotions or ulterior motives.

For this experiment, conducted in September 2023, participants read news articles of varying political content — ranging from a piece on the release of the “Barbie” film to coverage of an investigation into Hunter Biden. For some stories, the work was clearly labeled as AI-generated. Some of the AI-labeled articles were accompanied by a list of news reports used as sources.

A couple of limitations to note. The news articles shown to participants, though sourced from tech startup HeyWire AI, which sells “actual AI-generated journalistic content,” ran under a mock news org name, and the lack of real-world implications and associations may affect results. The sample of nearly 1,500 people also skewed slightly more educated and more liberal than the U.S. public at large. (There’s a wide — and widening — partisan divide when it comes to trust in news media.) This is a working paper, or preprint, meaning the findings have not yet been peer-reviewed. Co-author Toff has said the idea for this research came after he was asked about trust toward AI-generated news — and he didn’t know the answer. A few takeaways from the resulting experiment and a conversation with the co-authors:

Readers perceived news orgs publishing stories labeled as AI-generated as less trustworthy

On an 11-point trust scale, survey respondents who saw the news stories labeled as AI-generated rated the mock news organization roughly half a point lower than those shown the article without the label — a statistically significant difference. Interestingly, respondents did not evaluate the content of the AI-labeled article as less accurate or more biased. The researchers found the largest trust gap among those who were familiar with “what legitimate news production and reporting entails.” People with lower levels of “procedural news knowledge,” as the researchers put it, generally did not dock the news org trust points for labeling content as AI-generated.

Those who distrust news media…still distrust news media powered by AI

There’s some hope that generative AI could increase trust among those with the lowest confidence in media. Given historically low trust in media among Republicans in the U.S., perhaps some audiences would see generative AI as an improvement over professional journalists? An earlier experiment found that presenting news as sourced from AI reduced the perception of bias among people holding the most hostile partisan attitudes toward media. More recently, an editor from a German digital news site that experimented with AI-assisted content said an audience survey suggested some readers seem to favor “the mechanical accuracy of technology” over “the error-prone or ideologically shaped person.”

But co-authors Toff and Simon found no improvement in this experiment. Their research showed no changes from AI disclosures among the least trusting segments of the public. Future research could still explore whether different labels could build trust with certain segments of the public, Toff said in an email.

“I wonder if there are ways of describing how AI is used that actually offer audiences more assurances in the underlying information being reported, perhaps by highlighting where there is broad agreement across a wide range of sources reporting the same information,” Toff said.

“I don’t think all audiences will inevitably see all uses of these technologies in newsrooms as a net negative,” he added, “and I am especially interested in whether there are ways of describing these applications that may actually be greeted positively as a reason to be more trusting rather than less.”

The bots fared better when they cited their sources

Increasing transparency has been a hallmark of many efforts to improve trust in journalism, from a “show your work” ethos to enhanced bylines. With AI tools still regularly spitting out misinformation and hallucinating sources, giving readers the opportunity to double-check original source material is especially valuable. The researchers found that when a list of sources was provided alongside the news article, labels disclosing the use of AI did not reduce trust. In other words, the “negative effects associated with perceived trustworthiness are largely counteracted when articles disclose the list of sources used to generate the content.”

What now?

Confirming previous studies, Toff and Simon found an overwhelming majority believed news organizations should “alert readers or viewers that AI was used” — more than 80% across all respondents. Among those who said they wanted to see a disclosure, 78% said news organizations “should provide an explanatory note describing how AI was used.”

The researchers also accepted open-ended responses from study participants, which resulted in practical suggestions to label AI-generated content (“a universally accepted symbol” or “industry-wide labels” similar to the “standard way nutrition information is displayed on food products”) and some statements of blanket disapproval (“or they could just not do this,” one wrote).

“While people often say they want transparency and disclosure about all kinds of editorial practices and policies, the likelihood that people will actually click through and read and engage with detailed explanations about the use of these tools and technologies is probably quite low,” Toff said.

The nutritional labels mentioned by one respondent might be instructive for thinking about what news consumers want. “People want companies to disclose what’s in their food even if 99% of the time they aren’t going to actually read through the ingredient list,” Toff noted.

It’s easy to forget it’s only been one year since ChatGPT was released and helped kickstart a seismic shift in the tech industry. Many in journalism — and in our audiences — are still getting to know the technology and perceptions, for better or worse, may evolve. (The researchers found, for example, that respondents who said they heard or read “a lot” about news organizations using generative AI were more likely to say they thought AI did a better job than humans in writing news articles.)

“Audiences are already often deeply skeptical if not downright cynical about what human journalists do (and a lot of news that people encounter in their social media feeds doesn’t give them much reason to feel otherwise). Inevitably as these tools become more widely used, newsrooms will need to grapple with how to effectively communicate what these technologies are and are not being used for, and we know so little about how to do that,” Toff said. “We already expect quite a lot from the public in terms of media literacy to be able to navigate the contemporary information environment; the use of these technologies in news adds a whole other layer to that, and neither newsrooms nor the public have a very well developed vocabulary to navigate that on either side.”

Simon stressed these early findings should not deter news organizations from setting up rules around the responsible use and disclosure of AI and noted that comparative work around disclosures is “well underway.” News organizations should consider where disclosure makes sense (when an article was largely written by AI, for example) and where it may not (when journalists used an AI-transcription tool to transcribe interviews to inform the story).

States Newsroom Launches North Dakota Monitor To Close Out Groundbreaking Year

Amy Dalrymple to lead States Newsroom’s North Dakota outlet

States Newsroom, the nation’s leading network of state-based nonprofit news outlets, has launched the North Dakota Monitor to provide free, high-quality, nonpartisan reporting on the important issues affecting the Peace Garden State. The Monitor will be the 39th news site in States Newsroom’s network and the ninth launched in 2023 alone. With the addition of its eight content-sharing agreements, States Newsroom is on track to have a presence in all 50 states by the middle of next year, elevating its status to a fully national network.

Veteran North Dakota journalist Amy Dalrymple will lead the Monitor’s newsroom as editor-in-chief. She has worked as a journalist in North Dakota for 20 years, most recently as editor of The Bismarck Tribune. Dalrymple reported from Williston for Forum News Service from 2012–2017 to cover North Dakota’s oil boom and also covered higher education and other topics for The Forum of Fargo-Moorhead. She has been involved in covering every North Dakota legislative session since 2009. Dalrymple will continue to be based in Bismarck.

“We are excited to close out 2023 by addressing the growing issue of news deserts in North Dakota,” said Chris Fitzsimon, director and publisher of States Newsroom. “Our goal is to make the North Dakota Monitor the go-to source for how decisions are being made by government officials in Bismarck and around the state. We know that Amy and her team will deliver on that mission by providing clear and honest reporting on the issues most critical to the Peace Garden State — all without a paywall.”

As part of its continued growth, States Newsroom announced in March that it was selected by The Pew Charitable Trusts to be the new home of Stateline — merging the two state-policy-focused organizations to expand their incisive reporting on state government around the country. States Newsroom also launched content-sharing partnerships with eight independent nonprofit outlets, including The Texas Tribune, to host reporting on News from the States, its comprehensive statehouse news site.

Last year, in a national study on statehouse reporting, Pew Research Center cited States Newsroom and other nonprofit newsrooms as key to filling the void in coverage left by staffing cuts at legacy media outlets. According to Pew, the overall percentage of reporters working for nonprofit newsrooms in the statehouse press corps has more than tripled since 2014 and now makes up the largest portion of statehouse reporters in 10 states and the second largest in 17 states.

States Newsroom is a 501(c)(3) nonprofit funded by the generous contributions of readers and philanthropists. States Newsroom is committed to providing fact-based, non-partisan news to the public at no cost and ad-free.

Paywalled Content Is Deemed Higher Quality And More Trustworthy
• 30% of consumers believe that paid content is higher quality than free content, while 10% say free content is higher quality.

• 25% of consumers have more trust in publications that require a subscription, while 13% trust publications that are free to access more.

• Consumers believe publishers’ content is generally higher quality and more trustworthy when they have to pay to access it, according to research by Toolkits and National Research Group.

In a study of 1,007 U.S. consumers who have subscribed to digital publications, 30% of respondents said they believe content they must pay to access is higher quality than content that’s available for free, and 25% said they have more trust in publications that require a subscription. Ten percent said they view free content as higher quality and 13% said they trust free publications more.

Cause and effect

The correlation between paywalled content and consumer perceptions of quality and trust is clear, but the factors driving those attitudes are open to interpretation. Several explanations are plausible, including:

Better business models: Publishers have gravitated towards subscription offerings and paid content access in recent years largely because they believe audience revenue can help fuel healthier and more sustainable businesses. That theory may be proving true, as challenges with ad-supported models mount and publishers with audience revenue components to their businesses fare better than those without. It stands to reason that healthy, stable, and sustainable media businesses might be likelier to produce high-quality and trustworthy content.

Improved user experience: Publishers that generate revenue directly from their audiences are often less reliant on advertising and can offer improved user experiences as a result. Some offer paying readers ad-free or ad-light experiences, while the data associated with logged-in or known users can often improve ad relevancy – all of which can boost consumer perceptions of quality and trust. Consumers may not dislike advertising in and of itself, but publishers frequently find that improvements in user experience correlate with increased perceptions of quality, trust, and value.

Marketing and product positioning: Publishers that ask consumers to pay for content often go to great lengths to position it as higher quality and more trustworthy. Whether that’s true is subjective and difficult to quantify, but marketing messaging and product positioning alone might influence consumer perception, to some degree, regardless of the nature of the underlying content on offer.

Confirmation bias and behavioral psychology: Publishers increasingly find that subscription products and other audience revenue approaches enable them to more closely align their business needs with their editorial missions and the interests and needs of their audiences. While audience revenue might help underpin more viable business models, however, there’s also an argument to suggest it incentivizes publishers to pander to audiences by playing to their existing beliefs and biases. That, in turn, may influence perceptions of quality and trust. Elements of behavioral psychology come into play as well. Consumers often ascribe more value to products and services simply because they own them or because they’ve paid to access them, for example.