Individuals are less likely to trust and share headlines labeled "AI-generated," even when those headlines are accurate or were written by humans.
News consumers are wary of headlines presented as products of artificial intelligence, perceiving them as potentially unreliable. The rapid progress and broad accessibility of generative AI technologies have sparked discussions among policymakers and social media platforms about labeling AI-generated content. The impact of such labels was evaluated in two studies (1).
Sacha Altay and Fabrizio Gilardi conducted two preregistered online experiments with 4,976 participants in the US and UK to investigate the effect of labeling headlines as AI-generated. Respondents rated 16 headlines that were either true or false and either AI- or human-generated. In Study 1, participants were randomly assigned to one of four conditions: (i) no headline was labeled as AI-generated, (ii) AI-generated headlines were labeled as AI-generated, (iii) human-generated headlines were labeled as AI-generated, or (iv) false headlines were labeled as false.
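For readers who find a concrete sketch helpful, the Study 1 design described above can be expressed as a minimal simulation. The condition names, the assumed even four-per-cell split of the 16 headlines, and the helper functions below are illustrative assumptions, not the authors' materials or code.

```python
# Minimal sketch of the Study 1 design (illustrative only, not the authors' code).
import random

VERACITY = ["true", "false"]
SOURCE = ["AI-generated", "human-generated"]

# 16 headlines crossing veracity and source; the even 4-per-cell split is assumed.
headlines = [
    {"id": i, "veracity": v, "source": s}
    for i, (v, s) in enumerate(
        (v, s) for v in VERACITY for s in SOURCE for _ in range(4)
    )
]

# Four between-subjects conditions, as described in the article.
CONDITIONS = [
    "no_label",                # (i) no headline labeled as AI-generated
    "label_ai_headlines",      # (ii) AI-generated headlines labeled as AI-generated
    "label_human_headlines",   # (iii) human-generated headlines labeled as AI-generated
    "label_false_headlines",   # (iv) false headlines labeled as false
]

def assign_condition(participant_id: int) -> str:
    """Randomly assign a participant to one of the four conditions."""
    random.seed(participant_id)  # seeded only to make this sketch reproducible
    return random.choice(CONDITIONS)

def labels_shown(condition: str, headline: dict) -> list:
    """Return the label(s), if any, shown with a headline under a given condition."""
    if condition == "label_ai_headlines" and headline["source"] == "AI-generated":
        return ["AI-generated"]
    if condition == "label_human_headlines" and headline["source"] == "human-generated":
        return ["AI-generated"]  # human-written headlines carry the AI label here
    if condition == "label_false_headlines" and headline["veracity"] == "false":
        return ["false"]
    return []

# Example: what the first participant would see for the first headline.
cond = assign_condition(participant_id=1)
print(cond, labels_shown(cond, headlines[0]))
```

In the study itself, each participant then rated every headline for perceived accuracy and willingness to share, which is how the effects reported below were measured.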
What is the Impact of Labeling Headlines as AI-Generated?
The results of Study 1 show that respondents rated headlines labeled as AI-generated as less accurate and were less willing to share them, regardless of whether the headlines were true or false, and regardless of whether they were created by humans or AI. The effect of labeling headlines as AI-generated was three times smaller than the effect of labeling headlines as false.
How do Participants Interpret the Definition of AI-Generated Headlines?
In Study 2, the authors investigated the mechanisms behind this AI aversion by experimentally manipulating the definition of AI-generated headlines shown to participants. They found that the aversion stems from the expectation that headlines labeled as AI-generated were written entirely by AI with no human supervision. Despite wide support for labeling AI-generated content, the authors argue that transparency about what the labels mean is needed, as the labels may have unintended negative consequences. According to the authors, to maximize impact, false AI-generated content should be labeled as false rather than solely as AI-generated.
Reference:
1. Altay S, Gilardi F. People are skeptical of headlines labeled as AI-generated, even if true or human-made, because they assume full AI automation. PNAS Nexus. https://academic.oup.com/pnasnexus/article/3/10/pgae403/7795946