According to a new study by the Columbia Journalism Review's Tow Center for Digital Journalism, popular AI search tools provide incorrect or misleading information more than 60% of the time when answering questions about news content. The finding is worrying on two fronts: these tools undermine public trust in news reporting, and they cost publishers both traffic and revenue.

The researchers tested eight generative AI search tools, including ChatGPT, Perplexity, Gemini, and Grok, asking each to identify excerpts from 200 recent news articles. More than 60% of the answers were wrong: the chatbots frequently fabricated headlines, failed to cite articles, or quoted content without authorization. Even when they identified the correct publisher, the links they supplied often pointed to broken URLs, syndicated copies, or pages unrelated to the content. Disappointingly, the chatbots rarely expressed uncertainty, instead delivering incorrect answers with unwarranted confidence. ChatGPT, for example, gave 134 incorrect answers across 200 queries but signaled doubt only 15 times. The paid tiers fared no better: Perplexity Pro and Grok 3 produced even more incorrect answers, despite costing $20 and $40 per month, respectively.

On citation practices, several chatbots disregarded publishers' attempts to restrict access to their content, and five ignored the widely accepted Robots Exclusion Protocol (robots.txt). Perplexity correctly identified a National Geographic article even though the publisher had blocked its crawler, suggesting the restriction was not honored. ChatGPT, meanwhile, cited a paywalled USA Today article via an unauthorized republication on Yahoo News. Many chatbots also directed users to syndicated copies on platforms such as AOL or Yahoo rather than to the original source, even when the publisher had a licensing agreement with the AI company; Perplexity Pro, for instance, cited a republished version of a Texas Tribune article without crediting the original.
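For readers unfamiliar with the Robots Exclusion Protocol, here is a minimal sketch of how a well-behaved crawler consults it before fetching a page, using Python's standard-library `urllib.robotparser`. The robots.txt rules and the `GPTBot`/browser user-agent strings below are illustrative assumptions, not taken from the study or from any real publisher's file.

```python
# Sketch: checking the Robots Exclusion Protocol (robots.txt) before a fetch.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block the AI crawler "GPTBot", allow everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def is_allowed(user_agent: str, url: str, robots_txt: str = ROBOTS_TXT) -> bool:
    """Return True if the given robots.txt permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# A compliant crawler identifying itself as GPTBot would be blocked site-wide,
# while an ordinary browser-like agent falls under the "*" group and is allowed.
print(is_allowed("GPTBot", "https://example.com/article"))       # False
print(is_allowed("Mozilla/5.0", "https://example.com/article"))  # True
```

The protocol is purely advisory: nothing in it technically prevents a crawler from fetching a disallowed page, which is why the study's finding that several chatbots ignore it matters for publishers.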
Grok 3 and Gemini often invented URLs outright: 154 of Grok 3's 200 answers linked to error pages.

The research highlights a growing crisis for news organizations. More and more Americans are turning to AI tools as a source of information, but unlike Google, chatbots rarely drive traffic to websites; instead, they summarize content without linking back, costing publishers advertising revenue. Danielle Coffey of the News Media Alliance warned that without enforceable controls on crawlers, publishers will be unable to "monetize valuable content or pay journalists' salaries."

When the research team contacted OpenAI and Microsoft, both companies defended their practices but did not respond to the specific findings. OpenAI said it "respects publisher preferences" and helps users "discover quality content," while Microsoft said it follows the robots.txt protocol. The researchers emphasized that incorrect citation is a systemic problem rather than a flaw of any individual tool, and called on AI companies to improve transparency, accuracy, and respect for publishers' rights.

Source: AIbase