NewsGuard, a news-rating group, said in a report published Monday that it had identified dozens of news websites generated by AI chatbots proliferating online, a finding that raises questions about how the technology could supercharge established fraud techniques. Bloomberg independently reviewed the 49 websites, which range from generic-sounding breaking-news sites like News Live 79 and Daily Business Post to sites that offer lifestyle tips, celebrity news, or sponsored content.

None of these websites disclose that they are powered by AI chatbots such as OpenAI Inc.’s ChatGPT or, potentially, Alphabet Inc.’s Google Bard, which can produce detailed text from simple user prompts. Many of the sites launched this year, as the use of AI tools became more widespread.
According to NewsGuard’s report, the AI chatbots generated false information for published pieces on multiple occasions. In April, CelebritiesDeaths.com published an article titled “Biden dead. Harris acting President, address 9 a.m.,” while another website fabricated information about an architect’s life and work as part of a false obituary. TNewsNetwork published an unverified story about thousands of soldiers’ deaths in the Russia-Ukraine war, based on a YouTube video.
AI Chatbot Bard Linked to Fraudulent News Websites
NewsGuard notes that the majority of these websites appear to be content farms, which are low-quality websites operated by anonymous sources that churn out posts to generate advertising revenue. The websites are published in multiple languages, including English, Portuguese, Tagalog, and Thai, and are based all over the world.
Some of the websites generated revenue by offering “guest posting,” where individuals could pay a fee to have their businesses mentioned on the websites to improve their search ranking. Other sites, such as ScoopEarth.com, which publishes celebrity biographies and has a related Facebook page with 124,000 followers, aimed to build an audience on social media.
Over half of the websites earn revenue through programmatic ads, which are bought and sold automatically using algorithms. The issue is particularly challenging for Google, whose AI chatbot Bard may have been used by these sites, and whose advertising technology generates revenue for half of them.
OpenAI and Google urged to train models to prevent fake news
Gordon Crovitz, co-CEO of NewsGuard, stated that the report illustrates the need for companies like OpenAI and Google to train their models not to generate fake news. Crovitz, a former publisher of the Wall Street Journal, called the use of AI models known for fabricating facts to create news websites “fraud masquerading as journalism.”
OpenAI did not immediately respond to a request for comment. However, the company has previously stated that it employs a combination of human reviewers and automated systems to identify and prevent the misuse of its model, which includes issuing warnings and, in severe cases, banning users.
When asked whether the AI-generated websites violated its advertising policies, Google spokesperson Michael Aciman said the company does not allow ads to run alongside harmful, spammy, or copied content. When enforcing those policies, Aciman said, Google focuses on the quality of the content rather than on how it was generated, and it blocks or removes ads from serving if it detects violations.
Google removes ads from AI-generated news sites
After receiving an inquiry from Bloomberg, Google removed ads from individual pages on some of the sites. In cases where the company discovered pervasive violations, it removed ads from the websites altogether. Google explained that the presence of AI-generated content is not inherently a violation of its ad policies, but it evaluates content against its existing publisher policies. Additionally, the company stated that the use of automation, including AI, to generate content for the purpose of manipulating search result rankings violates its spam policies. Google regularly monitors abuse trends within its ads ecosystem and adjusts its policies and enforcement systems accordingly.
NewsGuard researchers used tools such as CrowdTangle, which is owned by Facebook parent Meta, and the media-monitoring platform Meltwater to find the 49 AI-generated sites, running keyword searches for telltale phrases such as “as an AI large language model” and “my cutoff date in September 2021.” The articles were then run through the AI text classifier GPTZero to gauge whether they were likely written entirely by AI.
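The keyword-search step can be illustrated with a short sketch. This is not NewsGuard’s actual tooling; the phrase list and function name are assumptions for illustration, showing how boilerplate error messages left behind by a chatbot can flag an article as likely machine-generated.

```python
# Illustrative sketch (assumed phrase list, not NewsGuard's real pipeline):
# flag articles containing boilerplate phrases that AI chatbots commonly
# leave behind when a site publishes raw model output unedited.

AI_ERROR_PHRASES = [
    "as an ai large language model",
    "my cutoff date in september 2021",
    "i cannot fulfil this prompt",
    "i cannot fulfill this request",
]

def flag_ai_boilerplate(article_text: str) -> list[str]:
    """Return any telltale chatbot phrases found in an article."""
    text = article_text.lower()
    return [phrase for phrase in AI_ERROR_PHRASES if phrase in text]

# Example drawn from the CountyLocalNews.com article quoted below:
sample = ("Death News. Sorry, I cannot fulfil this prompt as it goes "
          "against ethical and moral principles.")
print(flag_ai_boilerplate(sample))  # ['i cannot fulfil this prompt']
```

A simple substring scan like this only catches the most careless sites; that is why NewsGuard paired it with a statistical classifier (GPTZero) to assess articles that contain no such error messages.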
According to NewsGuard’s report, each of the analyzed sites contained at least one error message commonly produced by AI-generated text, and some featured fake author profiles. For instance, CountyLocalNews.com, a website that covers crime and current events, published an article in March on a false conspiracy theory about mass human deaths due to vaccines. The article used the output of an AI chatbot prompted to write about the issue, which responded with a message that read, “Death News. Sorry, I cannot fulfil this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy theory that is not based on scientific evidence and can cause harm and damage to public health.”