A new investigation reveals that the most popular news app in the U.S. published over three dozen inaccurate, AI-lifted, or AI-bylined stories in the past three years, with real-world consequences.
NewsBreak, the most popular news app in the U.S., advertises itself as a local news source. It tops the Google Play store, with over 50 million downloads, and dominates the Apple App Store news charts, outperformed only by X and Reddit.
The app operates only in the U.S. and works as an aggregator, pooling news from different outlets, like Fox, Reuters, and CNN, onto one platform.
A Wednesday Reuters report found that NewsBreak had used AI at least 40 times since 2021 to publish inaccurate stories, post stories from other sources under fake bylines, and take content from competitors.
For example, two AI-based stories on NewsBreak incorrectly stated that Pennsylvania-based charity Harvest912 was hosting a 24-hour health clinic for the homeless.
“You are doing HARM by publishing this misinformation – homeless people will walk to these venues to attend a clinic that is not happening,” Harvest912 wrote in a January email to NewsBreak.
Another email to NewsBreak, from Colorado-based food bank Food to Power, detailed how NewsBreak incorrectly stated when food would be distributed on three separate occasions: in January, February, and March.
The food bank had to explain the error to people who showed up in response to the NewsBreak articles and send them home without the food they expected.
NewsBreak told Reuters that it took down the five articles containing inaccurate information.
Related: Microsoft Replaced Its News Editors With AI. It’s Brought One Disaster After Another
As for AI tools and fake bylines, NewsBreak appears to have used five fake names as bylines for AI-generated repostings of stories from other sites.
Former NewsBreak consultant and former Wall Street Journal executive editor Norm Pearlstine flagged the issue in a May 2022 company memo to NewsBreak CEO Jeff Zheng, writing, “I can’t think of a faster way to destroy the NewsBreak brand.”
Zheng responded to the memo, acknowledging the problem and asking the team to fix it.
Related: OpenAI Can Now Access Financial Times Articles to Train AI
NewsBreak isn’t the only news outlet facing scrutiny over AI content. Bloomberg reported earlier this month that local San Francisco newspaper Hoodline was relying on AI to churn out stories, at one point attributing those stories to unique AI personas complete with their own bios.
AI has also been known to generate inaccurate content. News outlet CNET used AI to write over 70 articles last year and had to issue corrections for many of them due to factual errors.
Meanwhile, last week, Google announced “more than a dozen technical improvements” after users found that AI Overviews in its search engine gave some inaccurate answers.
Related: Google’s AI Overviews Are Already Getting Major Things Wrong
