A major journalism body has urged Apple to scrap its new generative AI feature after it created a misleading headline about the UnitedHealthcare CEO killing in the US.
The BBC complained to the US tech giant after Apple Intelligence, which uses artificial intelligence to summarize and group together notifications, generated a false headline about murder suspect Luigi Mangione.
The AI-powered summary made it appear BBC News had published an article claiming Mangione, the man accused of murdering healthcare insurance CEO Brian Thompson in New York, had shot himself. He had not.
Reporters Without Borders has now called on Apple to remove the technology. The firm has made no comment.
Apple Intelligence was launched in the UK last week.
Reporters Without Borders, also known as RSF, said it was “very concerned by the risks posed to media outlets” by AI tools.
The group said the BBC incident proves “generative AI services are still too immature to produce reliable information for the public”.
“AIs are probability machines, and facts can’t be decided by a roll of the dice,” Vincent Berthier, head of the RSF’s technology and journalism desk, added. “RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs.”
A spokesperson for the BBC said the corporation had contacted Apple “to raise this concern and fix the problem” after the headline issue.
The notification containing the false claim about Mangione otherwise accurately summarized stories about the overthrow of Bashar al-Assad’s regime in Syria and an update on South Korean President Yoon Suk Yeol.
The BBC has not confirmed whether Apple has responded to its complaint.
Mangione has now been charged with first-degree murder in the killing of Mr Thompson.
The BBC does not appear to be the only news publisher whose headlines have been misrepresented by Apple’s new AI tech.
On 21 November, three articles from the New York Times were grouped together in one notification, with one part reading “Netanyahu arrested”, referring to the Israeli prime minister.
The notification inaccurately summarized a report about the International Criminal Court issuing an arrest warrant for Netanyahu; there had been no reporting that he had been arrested.
The controversy comes as AI integration in news media continues to expand. Media experts warn that without proper safeguards, such incidents could become more frequent and damage public trust in news organizations. There are currently no specific international standards governing AI-powered news aggregation systems, and the episode has prompted fresh discussion about regulatory frameworks for AI use in news distribution.