Cupertino, California – Apple’s artificial intelligence-powered notification summarization feature on iPhones has come under scrutiny for producing inaccurate and misleading news alerts, sparking concerns about its potential to spread misinformation.
The issue came to light last week when the AI feature inaccurately summarized notifications from the BBC News app. In one case, it falsely claimed British darts player Luke Littler had won the PDC World Darts Championship a day before the tournament’s actual final, which Littler went on to win. Hours later, another notification incorrectly claimed that tennis legend Rafael Nadal had come out as gay.
The incidents have prompted criticism of Apple Intelligence, the tech giant’s AI system, which is currently in beta. The BBC revealed that it had been urging Apple to address the problem for over a month. In December, the broadcaster reported another incident in which an AI-generated headline falsely stated that Luigi Mangione, a suspect in the murder of UnitedHealthcare CEO Brian Thompson, had shot himself, an event that never occurred.
Apple told the BBC on Monday that it is working on an update to resolve the issue. The update will include a clarification to indicate when text displayed in notifications has been generated by Apple Intelligence, rather than appearing as if it came directly from news outlets.
“Apple Intelligence features are in beta, and we are continuously making improvements with the help of user feedback,” the company said in a statement. Apple also encouraged users to report concerns if they encounter unexpected or inaccurate notifications.
The BBC is not the only media organisation affected. In November, the feature incorrectly claimed Israeli Prime Minister Benjamin Netanyahu had been arrested. The error was flagged on Bluesky by Ken Schwencke, a senior editor at ProPublica.
Apple’s AI notification summaries aim to consolidate and rewrite news app notifications into brief, digestible updates. However, this has led to what experts call “hallucinations”: instances where AI generates false or misleading information with unwarranted confidence.
Ben Wood, chief analyst at CCS Insight, noted the broader challenges posed by generative AI technology. “We’ve already seen numerous examples of AI services confidently telling mistruths, so-called ‘hallucinations.’ Apple’s attempt to compress content into short summaries has compounded the issue, creating erroneous messages,” Wood said.
Apple’s rivals in the tech industry are closely watching how the company addresses the issue. The company has promised a fix “in the coming weeks.”
Generative AI systems, like Apple’s, rely on large language models trained on vast datasets to generate responses. When uncertain, these systems can still produce confidently inaccurate results, further fueling concerns about their reliability in handling sensitive or factual information.
The incidents highlight the risks of deploying AI technologies in public-facing applications without adequate safeguards, and Apple faces mounting pressure to restore trust in its AI-driven features.