Is Apple Intelligence Making Up Words Now?
As powerful as LLMs can be, they all share one weakness: hallucination. For reasons we don't fully understand, AI models have a habit of making things up, seemingly out of the blue. A response might be accurate, with well-cited sources and relevant information; then, all of a sudden, the AI pushes a false claim, or mistakenly interprets an ironic forum comment as fact. (That's how you end up with Google's AI Overviews recommending adding glue to your pizza.) Some LLMs may hallucinate less than others, but none are immune. That's why anytime you use a chatbot, you'll see some kind of warning on-screen, letting you know that the AI can make mistakes.

Apple Intelligence, Apple's AI platform, is no exception here. When the company first rolled out its AI, it included notification summaries as a "perk." Apple had to quickly backtrack, however, once the feature started incorrectly summarizing news alerts—such as in one case, when Apple Intelligence condense...