It's time for the myth that #AI can summarize things to die.
"Apple Intelligence notification summaries are honestly pretty bad
Summaries are often wrong, usually odd, sometimes funny, rarely helpful."
arstechnica.com/apple/2024/11/…
https://tldr.nettime.org/@tante/113509505811866429
@tante This hasn't been my experience. I run a local 12B Mistral model and it does a great job summarizing things. Of course things break down eventually, but the context length has to get […]
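For anyone curious how such a local setup works in practice, here is a minimal sketch of summarization against a locally served model, assuming a 12B Mistral-family model exposed through Ollama's /api/generate HTTP endpoint; the model name "mistral-nemo" and the prompt wording are illustrative assumptions, not details from the post above:

import json
import urllib.request

# Assumption: Ollama is running locally and serving a 12B Mistral-family
# model under the name "mistral-nemo" (hypothetical choice for this sketch).
OLLAMA_URL = "http://localhost:11434/api/generate"

def summarize(text: str, model: str = "mistral-nemo") -> str:
    """Ask the local model for a short summary of `text`."""
    payload = {
        "model": model,
        "prompt": f"Summarize the following in three sentences:\n\n{text}",
        "stream": False,  # request a single JSON object, not a token stream
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns the generated text in the "response" field
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(summarize("Paste a long notification thread or article here."))

As the commenter notes, quality degrades once the input approaches the model's context window, so long threads would need chunking before being passed to summarize().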
@tante
So in a nutshell:
[…]
@tante
The financial authority in Australia, ASIC, tested AI summarisation earlier this year. They were not very impressed after the pilot:
"It wasnβt misleading. It just really didnβt [β¦]
@tante At work, we have a feature where Slack summarizes threads across different channels. It's pretty good. It's just Apple's quote-unquote "intelligence" that is so terrible.
@tante I think this is a case of bad input/training, i.e. another case of Gemini. ChatGPT can summarize things decently enough, so it's odd that such a simple AI from Apple would do so terribly.
@tante all things start out rough. It's only after we invest considerable work in refining & iteration that they become more useful & accurate.
@tante I've never laughed this hard at AI before. It's just hilariously bad.
@tante
llm -> summaries based on statistics over an internet of words
human -> wants a summary based on the meaning in that writing
@tante I never really understood this summarizing thing in the first place.
Summarizing means judging which information (or which details) are relevant for a certain audience/context and which […]
@tante Someone here made the statement that generative AI models' output is plagiarism. One correction: bad plagiarism.