The discourse on “AI” systems, chatbots, “assistants”, and “research helpers” is defined by promises about the future. Those systems are dysfunctional, or at least not working well, right now, but there’s the promise of things getting better. That matches how we often perceive tech to work: Early versions might be […]
That’s something people really have to get into their heads: an “answer” from an LLM is just a series of high-probability tokens. It’s only us humans who read reason and value into it. From the system’s standpoint it’s just numbers without any meaning whatsoever, and no amount of massaging will change that. LLMs are about as “intelligent” as a fancy database query.
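The mechanism the quote describes, picking a high-probability next token, can be sketched in a few lines of Python. The token scores below are invented for illustration; a real model would produce logits over a vocabulary of tens of thousands of tokens:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits a model might assign to candidate next tokens
# after the prompt "The capital of France is" (made-up numbers).
logits = {"Paris": 9.1, "Lyon": 4.2, "a": 3.0, "the": 2.5}

probs = softmax(logits)
# Greedy decoding: emit the single highest-probability token.
next_token = max(probs, key=probs.get)  # → "Paris"
```

Nothing in that loop knows what “Paris” means; the model just ranks numbers, which is the quote’s point.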
I disagree!