Re: recent boost
They say even a stopped clock is right twice a day, but I think that's the wrong frame to look at it through.
Can a stopped clock tell you the time? Can you know when a stopped clock is right, without the use of another clock? Can it give you information about the current time?
LLMs are the same way: they are also right sometimes. Can you tell when? Without consulting another, more reliable source of information?
@quirk @thomasfuchs even a working clock is only an approximation of reality
And as we understand it right now, that's still highly relativistic
what even is reality?
This isn't an argument in favour of LLMs or the use of the term "AI" to give anyone the remote impression of intelligence
It's rather a commentary on how deeply ingrained the marketing speak has been, for many, many centuries, that we think of objects that do one thing as though they can do another
@quirk @thomasfuchs We are victims of our own over-simplifications (and highly susceptible to those who have something to sell)
In fact, I have a remedy for that! Just take this snake oil twice a day
An interruption on the way to the point. ( ! )
Very good point. At some moment the line will cross for everyone. AI (artificial bullshitting) will be seen as the idiot nephew who came along and can't stop shouting his ignorance to interrupt actual discussion.
AI - Annoying Interruption
A stochastic clock is useful. Say, one that is correct 95% of the time. It's just less useful than a deterministic clock.
Your spam filters are barely right two thirds of the time, yet they save a ton of effort.
@AeonCypher true, and many clocks are never entirely right. If, say, they are off by a minute or two in one direction or the other, they have a bias, but if that bias is small enough they're still useful
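The stopped-vs-biased-vs-stochastic clock comparison above can be made concrete with a quick simulation. This is just an illustrative sketch (the clocks, the 95% figure, and the two-minute bias are taken from the thread; everything else is assumption), measuring mean absolute error on a 12-hour dial over one day:

```python
import random

# Sketch: compare three imperfect clocks by mean absolute error (MAE)
# over a simulated day. The stopped clock is stuck at 12:00; the biased
# clock runs two minutes fast; the "stochastic" clock is exactly right
# 95% of the time and wildly wrong otherwise.

random.seed(0)
MINUTES_IN_DAY = 24 * 60
DIAL = 12 * 60  # a 12-hour dial wraps every 720 minutes

def error(true_min, reported_min):
    """Smallest absolute difference on a 12-hour dial, in minutes."""
    d = abs(true_min - reported_min) % DIAL
    return min(d, DIAL - d)

stopped_err, biased_err, stochastic_err = [], [], []
for t in range(MINUTES_IN_DAY):
    stopped_err.append(error(t, 0))          # always reads 12:00
    biased_err.append(error(t, t + 2))       # consistently 2 min fast
    # right 95% of the time, otherwise a uniformly random reading
    reading = t if random.random() < 0.95 else random.randrange(DIAL)
    stochastic_err.append(error(t, reading))

print(f"stopped clock MAE:    {sum(stopped_err) / len(stopped_err):.1f} min")
print(f"biased clock MAE:     {sum(biased_err) / len(biased_err):.1f} min")
print(f"stochastic clock MAE: {sum(stochastic_err) / len(stochastic_err):.1f} min")
```

The stopped clock averages roughly 180 minutes of error, the biased clock exactly 2, and the 95%-right clock lands in single digits: both imperfect clocks carry real information, the stopped one carries none.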
I guess I just don't think LLMs are right often enough, or close enough
It depends a ton on what purpose they are serving and how they are designed.
If you're asking an LLM, with extra randomness and no internal source retrieval, for some basic facts, it's going to be trash.
If you use it to improve your spam filter it will be amazing at it.
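The earlier point that a spam filter "barely right two thirds of the time" still saves effort can be shown with back-of-the-envelope arithmetic. The numbers here are hypothetical, not from the thread; the point is only that mistakes cheaper to fix than reading everything still come out ahead:

```python
# Hypothetical sketch: a classifier that is right only two thirds of the
# time can still reduce how much mail you have to look at, assuming
# rescuing a false positive costs about the same as reading a message.

TOTAL = 300          # messages per day (assumed)
SPAM_RATE = 0.5      # half of them are spam (assumed)
ACCURACY = 2 / 3     # filter is right two thirds of the time

spam = TOTAL * SPAM_RATE
ham = TOTAL - spam

caught_spam = spam * ACCURACY            # spam correctly filtered away
missed_spam = spam - caught_spam         # spam still in the inbox
false_positives = ham * (1 - ACCURACY)   # good mail you must rescue

read_without_filter = TOTAL
read_with_filter = ham + missed_spam + false_positives

print(f"messages handled without filter: {read_without_filter:.0f}")
print(f"messages handled with filter:    {read_with_filter:.0f}")
```

Under these assumed numbers you handle 250 messages instead of 300: a modest win, but a win, which is the thread's point about stochastic tools being useful without being reliable.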
SUCH a good analogy. I don't think it's been explored enough what a bad word/concept "hallucination" is for describing when these things "make a mistake"
@quirk Today I asked an LLM about some type-checking stuff while trying to refactor some code. It did help a lot, although half of what it said was wrong, and it showed me one nicer solution sort of by accident. It was frustrating but still useful. Here the type checker showed me the truth. And googling some of these things is hard because I don't know the terminology to use. So it can sometimes help for inspiration, but it definitely needs to be verified. That said, the last few times I have discussed things with ChatGPT I think it has been quite useless, even 4o. Just spewing text without information.