Ask HN: Are LLMs just expensive search and scripting tools? Is it that simple?
Can everything LLMs do be summarized as a (currently) really expensive search that lets you express nuanced queries and then script the output of that search? Why or why not?
Take code. In a sense, StackOverflow is about finding a code snippet for an already-solved problem, and autocomplete does the same kind of search.
Take generative text. In a sense that’s the equivalent of making a query and then aggregating the many results into one phrase. You could imagine the bot searching 1,000 websites, taking the average of the answers to the query, and outputting the result.
Does every LLM use case fit the following pattern?…
query -> LLM does its work -> result -> script of result (optional)
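A minimal sketch of that pattern in Python, with a stand-in llm() function (my placeholder, not any real API) so the pipeline runs end to end:

    def llm(prompt: str) -> str:
        # stand-in for the real model call (hosted API or local inference)
        return f"(model answer to: {prompt})"

    def pipeline(query: str) -> str:
        result = llm(query)            # query -> LLM does its work -> result
        return result.strip().upper()  # script of result (optional post-processing)

    print(pipeline("summarize this thread"))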
No, LLMs are not 'search'. Search, as in Google or a database query, is deterministic: are there results for query X? If there are, we return them; if there aren't, we return nothing.
LLMs do not work that way. An LLM has no conception of facts: any query you make to it produces an output, and the quality of that output depends on the training data. For high-probability outputs you might think the LLM is returning the correct 'facts'; for low-probability outputs you might think it is hallucinating.
LLMs are not search. They are a fundamentally different thing from search. Most code is 100% deterministic: the program executes exactly in order. LLMs are not 100% deterministic.
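A toy contrast of those two behaviours (my own framing, not how any production system is built): a lookup either hits or misses, while sampling from a probability distribution always emits something.

    import random

    index = {"capital of france": "Paris"}

    def search(query: str):
        return index.get(query)  # deterministic: a stored result or None

    def sample_next_word(distribution: dict) -> str:
        words = list(distribution)
        weights = [distribution[w] for w in words]
        return random.choices(words, weights=weights)[0]  # always returns something

    print(search("capital of france"))   # Paris
    print(search("capital of narnia"))   # None, nothing to return
    print(sample_next_word({"Paris": 0.9, "Lyon": 0.09, "Narnia": 0.01}))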
It's even simpler. It's just 1's and 0's.
You're being reductive to the point of saying "LLMs are an algorithm like autocomplete or a search engine, therefore they're the same."
That's not how it works. They're different approaches to handling the same inputs.
i would totally agree that they’re different approaches
i wouldn’t conclude “therefore they’re the same”. they’re clearly not the same
if it’s a different approach to search and scripting, does that not mean it is a kind of search and scripting?