Google vs. ChatGPT
I started using ChatGPT about three months ago, not long before GPT-4 was announced, and I've been using it ever since: for reference on various topics, for argumentation (asking for counterarguments, for example), to understand SQL queries, and to get feedback on my writing.
Although I'm not even a beginner in Data Science or Machine Learning, I do have a rough understanding of how these LLMs work.
They are basically mathematical formulas created to produce a sequence of words based on some input. They aren't focused so much on correctness/truth but rather on sounding correct.
An LLM isn't a database of information, but it does "know" things based on the data it was trained on.
Here's how GPT-4 explains it (so meta!):
"…the parameters allow GPT-4 to generate accurate and contextually relevant information, giving the impression that the model has knowledge. However, it's crucial to understand that this knowledge is a byproduct of the model's thorough training and ability to understand textual patterns, rather than the model actively possessing a database of information."
These parameters are the values that make up the mathematical formula that is the LLM.
"Parameters in the context of GPT-4 and similar models refer to the numerical values within the artificial neural network that are adjusted during the training process. These parameters include weights and biases associated with neurons and connections in the network. The large number of parameters in GPT-4 allows it to capture a vast array of patterns and associations in the text, which helps produce detailed and accurate responses, making it seem the model 'knows' information. However, this knowledge is essentially the model's ability to recognize and generate relevant patterns based on its training."
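To make "weights and biases" concrete, here's a minimal sketch of a single artificial neuron in Python. This is nothing like GPT-4's actual architecture, and the values are made up; the point is just that parameters are plain numbers that training adjusts, and a large LLM has billions of them.

```python
# A single artificial neuron, the smallest building block of a
# neural network. Its parameters are just numbers: one weight per
# input, plus a bias. Training nudges these numbers until the
# network's outputs look right.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, shifted by the bias...
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through a nonlinearity (ReLU here).
    return max(0.0, activation)

# Made-up values for illustration; in a real model they come out
# of the training process, nobody picks them by hand.
weights = [0.8, -0.5, 0.3]
bias = 0.1

print(neuron([1.0, 2.0, 3.0], weights, bias))  # ~0.8
```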
That is why, whenever the model doesn't know something, it makes things up. It "hallucinates".
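A toy example makes the point. Generation is just repeatedly sampling a plausible next word; the sketch below fakes the probabilities with a hardcoded table (a real LLM computes them from its parameters instead), and notice that nothing in the loop ever checks whether the output is true:

```python
import random

# Toy "language model": for each word, a made-up probability
# distribution over possible next words. A real LLM computes these
# probabilities with its trained parameters instead of looking them
# up, but the generation loop is conceptually the same.
NEXT_WORD_PROBS = {
    "the":   {"Bible": 0.4, "model": 0.4, "verse": 0.2},
    "Bible": {"says": 0.7, "quotes": 0.3},
    "says":  {"that": 0.6, "nothing": 0.4},
}

def generate(word, steps):
    output = [word]
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(word)
        if probs is None:
            break
        # Sample the next word in proportion to its probability.
        # Plausibility is the only criterion; nothing here checks
        # whether the resulting sentence is *true*.
        word = random.choices(list(probs), weights=list(probs.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the", 3))  # e.g. "the Bible says that"
```

The loop always produces something that fits the patterns; when the patterns don't encode the fact you asked about, the plausible-sounding output is simply wrong.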
This tells me that, of all the potential use cases, using an LLM as a reference is not the best one.
In some situations it is trivial to fact-check the information provided by the LLM. For example, questions about the Bible are easy to check: it's a matter of making sure the verse it quoted actually exists.
Or one can just ask directly for references (book names, paper citations, etc.).
But not everything is that easy to fact-check. And if I'm going to have to Google the responses to validate correctness, I might as well Google directly in the first place.
So why do I, and probably many others (as seen on Twitter and reported by friends), keep going back to ChatGPT for reference when Googling is probably the better choice for getting the correct answer?
Paradox of choice: Google gives the "raw" information; it still requires you to parse and process it. And it gives an avalanche of information, not all of it relevant. Nobody has time to check the 2nd page of Google results; we established that years ago. In fact, I don't even know if there's a 2nd page of results anymore.
Chain of thought: since ChatGPT uses previous prompts and responses as part of its context, it recreates the feel of a conversation that connects one thought to the next, keeping the record at hand rather than a history of searches one has to remember.
Tailored responses: while it does require good prompts, ChatGPT's responses are custom-made for the particular question. And there's probably a sense of participation, since the better the prompt, the better the response. There's also the ability to reference previous prompts/responses and ask it to connect seemingly unrelated topics, maybe even explaining one in terms of the other, or to come up with examples on the spot.
Human-like understanding: one can ask questions the way one would ask another human, whereas Google requires converting them into keywords the search engine can use. The level of fuzziness it tolerates is much higher.
I should point out that by asking for reference I mean more involved questions like "What does the Bible say about X?" or "Explain trait safety in Rust".
There are probably more reasons, especially on the psychological front.
And maybe a good deal of novelty; maybe once it wears off we'll be back to Googling. But I don't think so, especially if LLMs keep getting better.