The purpose is to erase and replace reality. It's going to be a disaster.
The answers from Google's LLM are almost never correct or useful.
Stop saying "hallucination." AI can't hallucinate; that's a human trait. What LLMs actually do is rank the best possible match from statistical models. Yes, they often rank wrong answers highly enough that the bad data gets used. That's all it is. Can we agree to stop using the wrong terms?
How do they rank these data?
Let's pretend an LLM knows only two shapes: a circle and a square. It's then shown a triangle and asked to match it against its model. It will compare the triangle to what it has. Very likely, the square will be mathematically closer than the circle, so it will present the square. That's wrong to us, but to the model it's the best available choice. It also doesn't understand that the square is wrong, since it has nothing else to compare against. The match score will be low, since a triangle is not a square, yet the LLM will still say it's a square.
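The "closest match wins" idea above can be sketched in a few lines. This is a toy illustration, not how any real LLM works: the feature vectors (side count, total interior angle) are made-up stand-ins for the high-dimensional embeddings a real model would compare.

```python
# Toy sketch of "rank the best possible match": the model knows only
# a circle and a square, and picks whichever is nearest to the query.
# Features (sides, total interior angle in degrees) are hypothetical.
def closest_match(query, known):
    """Return the label in `known` whose feature vector is nearest to query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(known, key=lambda label: dist(known[label], query))

known_shapes = {
    "circle": (0, 360),   # no sides; 360 degrees of smooth turning
    "square": (4, 360),   # four sides; interior angles sum to 360
}
triangle = (3, 180)       # three sides; interior angles sum to 180

print(closest_match(triangle, known_shapes))  # → square
```

Note that the function always returns *something*, and never says "I don't know a triangle." The best available answer is confidently presented even when the match is poor, which is exactly the failure mode being discussed.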
I understand your point, but this is oversimplifying the issue.
There is way more data about Ricky Gervais's four-decade-old relationship than about the TV show he produced.
The AI chose to treat his fictional character's life as real. The funny thing is that it uses his real name but replaces his real girlfriend's name with that of the fictional dead wife. This is why "hallucination" comes to mind. It doesn't mean the AI is thinking, but it behaves like a delusional person hallucinating things that aren't real.
Kinda like baking a 6" cake for 3 hours. AI is shit. Bring more of it on; I can't wait till planes start crashing due to AI.