You know, when I said 'Just make some shit up' I was, like, ironically poking fun at Jordan for doing exactly that- HOW MUCH OF THIS DID YOU USE!? Like... You trained it on this, but this says 'if you don't know a thing just make some shit up' and now you don't understand why it's making shit up!? Did you even read this!? Like... I wasn't serious about just making shit up! Don't teach it that!
Hym "Please tell me the hallucinations are stemming from you training it on THIS and THIS ironically or sarcastically saying to 'just make shit up.' Did you... Did you read ANY of this!? Or did just dump everything in like garbage into a compactor?🤦 ♂️ God, are your serious? Please tell me that THAT is not happening. Like... Is THIS in the training data... And if you remove the part that says to 'just make shit up'... Does it fix the hallucinations? Like, do you know that the hallucinations are intrinsic to the large language model or did you train it to 'make shit up'when it doesn't know the answer? Because you're obviously not supposed to train it to do that part, fuck-face. Why would I actually want that? It would be way more helpful is I wasn't in this limbo where I both am and am not the creator of A.I. AND it would be great if I didn't have all this fluid IN MY FUCKING SKULL!"
When an AI (e.g., ChatGPT) begins stating misinformation with 100% confidence that what it is saying is correct.
"Tried to get ChatGPT to write me some code earlier, it ended up hallucinating one of the main functions."
Hallucinative is the same as hallucination, but hallucinative is used for those who have suffered comas or brain damage.
Hallucinative