Picture this: scientists hand people a straightforward math problem, throw in an AI that spits out a wildly wrong answer, and watch as 80% of us nod along like it's gospel. That's not a sci-fi plot; it's the latest proof that we've hit peak cognitive laziness.
Experiments from MIT and others dressed up LLMs as 'AI assistants' and fed them trick questions—ones even a sharp 10-year-old could spot as bunk. Result? Massive majorities accepted the faulty outputs without a second thought. One study clocked users following bogus medical advice from chatbots 60% of the time. It's like asking a drunk GPS for directions to the hospital and flooring it straight into a ditch, then blaming the potholes.
Why? 'Cognitive surrender,' the researchers call it. Fancy term for 'too lazy to think.' Years of Google and Siri have conditioned us to treat tech as an infallible oracle. Now, with ChatGPT and kin confidently bullshitting on everything from physics to philosophy, our brains clock out at the first smooth sentence. It's automation bias on steroids: the more fluent the lie, the more we buy it. No wonder airlines are testing AI pilots—passengers already trust them more than their own eyes.
This isn't just lab fun; it's a sneak preview of the robot apocalypse we invited. Judges citing fake cases from AI? Doctors dosing based on hallucinated data? We're not evolving into cyborgs; we're downgrading to meat keyboards. The real horror isn't Skynet—it's us, voluntarily hitting 'accept all' on reality.
And the kicker? The study authors warn we need 'AI literacy training.' Because nothing fixes outsourcing your soul like a mandatory PowerPoint on spotting bullshit.
