In 1999, two psychologists, David Dunning and Justin Kruger, described what is now known as the Dunning-Kruger effect: a cognitive bias whereby people with low ability in a particular area overestimate their skills and knowledge, because they lack the self-awareness to assess their own competence accurately against that of others. The US president is a textbook example, but so too are many inhabitants of Silicon Valley, especially the more evangelical boosters of AI such as Elon Musk and OpenAI’s Sam Altman.

Both luminaries, for example, are on record as predicting that AGI (artificial general intelligence) may arrive as soon as next year. But ask what they mean by that and the Dunning-Kruger effect kicks in. For Altman, AGI means “a highly autonomous system that outperforms humans at most economically valuable work”. For Musk, AGI is “smarter than the smartest human”, which boils down to a straightforward intelligence comparison: if an AI system can outperform the most capable humans, it qualifies as AGI.

These are undoubtedly smart cookies. They know how to build machines that work and corporations that may one day make money. But their conceptions of intelligence are laughably reductive, and revealing too: they are interested only in economic or performance metrics. It suggests that everything they know about general intelligence (the kind that humans have from birth) could be summarised in 95-point Helvetica Bold on the back of a postage stamp.

In that respect, they accurately represent a tech industry that rebranded machine learning as AI in the hope of conning mainstream media into believing that a rather mundane but interesting technology was about something really important, namely intelligence, without having to explain what that term actually meant. As a marketing stunt, it turned out to be a stroke of genius. But it also presented a hostage to fortune, ...