AI: Accelerated Incompetence

https://news.ycombinator.com/rss Hits: 4
Summary

In software engineering, over-reliance on LLMs accelerates incompetence. LLMs can't replace human critical thinking. The text in this essay was written without any use of AI.

[Figure: a speculative inverse correlation between LLM dependence and IQ]

By now much ink has dried on the wave of AI and LLMs which crashed upon the public consciousness in late 2022. As an experienced software engineer, I'd like to speak to two troubling engineering perspectives I've observed on LLMs.

"LLMs are my friend"

I don't think anyone believes that a computer program is literally their companion, so let's address the euphemistic intent of the above phrase: namely, that an LLM confers magnificent benefits upon its user. Engineers who view LLMs as allies invariably prioritize, or feel pressured to prioritize, velocity; for them, production trumps perspicacity. While it's true that LLMs can deliver a lot of code quickly, their use carries a long tail of risks.

Risks of using LLMs

Output Risk. An LLM can give output that is blatantly incorrect, for example code that won't compile. More likely and more dangerously, it can give output that is subtly and undetectably wrong, like logic bugs (a sketch of one such bug follows below). The risk is elevated if the prompter is not qualified to evaluate the output, for example project managers prompting for source code.

Input Risk. An LLM does not challenge a prompt that is leading[1], or whose assumptions are flawed, or whose context is incomplete. Example: an engineer prompts, "Provide a thread-safe list implementation in C#" and receives 200 lines of flawless, correct code. It's still the wrong answer, because the question should have been "How can I make this code thread-safe?", whose answer is "Use System.Collections.Concurrent" and one line of code (a sketch follows below). The LLM is not able to recognize an instance of the XY problem[2] because it was not asked to.

Future Velocity. This is your typical "tech debt" argument, but more urgent. AI can degrade the quality of your codebase so fast. Have you ever seen the fruits of hoarding...
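To make the "subtly and undetectably wrong" failure mode concrete, here is a hypothetical C# sketch, not taken from the essay: a binary search of the kind an LLM plausibly emits, which compiles cleanly and passes casual testing yet hides a classic overflow bug.

    using System;

    class OutputRiskSketch
    {
        // Plausible-looking generated code: compiles and works on small inputs,
        // but the midpoint computation overflows once lo + hi exceeds
        // int.MaxValue, i.e. on arrays of more than about a billion elements.
        static int BinarySearch(int[] sorted, int target)
        {
            int lo = 0, hi = sorted.Length - 1;
            while (lo <= hi)
            {
                int mid = (lo + hi) / 2; // subtle bug; safe form: lo + (hi - lo) / 2
                if (sorted[mid] == target) return mid;
                if (sorted[mid] < target) lo = mid + 1;
                else hi = mid - 1;
            }
            return -1;
        }

        static void Main()
        {
            Console.WriteLine(BinarySearch(new[] { 1, 3, 5, 8 }, 5)); // prints 2
        }
    }

A prompter who isn't qualified to evaluate the output has little chance of catching this: the code looks finished, and every casual test passes.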
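And a minimal sketch of the better answer to the XY-problem example, assuming the engineer's actual code was filling a List<int> from multiple threads. ConcurrentQueue<T> is one plausible stand-in from System.Collections.Concurrent; which concurrent collection actually fits depends on usage details the leading prompt never surfaces.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class InputRiskSketch
    {
        static void Main()
        {
            // Before (not thread-safe): var items = new List<int>();
            // After: the one-line answer the better question would have produced.
            var items = new ConcurrentQueue<int>();

            // Many threads can now add concurrently, with no locks and no
            // hand-rolled 200-line "thread-safe list".
            Parallel.For(0, 1000, i => items.Enqueue(i));

            Console.WriteLine(items.Count); // 1000
        }
    }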

First seen: 2025-05-28 12:00

Last seen: 2025-05-28 16:01