Have you noticed that some ideas in AI take off not necessarily because they're better, but because they align better with our machines? That's the hardware lottery: if your approach happens to align with the dominant hardware and software stack, you hit the jackpot; otherwise, better luck next time. Yet this lottery isn't a one-time draw. As Sara Hooker puts it, "we may be in the midst of a present-day hardware lottery." The catch? Modern chips zero in on DNNs' commercial sweet spots. They are exceptionally good at cranking through heavy-duty MatMuls and the garnish ops, such as non-linear functions, that keep you from getting hit by Amdahl's Law. Pitch an off-road idea, and it is at best a high-risk long shot. The "winners" in research often align with what our tools run best, and that tendency can skew the trajectory of technological progress. This post draws attention to how generality and programmability are often underemphasized on today's accelerators, a pattern that risks stifling future algorithmic innovation unless it is actively addressed.

The Reign of Matrix Multiplication

It's no coincidence that almost every AI breakthrough involves some kind of NN crunching numbers on xPUs. These chips have made a particular form of MatMul the de facto currency of AI. Crucially, the performance gains have come not just from accelerating MatMul itself, but from the realization that AI algorithms are resilient to reduced precision. Because ML frameworks are built around tensor operations, any problem reformulated as a sequence of MatMul ops instantly taps decades of compiler optimizations and accelerator infrastructure. In fact, we practice a pragmatic "MatMul-reduction," much like NP-reductions, converting complex tasks into chained MatMuls (a concrete sketch follows below). But we haven't shown that all aspects of intelligence reduce neatly to MatMul, and by rewarding only MatMul-friendly ideas, we risk creating powerful yet brittle approximations that trap us in a local minimum.

Researchers naturally gravitate...
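To make "MatMul-reduction" concrete, here is a minimal NumPy sketch (my illustration, not code from the post): it lowers a 2D convolution to a single matrix product via the standard im2col trick, then repeats the reduction in float16 to illustrate the reduced-precision resilience mentioned above. All function names are hypothetical.

```python
import numpy as np

def conv2d_direct(x, w):
    """Naive 'valid' 2D convolution (cross-correlation) via sliding-window loops."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def conv2d_as_matmul(x, w):
    """The same convolution, MatMul-reduced via im2col:
    unroll every kh*kw patch into a row, then do one matrix-vector product."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Build the (oh*ow, kh*kw) patch matrix.
    cols = np.empty((oh * ow, kh * kw), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    # One MatMul replaces the nested multiply-accumulate loops, and thus
    # inherits every GEMM optimization the hardware/software stack offers.
    return (cols @ w.ravel()).reshape(oh, ow)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32)).astype(np.float32)
w = rng.standard_normal((3, 3)).astype(np.float32)

ref = conv2d_direct(x, w)
gemm = conv2d_as_matmul(x, w)
print(np.allclose(ref, gemm, atol=1e-5))  # True: same math, MatMul form

# Reduced precision: rerunning the reduction in float16 stays close to the
# float32 answer, a small-scale stand-in for the precision resilience that
# lets accelerators trade bits for throughput.
gemm_fp16 = conv2d_as_matmul(x.astype(np.float16), w.astype(np.float16))
print(np.max(np.abs(gemm_fp16.astype(np.float32) - ref)))  # small, on the order of 1e-2
```

The point of the sketch is the shape of the transformation, not the specific op: once the sliding window is flattened into a patch matrix, the "interesting" computation is a plain GEMM, which is exactly the form today's xPUs reward.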