I got an Nvidia GH200 server for €7.5k on Reddit and converted it to a desktop

Introduction

Running large language models locally has always been a game of compromise. You either spend $10,000+ on consumer GPUs that can barely handle 70B-parameter models, or you dream about enterprise hardware you'll never afford. The Grace Hopper platform, Nvidia's unified CPU-GPU superchip architecture, represents the kind of dream-rig AI infrastructure r/LocalLLaMA drools over, with systems typically costing well over $100,000 and available exclusively to data centers and research institutions.

So when I stumbled across a Grace Hopper system being sold for €10K on Reddit, my first thought was "obviously fake." My second thought was "I wonder if he'll take €7.5K?"

This is the story of how I bought enterprise-grade AI hardware designed for liquid-cooled server racks, converted it to air cooling, survived multiple near-disasters (including GPUs reporting temperatures of 16 million degrees), and ended up with a desktop that can run 235B-parameter models at home. It's a tale of questionable decisions, creative problem-solving, and what happens when you try to turn datacenter equipment into a daily driver.

If you've ever wondered what it takes to run truly large models locally, or if you're just here to watch someone disassemble $80,000 worth of hardware with nothing but hope and isopropanol, you're in the right place.

The Deal

Early this year, while browsing r/LocalLLaMA/new, I came across a ridiculously good deal. How good?
These were the specs for the server offered for €10K, and a serious upgrade to my 4x RTX 4090 rig:

Specs:
- 2x Nvidia Grace Hopper Superchip
- 2x 72-core Nvidia Grace CPU
- 2x Nvidia Hopper H100 Tensor Core GPU
- 2x 480GB of LPDDR5X memory with error-correction code (ECC)
- 2x 96GB of HBM3 memory
- 1152GB of total fast-access memory
- NVLink-C2C: 900 GB/s of bandwidth
- Programmable from 1000W to 2000W TDP (CPU + GPU + memory)
- 1x high-efficiency 3000W PSU, 230V to 48V
- 2x PCIe Gen4 M.2 22110/2280 slots on board
- 4x FHFL PCIe Gen5 x16

UPDATE: Since I bought this, DDR5 ...
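As a quick sanity check on the listing's figures, the "1152GB of total fast-access memory" line is just the sum of the CPU and GPU memory pools across both superchips. A minimal sketch, using only numbers from the spec list above:

```python
# Sanity check: total "fast-access" memory across the two Grace Hopper superchips.
# All figures are taken directly from the seller's spec list.
superchips = 2
lpddr5x_per_chip_gb = 480   # Grace CPU memory (LPDDR5X with ECC), per superchip
hbm3_per_chip_gb = 96       # Hopper GPU memory (HBM3), per superchip

total_gb = superchips * (lpddr5x_per_chip_gb + hbm3_per_chip_gb)
print(total_gb)  # 1152
```

Because the GPU can address the Grace CPU's LPDDR5X over the 900 GB/s NVLink-C2C link, the whole 1152GB pool is usable for model weights, which is what makes 235B-parameter models feasible on this box.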
