Jeremy Howard taught AI and helped invent ChatGPT. He fears he's failed

https://news.ycombinator.com/rss Hits: 1
Summary

In late 2017, Jeremy Howard was working on his couch in San Francisco, experimenting with a technology that would ultimately change the world and lead to tools like ChatGPT.

"Natural language processing" (NLP) was one of the great challenges in artificial intelligence (AI) at the time. AI systems were generally bad at navigating sentences or paragraphs of languages other than computer code. Basically, they couldn't read and write.

But the Melbourne-born tech entrepreneur and data scientist had an idea to solve this problem and, ultimately, give AI access to the vast store of recorded human knowledge.

And that's exactly what's happened. Only five years later, the world got ChatGPT. AI models trained on the contents of the internet have become our magical writing and research tools. Some even believe these large language models (LLMs) show signs of human-level intelligence.

And yet Mr Howard, who's credited with making an important contribution to NLP, worries his life's work is a failure. He's watched AI technology fall under the control of a few big companies.

"It's looking like we might have failed," he says. "I feel like it's going to be even worse than we thought."

The 'goosebumps' moment when AI learned to read

So, back to late 2017. Mr Howard was on his couch, thinking about NLP. Recent advances had shown that machine learning AI systems trained on huge datasets of labelled images could become adept at recognising the objects in those images.

He wondered if the same approach could work for reading and writing. Could an AI trained on billions of words learn the meaning of these words in various contexts?

"And so the really big bunch of words I picked was the entirety of English Wikipedia." That was about 3 billion words.

Mr Howard trained a large language model (LLM) t...
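The technique the summary describes is language-model pre-training: feed a model a large unlabelled corpus and train it to predict the next word, so it learns how words behave in context. The sketch below is a hypothetical illustration of that idea using the fastai library (which Mr Howard co-created), not his original Wikipedia experiment; it assumes fastai is installed and uses the small IMDb sample dataset as a stand-in for the roughly 3 billion words of English Wikipedia mentioned above.

# Minimal sketch: pre-train a language model on a plain-text corpus so it
# learns to predict the next word in context. The IMDb sample corpus here is
# only a stand-in for the Wikipedia-scale corpus described in the article.
import pandas as pd
from fastai.text.all import *

# Download a small sample corpus of movie-review text.
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')

# Build dataloaders for language modelling: the target for each sequence is
# the same text shifted by one token (next-word prediction).
dls_lm = TextDataLoaders.from_df(df, text_col='text', is_lm=True, valid_pct=0.1)

# Create an AWD-LSTM language-model learner and fine-tune it on the corpus.
learn = language_model_learner(dls_lm, AWD_LSTM, drop_mult=0.3,
                               metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 2e-2)

# After training, the model can continue a prompt word by word.
print(learn.predict("The movie was", n_words=20, temperature=0.75))

On a toy corpus like this the continuations are rough, but the same recipe scaled up to billions of words is what lets a model pick up the meaning of words from the contexts in which they appear.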

First seen: 2025-03-29 19:29

Last seen: 2025-03-29 19:29