You’re right, there were Go-playing computer programs that played a decent game before AlphaGo. Just nowhere near professional level. And not using any of the AI techniques used today. And with no clear path forward other than waiting for hardware to get bigger and faster. Until DeepMind revealed AlphaGo in 2015.
Well, that’s a nice story, but it’s not what happened. First of all, although Google bought DeepMind in 2014, let’s please stick to crediting the right people. It wasn’t Google.
And second, there was no fundamental breakthrough at that time. The foundations had been laid back in the 1960s, but then languished. In more recent times, machine learning and the newer concept of deep learning had been used with neural networks for years. There had been recent successes (see ImageNet - Wikipedia), which DeepMind took as encouragement to try these techniques on Go. You can read about the 2018 Turing Award here (Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award), where Bengio, Hinton and LeCun were honored for their work. You’ll note that the breakthroughs were mostly in the late 1980s and 1990s.
Producing AlphaGo involved training it on lots of human games, then having it improve by playing itself. No breakthroughs needed, just applying known techniques to a new problem where they hadn’t been tried before (although that was certainly significant work). The results were indeed spectacular, and stunned the world (or at least the small part of the world that cared, like me) when in March 2016 AlphaGo won its match against Lee Sedol, a top-ranked professional Go player. People were still trying to wrap their heads around this when in May 2017 AlphaGo (slightly tweaked) proved it was no fluke by beating Ke Jie, then ranked best in the world.
Oh, it absolutely does. If you pay attention, you’ll notice that the answers were actually just lying around for years waiting for somebody to notice. Sure, the implementation required some hard work and no doubt some new cleverness, but it was already perfectly doable if somebody had tried.
This is underscored by the fact that less than a year later, in October 2017, DeepMind revealed that they had tried something a little different: starting the training with no human input at all, just the rules of the game, and leaving the AI to train itself entirely through trial and error. Again, no fundamental breakthroughs, just playing around with the tools at hand.
The result, AlphaGo Zero, was again spectacular: it absolutely crushed the original AlphaGo. Its successor, AlphaZero, then happily trained itself to play chess (and shogi) as well, and absolutely crushed the best chess-playing programs available. This 2018 blog post describes what they found: (AlphaZero: Shedding new light on chess, shogi, and Go). It’s an easy read.
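The “just the rules, trial and error” idea scales all the way down. Here’s a toy sketch of the same principle; to be clear, this is nothing like DeepMind’s actual neural-network-plus-tree-search setup, just classic tabular temporal-difference learning (the kind described in Sutton and Barto’s textbook), where a program starts from nothing but the rules of tic-tac-toe and improves purely by playing itself:

```python
import random

# All eight winning lines on a 3x3 board (cells indexed 0..8)
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, 'draw' if the board is full, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

V = {}  # state -> estimated probability that X eventually wins

def value(board):
    """Value of a state from X's perspective; unseen states start at a neutral 0.5."""
    w = winner(board)
    if w == 'X':
        return 1.0
    if w == 'O':
        return 0.0
    if w == 'draw':
        return 0.5
    return V.setdefault(board, 0.5)

def play_one_game(alpha=0.3, epsilon=0.1):
    """One self-play game with epsilon-greedy moves, then a TD(0) update pass."""
    board = (' ',) * 9
    player = 'X'
    history = [board]
    while winner(board) is None:
        children = [board[:i] + (player,) + board[i + 1:]
                    for i in range(9) if board[i] == ' ']
        if random.random() < epsilon:
            board = random.choice(children)       # explore a random move
        else:
            pick = max if player == 'X' else min  # X maximizes V, O minimizes it
            board = pick(children, key=value)
        history.append(board)
        player = 'O' if player == 'X' else 'X'
    # TD(0): nudge each visited state's value toward its successor's value,
    # so the final outcome propagates back through the whole game.
    for s, s_next in zip(history[:-1], history[1:]):
        V[s] = value(s) + alpha * (value(s_next) - value(s))

random.seed(0)
for _ in range(20000):
    play_one_game()
```

After a few thousand self-play games the value table steers both sides toward sensible play, with no human games and no hand-written strategy. Swap the dictionary for a neural network, add tree search, and that loop is the general shape of what AlphaZero does.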
So, yeah, who is to say which of the many techniques already invented might show stunning results when applied to autonomous driving? Not me. Not you. When will it happen? No telling. Could be next week. Could be never. My opinion is that predicting when requires a certain arrogance that is very hard to justify.
Clarke’s observation is really just that old experts are likely to think in terms of tools they know being applied in ways they know. They know their limitations. It’s the new guys who see that maybe the same stuff that is being used to recognize cat pictures on the internet can (if you squint at it just right) become the Go champion of the world. And maybe drive a car too.
-IGU-