Not directly related to investing, but for those interested in how the current AI boom got started, including Nvidia’s past role as one of the 3 pieces that came together (and what that means today and for the future), this is a really great article:
The 3 pieces:
- Prof. Fei-Fei Li at Princeton (then Stanford) created a large database of 14 million labeled images, leveraging Amazon’s Mechanical Turk to source cheap labor to label in 2 years what they estimated would have taken the team 18.
- Geoffrey Hinton (across 4 different universities) and others developed the backpropagation technique. One of his researchers went on to Bell Labs and eventually trained a system to recognize handwriting on checks, which banks adopted in the mid-1990s. But it struggled with images larger and more complex than checks.
- Jensen Huang led Nvidia to create CUDA, which splits big computing tasks into smaller separable chunks, each of which can be processed simultaneously by Nvidia’s GPUs.
Huang argued that the simple existence of CUDA would enlarge the supercomputing sector. This view was not widely held, and by the end of 2008, Nvidia’s stock price had declined by seventy percent…
In 2009, one of Hinton’s groups used an Nvidia system for speech recognition. He asked Huang for a free system to work on images, but Nvidia said no. His group nevertheless acquired the GPUs themselves, and AlexNet was built and trained on two 512-core GTX 580 cards.
Prof. Fei-Fei Li was running a competition, called ImageNet, using a smaller set of images (10% of the database her team had built). The first two years drew interest, but not great results. In the third year, 2012, AlexNet destroyed the competition, and CNNs (convolutional neural networks) became the talk of the AI world.
Hinton formed a company that Google acquired just months later for $44 million.
Nvidia went on to become the largest company in the world by public market cap.
Even so, more than a decade later, many AI researchers still thought that AI systems had to mimic how humans think and solve problems. In 2019, Rich Sutton wrote “The Bitter Lesson,” which, citing examples, concluded:
We want AI agents that can discover like we can, not which contain what we have discovered.
Which basically means systems that look at a lot of data and develop their own conclusions about it, rather than humans programming computers with what we think the conclusions are.
Note that neither of the links I’ve provided requires a computer background to understand. Reading them may help you better understand what Nvidia offers today.