What's the future for generative AI?

A good layman’s explanation of how neural networks work

What’s the future for generative AI? - The Turing Lectures with Mike Wooldridge

There is an old joke that the specialist knows more and more about less and less until, as an expert, he knows everything about nothing. Clearly Wooldridge knows a lot about AI, but he is missing a lot. I’m going to point out some cases (the numbers are approximate time stamps):

  1. Generative AI comes up with stuff it was never trained on.

This is a well-known phenomenon in complex systems called emergent properties (the first sketch below is the classic demonstration).

  2. It gets stuff wrong a lot 32:30
    It’s making its best guess 33:40 (the second sketch below illustrates the guessing)
    You have to fact check 35:15

Some 40 years ago, Richard Feynman explained how science works:

  • Make an educated guess
  • Apply the scientific method to test its validity

You have to give Wooldridge credit for suggesting fact checking.
  3. GAI should be able to do anything a human being could do 48:40
    Load up a dishwasher
    And it won’t happen any time soon 49:05

That was before the Nvidia presentation with a handful of humanoid robots, about three months after the Wooldridge lecture was published.
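A side note on emergent properties: the textbook demonstration is Conway’s Game of Life, where two simple rules about neighbour counts produce structures nobody programmed in. This is just a minimal Python sketch of that idea, not anything from the lecture; the glider pattern and the few generations printed are the standard demo.

```python
# Conway's Game of Life: cells live or die based only on neighbour counts.
# Gliders that travel across the grid "emerge" without ever being written
# into the rules -- the classic emergent property in a complex system.

def step(cells):
    """One generation: a live cell survives with 2-3 live neighbours;
    an empty cell is born with exactly 3."""
    neighbours = {}
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = (x + dx, y + dy)
                    neighbours[key] = neighbours.get(key, 0) + 1
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in cells)}

# A standard glider: watch it drift across the grid over the generations,
# a behaviour found nowhere in the two rules above.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"gen {generation}: {sorted(cells)}")
    cells = step(cells)
```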
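And on the “best guess” point: at bottom, a language model scores every candidate next token, converts the scores into a probability distribution, and samples one. The tiny vocabulary and scores below are invented purely for illustration; real models do this over tens of thousands of tokens with learned weights.

```python
import math
import random

def softmax(scores):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates to follow "The capital of Australia is ..."
vocab = ["Sydney", "Canberra", "Melbourne", "Vienna"]
logits = [2.0, 2.3, 1.5, -1.0]  # invented scores, just for the demo

probs = softmax(logits)
guess = random.choices(vocab, weights=probs, k=1)[0]

for token, p in zip(vocab, probs):
    print(f"{token:10s} {p:.2f}")
print("sampled guess:", guess)
# Because the answer is sampled from a distribution, a plausible-sounding
# wrong answer ("Sydney") comes out some of the time -- which is exactly
# why you have to fact check.
```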

The Captain


A more practical talk about language models by Andrej Karpathy, former head of AI at Tesla.

[1hr Talk] Intro to Large Language Models

The Captain


Is AI close to maxed out in what it can do?

Not even close. Why would you think that?

I am not sure where the capacity limit is.

Code can only do so much, even with physical hardware hooked up.

Language models have been a huge advance. But I know https://www.rev.com/ has been working on the other end of language AI, and it has had mixed success, falling far short of the company’s hopes of six years ago.

Tesla has fallen short.

It is not for lack of trying. There are inherent problems in AI that are getting brushed aside right now. With so much money flowing in, why ruin the party?

I am a proportions guy. Odd thing, but when things do not add up, that is often obvious.