Jensen Huang says AGI could be here in 5 years, if

the definition of AGI is the ability to pass human tests

“If I gave an AI … every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I’m guessing in five years time, we’ll do well on every single one”

However, since there is as yet no standard definition of what AGI is or is not, ‘by other definitions, Huang said, AGI may be much further away, because scientists still disagree on how to describe how human minds work.’


I hope it’s not as mundane as that. To me “intelligence” means not only spitting out facts (and getting them mostly right); it also means understanding the basis for those facts and being able to project them into entirely new and different meanings, fields, and applications.

It’s the difference between observing mold in a Petri dish and correctly reporting, as Fleming did, “the bacteria near the mold died,” and projecting that forward into “maybe we can synthesize that and come up with an antibiotic,” as Florey did about a decade later when he developed penicillin into a usable drug. “The mold died” is a fact, but without intelligence it did no one any good.

Of course if/when that happens it will be the biggest revolution in human history, but methinks it’s still a ways off.


The lawsuit that Elon Musk filed against OpenAI alleges that they’ve already created AGI in the form of their Q* model, which can creatively apply mathematical concepts.

OpenAI’s Q* Algorithm: Solving Complex Math Problems - DevX



Hi Goofy - of course Huang is talking his book, but the fact that he’s in the driver’s seat on the hardware side of this AI gold rush gives him considerable credibility on the imminence (or not) of so-called ‘AGI’.

Also, he’s re-confirming what quite a few of the leading lights in the field have forecast: that some form of ‘digital intelligence’ will emerge by the end of this decade.

Yes, intelligence should mean making deductions and inferences beyond the obvious facts, but there’s also the question of whether our current - and quite limited - understanding of human intelligence is the only one that matters.
I doubt it.

Geoffrey Hinton, for instance, now says GPT-4 is already much more than a highly competent word predictor, and that the only open question about whether digital intelligence will overtake biological intelligence is the abundance and cost of the energy required to drive it.
Prof. Geoffrey Hinton - “Will digital intelligence replace biological intelligence?” Romanes Lecture


Here is the “test” that I would want a hardware company to do to establish if AI has surpassed humans. Put all the so-called AGI hardware and software in a building and request that it design and manufacture the next generation of AI hardware that is 2x the performance with no human involvement. Compare this to what a team of humans can do with or without using any AI tools.



How big a team of humans do you imagine that job will require? Do you imagine that any one human could perform every task involved? Then why expect just one to do it all? How about if AI successfully replaced just a few key individuals? Or multiple AIs, each handling one aspect?

Thousands. First you have to design, build, and test the next generation of fab tools (i.e., EUV lithography). Next you have to build all the circuit-layout and simulation tools to be compatible with the new fab process. Oh yeah, and you have to create and do test runs on the newer fab process. Now you need to architect the new chip design and iteratively simulate it to get total performance and power consumption on target, within the thermal design limits.
Finally you do the chip layout and send it to the fab, get samples back and debug, look for performance bottlenecks, and iterate once or twice to achieve the required performance.
This all assumes the existing software doesn’t really need to change; otherwise you need to optimize before you recompile and rerun a million or so regression tests.

Absolutely not.
There probably isn’t even one person who understands enough to completely explain every step.

You could choose to have many different machines each trained in its specialty if you want.

Sure, but we already use machines to help build newer machines. We use machines to search and sort through designs, simulate designs, graph test results, summarize them, categorize them, compile code, etc.



True, but all those things are 100% iterative. AI, as presently constituted, takes the masses of human-produced data and deftly summarizes and/or transforms it. It can look at millions of faces and decide which are most likely to commit a crime based on historical norms. It can give you a picture of a green superhero riding a red train across a yellow landscape. It can even find the best parts of every car ever designed and combine them into one super-duper car.

What I’m talking about is “can it look at the Wright brothers’ plane” (in its day) and say “This is what a 757 will look like in 60 years”? I think not, because it has no capacity to “think forward”; its entire foundation is “what has already been done.”

Or, more to the point, can it look at a rocket ship as currently made and say “Oh, that’s silly. In 100 years they will be shaped like an egg and will take off horizontally using divariable beeble percher thrust, and here’s how you do it.”

I don’t think so, but I do think humans will figure something out. (OK, it might not be egg-shaped, etc., but it will not be so chemically dependent and wasteful as our vertical rockets are today.)

AI will be great at many things: scanning X-rays for tumors, or painting random pictures of dogs, or writing thank-you notes to Grandma, or even improving designs for chips, but that is not intelligence. I think we’re a very long way from that.


I think that was my point. AGI-trained machines, by themselves, won’t be able to design a new chip that requires creativity in many areas. But as tools to assist humans they may be great multipliers for human effort and creativity, as they have already been in non-AI roles.
Maybe the multiplier will be larger for AI-based tools.


The history of technology is a history of multipliers for human effort. From the wheel to the screwdriver and simple wrench, from hammers to EVs, all have made life easier, better, more productive. Not one, so far as I can tell, has made it “more creative,” although they have certainly enhanced human ability to be so.

AI generated art, music, sculpture is, at least to this point, still derivative (even if occasionally wildly so) of human effort. I’m waiting to see something that isn’t.

Multiplication is just really fast addition. As far as imagination and creativity go, AI so far seems much the same: not truly original, just with a lot more mechanics underneath the surface.

Is that really the question, though?

A “human intelligence” doesn’t have to be all that creative. Many human beings aren’t. Your typical below-average high-school dropout might not be able to do any of the things that are being discussed in this thread, but they possess a “general intelligence” that would certainly count as AGI if we could reproduce it in a machine.


Actually, DeepMind’s AlphaFold has already far outstripped human capability to predict 3D protein-folding structures from 1D amino-acid sequences, in accuracy, speed, and scale.

How one’s proteins (long chains of amino acids) fold often significantly determines one’s future health and disease profile. Previously, predicting how any one protein would fold took several person-years of trial and error with lots of fancy equipment, and still yielded low accuracy and very low output.

Here’s Dr. Eric Topol on ‘3 major biology advances here that would not have been possible without AlphaFold.’


Is there any art form (or, for that matter, scientific advance) that isn’t almost entirely derivative, since that’s the nature of human innovation: taking what is already done/known and giving it a new, ‘never before’ spin?

The roots of almost every single great piece of music, art, writing, etc. can be traced back to at least one or more earlier ‘influences’; the only difference is that there we conveniently don’t scorn it as plagiarism.

Both the theory of relativity and quantum electrodynamics, similarly, are developments of several different strands of thought in ‘classical’ physics, either extending or filling gaps in earlier thinking.


I won’t doubt that that’s true, but “faster” and “more” don’t fit what I think of as “intelligence.” (More below.) In fact, I think the link you provided explains it pretty well:

Not long ago scientists might spend [2 or 3 years to define the 3-dimensional structure of a protein], relying on many advanced tools like [cryogenic electron microscopy] and crystallography. It was painstaking work and only a relatively limited number of proteins could be approached, many unsuccessfully. Now that can be done for nearly all proteins in a matter of minutes, thanks to advances in A.I. Even new proteins, not existing in nature, never previously conceived, [can now be created]

I added the bold: it’s “more” and “faster” - even quantum leaps - but I have trouble calling that “intelligence.” (I acknowledge that the unbolded sentence gives me pause, but then it still seems to require human intervention to make the leap to “undiscovered,” not to mention “undiscovered and useful.”)

I found the first comment after the article interesting:

I'm quite concerned about the label "A.I.".

The extraordinary achievements made in identifying protein folding configurations were the results of efforts made by extremely intelligent coders who designed a PROGRAM which is neither artificial nor intelligent.

The analogous model would be semiconductors. The fact that quantum computing works and tunneling occurs in non-intuitive ways makes neither the electron nor the computer intelligent.

This is not a mere question of semantics. Turing notwithstanding, the use of such monikers creates a perceived reality which has not been shown to exist.

Point well taken; perhaps my definition is a bridge too far. However, I find the distinction important. (Perhaps it’s just me.) The leap from spark-gap radio to vacuum tubes was a discontinuous jump, and from vacuum tubes to transistors likewise. (From transistors to microprocessors, perhaps, but really a microprocessor is just a gazillion transistors shrunk down and somehow affixed in a pattern that allows their use.)

For me, it was human intelligence that brought forth the vacuum tube and then the transistor; could (so-called) AI have done that? As it’s currently being deployed, I don’t think so. Someday? Perhaps; we shall see.

It’s those revolutionary jumps that I define as intelligence, but I am willing to stipulate that almost everyone, and especially the media, is happy to call improvements in “bigger, faster, better” AI.


Hi Goofy - your skepticism is well justified - even necessary, as a key prerequisite of scientific progress. I too am skeptical of much of the hype around the supposed imminence of AGI. Lots of ‘irrational exuberance’ going around, to put it mildly 🙂

However, AlphaFold does not need human intervention to actually make its predictions. Well before it was put to work, its deep neural network was trained with a combination of supervised and unsupervised learning on 100,000 known protein structures (vs. the known universe of roughly 300,000,000 terrestrial proteins).