Smarter-Than-Humans A.I. Will Likely Be Here by 2030

https://www.nytimes.com/2024/12/11/business/dealbook/technology-artificial-general-intelligence.html?unlocked_article_code=1.g04.NvpK.JIkFEx9vYdan&smid=url-share

{{ In education, Ms. Guo said, research has long shown that the thing that delivers the biggest gains in student achievement is one-on-one tutoring. “What if you can give everybody a personalized tutor?” she asked. Or personalized medical advice that is as reliable as a human doctor? The potential for artificial intelligence to democratize the availability of expertise, she said, is “something we’re really inspired by.” }}

I’ve long wanted a RoboDoc, as long as it doesn’t come with a hefty private-equity “skim”.

intercst

2 Likes

Mgmt would never allow the sale of a RoboDoc. Lease or rental only.

You’re not wrong about that. Photoshop has been SaaS for a number of years now. Ditto MS Office. Plus nearly all the big corporate suites like Slack, Zoom, etc.

I bought a new Windows 11 PC last week and Microsoft Office 2010 loaded fine. No way I’m paying $10/month for Excel.

intercst

4 Likes

Technologists: Smarter-Than-Humans A.I. Will Likely Be Here by 2030

I’m going to go on record and say “I don’t think so. In fact, I’m pretty sure of it.”

Of course it matters what you mean by “smart.” If it’s to do one narrow task over and over and do it well, then yeah. Look at x-rays and find cancer or something, OK, sure. Play mix and match with proteins and see which ones sequence more rapidly in a test tube, OK, I’ll buy that too.

But “smarter than humans” is a phrase thrown around carelessly, and it matters. Let’s start off with the brain - your brain, anybody’s brain. It’s estimated that it has the capacity to store 2.5 million gigabytes, or 2.5 petabytes, and I don’t doubt it. I can see a movie from 1967, or a Three Stooges reel I saw when I was 8, and say “Oh, I’ve seen this.” Now I might not remember every jot and tittle of the action or dialog, but somewhere in the deep recesses of my brain is the ghost of that video, enough for me to recall it 60, 70 years later. That’s a crap-ton of data being carried around in my head.
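For what it’s worth, a quick unit check on that figure (my own arithmetic, not anything from the article):

```python
# Unit conversion for the commonly cited brain-capacity estimate.
gigabytes = 2.5e6            # 2.5 million gigabytes
petabytes = gigabytes / 1e6  # 1 petabyte = 1,000,000 gigabytes (decimal units)
terabytes = gigabytes / 1e3  # 1 terabyte = 1,000 gigabytes
print(petabytes, "PB =", terabytes, "TB")  # 2.5 PB = 2500.0 TB
```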

Can a computer do that? Sure, but as we speak it takes AI a warehouse full of chips and racks and cooling equipment and real estate and power, lots and lots of power, to do so. And I am mindful of the early Univac machines and how that kind of computing power has shrunk to the size of a desktop and now to a phone slab, so I don’t discount that the AI data centers might shrink the same way someday. But not 6 years from now.

And for the record, most of the amazing AI stunts or tasks are being defined narrowly: AI that does medicine, AI that will tutor, AI that will answer phone calls, that sort of thing. The human brain, by contrast, is a general-purpose instrument (excluding those savant or autistic people who can count toothpicks in a box at a glance or do similar feats like hyperthymesia), and there are billions of us walking around the planet compared with what, a few dozen or a few hundred data centers? (That 99% of all those human brains are underutilized is a different issue.)

And while I could write a magazine-length piece about this, I’ll give just one more example: dreaming. Computers, even the best AI ones, don’t do that yet. They may hallucinate, but that is a different thing. They can synthesize prior data, they can turn yesterday’s jazz into a different piece, but can they enjoy it as well? Can they do anything but regurgitate, except in different order? More to the point, could they ever invent jazz, as humans have done? Or country music? Or brutalist architecture, pointillist painting, or dream of turning horse-and-buggies into self-driving automobiles?

I don’t think so. Certainly not in 6 years. Maybe not ever, although that’s an absolute I wouldn’t stake my reputation on. By 2030? Don’t make me laugh.

7 Likes

Technologists: Smarter-Than-Humans A.I. Will Likely Be Here by 2030

Like other such articles, this one is asking, or assuming, the wrong question(s).
We already have AI that is smarter than (some) humans.
The real question is how many humans is the AI (pick a subject or AI tool) smarter than?

I don’t think we’ll ever get to the point where one AI tool is smarter than every human.
And we don’t really need it to be.

Mike

Remember also that these are folks talking their book, so to speak. They’re all active players or investors in AI and adjacent fields, so they’re going to err on the side of portraying their area of interest as groundbreaking and on the cusp of being revolutionary. Very reminiscent of the blockchain folks a few years ago, who were convinced the technology was going to revolutionize everything, rather than just be used for moving cryptocurrencies around.

The $64 trillion question is whether generalized AI can be used more efficiently than humans in more than a narrow area of endeavor. Right now, generalized AI is very expensive to create and operate (hence the enormous sums the companies are spending to develop and run it), but the use cases it replaces are relatively low value.

The hope is that this will change. But since these companies have all but used up the available training data (basically, the entire corpus of human written and visual content on the internet), we might be seeing the end (rather than the beginning) of sharp improvements in LLM performance. Because once you’ve used up the huge amount of free data on the internet, you have to start buying or generating more data if you want to keep going.

2 Likes

Why not? What are the limits to growth (to coin a phrase)?

DB2

Well, “ever” is a long time. But then there is no real definition of “smarter” either.
One good reason might be economic efficiency.
Another might be that we’d have no way to know when we had created an AI that discovers something new, the way Einstein did.

It is also hard to see how an AI would, for example, cure cancer without lots of human help…wouldn’t the AI get bored along the way since it isn’t going to get cancer itself?

Mike

When I retired and returned my work laptop to my former employer, I bought my own laptop (a MacBook Pro) to replace it and wanted the usual Microsoft tools: Excel (I use it all the time), Word (I use it periodically), PowerPoint (never used after retirement), Outlook (never used after retirement), Teams (LOL, will never use again). So I bought a license for Office 2021 somewhere. I can’t remember the exact price, but it was very reasonable, surely well under 100 bucks, because I wouldn’t pay more than that.

EDIT: Just found the email confirmation of that order, it was $36.

I will double down. The NYT is full of it.

Data. Electricity. Computing power. The current approach to AI requires all of those things as inputs. If you had infinite supplies of those resources, then there might be no limit to what you could get out of them using that current approach. But all those things cost money, so it’s entirely possible that there’s just no way to economically justify building a big enough LLM, with a big enough training data set, that it can reach generalized human intelligence.

That might not be the case, of course. People are smart, and computer scientists might figure out ways to get more and more AI “smarts” out of the process without having to scale up the data and chips and electric power too much. But if they don’t, the current framework might approach some asymptotic upper bound of “smarts” - the point where you have to shovel in so many more resources for so small an increment of improvement that it no longer makes sense to continue.
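To put the diminishing-returns worry in concrete terms, here’s a minimal sketch (my own toy numbers, not a measured scaling law) of a power-law curve where each 10x increase in compute buys a smaller absolute improvement than the last:

```python
# Toy power-law scaling curve: error falls as compute^(-alpha).
# The exponent and constants are made up for illustration only.

def error_rate(compute: float, alpha: float = 0.05) -> float:
    """Modeled error rate as a small negative power of compute spent."""
    return compute ** -alpha

previous = error_rate(1.0)
for exponent in range(1, 8):              # 10x more compute at each step
    current = error_rate(10.0 ** exponent)
    print(f"compute = 1e{exponent}: error = {current:.3f}, "
          f"gain over previous step = {previous - current:.3f}")
    previous = current
```

Under a curve like that, each additional order of magnitude of spending buys less improvement than the one before it, which is the practical meaning of an asymptotic upper bound.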

2 Likes

The current architecture would never support it. The AI infrastructure is different than the PC infrastructure but still does not support intelligence.

Language is a tool, but it is not a problem-solving tool. The LLM is going down the wrong road.

Therein lies the problem. AI will use its own output as new input, so we end up with the snake eating its own tail for eternity. Essentially, a puppy that caught its own tail and will never let it go…
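A toy sketch of that feedback loop (my own illustration, with made-up numbers): fit a trivial model to some data, generate new “data” from the fitted model, refit on that, and repeat. The spread of the data tends to collapse over the generations, which is the snake-eating-its-own-tail problem in miniature:

```python
# Toy "model collapse" loop: a model repeatedly retrained on its own output.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=25)      # small sample of "real" data

for generation in range(41):
    mu, sigma = data.mean(), data.std()              # fit a trivial Gaussian "model"
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread (std) = {sigma:.3f}")
    data = rng.normal(loc=mu, scale=sigma, size=25)  # next round sees only model output
```

Whether real LLM training pipelines degrade this badly is an open question, but it shows why training on your own regurgitated output is a worry.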

1 Like