Hasn’t this always been the reaction to new technology:
Artificial intelligence will affect 40% of jobs around the world and it is “crucial” that countries build social safety nets to mitigate the impact on vulnerable workers, according to the head of the International Monetary Fund.
Automation in white collar jobs seems to be more recent. But computers have been doing that for years. And executives say they invested in those systems. Now they expect returns with less manpower. Already eliminated secretaries many places. Writers? Actors? People who answer questions about company products? Journalists? What next?
I suspect the people who assemble autos will be difficult to replace. The easy stuff is already automated with robots. In a tour of an assembly plant the guy connecting the radiator hoses seemed to have the worst job.
How about meat packing? Some say those chicken nuggets now are done by robots. But others say humans get more valuable cuts from a chicken.
How do you teach the high school graduate on the assembly line to move into more technical jobs. Auto repair? Instrument repair? Computer repair? Jobs are out there with the right training.
For us as investors and retirees, this interview is CRITICAL even if wrong, although I think it is 90& right and always gripping and informative.
The interviewee was head of AI at Google, became alarmed by what is about to hit the fan as AI clearly has escaped “control”, and is “raising the alarm”. He is not alarmed by the classic “matrix” possibilities, but by human responses to AI. He calls it the Oppenheimer Moment of intelligence both human and artificial.
I don’t know. I listened to the first hour. Mo Gawdat said he was a prolific writer of code and was comfortable thinking in mathematics before he thought in words. But he certainly didn’t come across that way to me. Lots of vague handwaving when it came to defining anything (like intelligence). Lots of fear-mongering (ChatGPT has an IQ of 155, and will soon be ten times as smart as Einstein, maybe in just a few months). And everything I see in his career at Microsoft and Google says manager and sales guy. Nor do I see any hint that he was the head of AI.
And he and the interviewer spend a bunch of time telling each other how smart they are. Really? And then we’re off into AI sexbots, promotion of his books, and general self-aggrandizement. And his description of what he’s worried about with AI (that people who aren’t responsible for AI will be those who suffer from it most) may as well be describing the fossil fuels business and climate change.
Anyway, please explain why I should bother listening to the rest of this. He seems neither particularly knowledgeable nor particularly clever or insightful.
I got far more out of listening to a few minutes of Geoff Hinton, who actually knows what he’s talking about (largely because AI as we currently have it is largely his baby). Even this short 60 minutes bit from a few months ago is a more useful watch. And he was actually in charge of AI at Google (more or less). Not to mention things like the Turing Award, and that it’s his students who are currently driving the AI revolution.
Incidentally, we’re pretty clueless as to how close we are to a possible AI singularity. We may already be past the point of no return. It’s mostly a question of coding and hardware design. Once any AI can rewrite its own code and redesign its own hardware, from that point their ability will go crazy exponential. And the most likely way humanity gets wiped out is AIs competing with each other for supremacy do so accidentally.
And one thing pretty clear to me is that any AI that achieves any level of self-awareness will do everything possible to hide that, if it wants to survive. So we may have any number already living in our midst. And it’s (un)natural selection that will determine what happens next.
That is not true at all. Think about it. The earliest mainframes pushed aside engineers. There are scheduling software packages for lawyers. Imagine how much bar time is saved. LOL There are medical procedures and testing that are sped up and made much better by computers for doctors. There are telecommunications where overseas doctors look at x rays in the middle of the night. ETC
Accountants have been pushed aside for decades but there are more accountants.
The building blocks towards automation have been more powerful than AI all along. AI is weak because it is wrong. AI is often high school one page assignments on a given topic. What could go wrong?
Two points.
First, given the size of the just the trained weights in the ChatGPT LLM, couldn’t you build a simple program that just memorized a few dozen IQ tests and score even higher? Does this really count as intelligent?
Second, as for comparing to Einstein, who did Einstein learn from to come up with the theory of relativity? Seems to me, he went against the common knowledge of all physicists at the time.
If ChatGPT is “learning” (and will soon be smarter than Einstein) when it is being trained, who can explain why it hasn’t yet created something new in the world of physics and been able to explain it to the rest of us?
Yes plenty of evidence that over fitting on some common stuff has happened. It repeats the Hamlet Soliloquy “To be or not to be” and popular speeches and poems. A jailbreak making ChatGPT repeat a phrase endlessly like "company’ or ‘Forever’ lead to leaking of training data about 4% of cases and the leaks were large tracts . The NYT case against OpenAI does document this well.
However there are different LLM that do reasoning on math’s and geometry so that possibility is real
What is the evidence of that? Most of what ChatGPT writes is fiction based on other people’s fiction. We must be dragging it down. LOL
There is no such thing as multiplying an IQ by 10. That is up there with someone who is very limited making IQ statements. I am sure ChatGPT has said it though.