Artifarcical Intelligence

Musk (Elon, not the rat) has signed a letter along with 1,000 other lesser luminaries asking Microsoft, Google, et al. to voluntarily pause development of AI for six months until we sort out some of its intricacies. Sure, like that’s going to happen; I can just hear the guy at Google saying “Yes, let’s stop for six months while Microsoft continues….”

Also, I would like to sign a petition asking Musk to stop using me as an involuntary participant in his “self-driving development on public roads,” but I digress.

I have tried ChatGPT a few times in the last few days, and I must say I have seen the light. It’s like the difference between AOL dialup and a cable modem, except the cable modem is still spitting out random stuff once in a while.

But for looking things up, it’s a light-year of difference. Instead of a bunch of random Google links (has Google gotten worse these last couple of years? I used to be able to find what I wanted), I get a paragraph or two that actually homes in on my query and attempts to put some context around it. It’s also wrong sometimes, as you have doubtless read.

For instance, I asked, “Give me a list of father/son business partners where the son eventually succeeded better than the father.” ChatGPT gave me a list of five: three were correct, one was close, and one was flat wrong. (It gave me Steve Jobs and his father because, apparently, Jobs was more successful than his father. Except the father wasn’t in business with, well, you get it.) The “close” one was Ford, with Henry and then Henry Ford II, who wasn’t the son (that would have been Edsel), but who was related.

Anyway, the screen wasn’t a jumbled mess of crappy links; it was a pleasure to read, and it didn’t require clicking through to seven other sites, four of which would end up being irrelevant anyway. I see a big threat to Google unless they can tame it and monetize it. I haven’t thought that through, but I am in awe of the leap forward.

Sorry Elon, nobody’s holding back, I bet.


So that Musk can catch up? ummm…no

Steve…long MSFT


This entire thing is going to be one of the great tech fails of all time.

People are going to suddenly pull the plug on AI. As a human being, it is too irritating to be handed all this information when much of it is wrong or overly detailed.


Last year one of my sisters got Alexa. She loved the responses. This year the AI is more detailed, and my sister is screaming, “Shut up, Alexa!” Alexa’s days are numbered. Speak to anyone who has any of these devices; suddenly the entire exercise…all of the exercises…is ultra unpopular right now.

Two months ago this was the next holy grail of tech. Try getting a radio announcer to say that now. A fad would last longer. This has become more like a brutal rejection. Most of you do not expect that, but it is what it is. If you want to deal with AI, God bless you. The rest of us do not want to hear about it. Note I am not talking about posts here discussing it. I am saying I do not want to have any AI device in my living room. If I am visiting a friend, I do not want them interrupted by their device. If I am shopping for goods online, I do not want garbage to wade through…etc. And frankly, you can tell the mechanical nature of the responses from a mile away…the radio ads that won’t stop, with ill-timed voices.

What do you mean “catch up”? Reportedly Musk invested in one of today’s AI leaders at its founding!


If you really think that is where this AI revolution has its growth then you really don’t understand what is going on here in this space.

Back to the OP. No way this “pause” is going to happen. Nobody is going to pause, and if they do it will be only for public display. Too much money is at stake, after all. Besides, who’s going to pay all those employees during this pause? What will they do?

You can’t stop the future.


We go through this with every new technology. Contrary to popular opinion, the sky is not falling.

New technologies usually get implemented gradually, to one fraction of their customers at a time, while the companies monitor performance and work out any glitches. Recall facial recognition? Recall self-driving autos?

Why should this be any different? It’s something new, making a big splash. People will be tinkering with it, trying to get it right, for years or even decades.

Same old, same old.


I believe there is a huge difference between full self-driving autos, which have to be correct 99.99999+ percent of the time in real time, and using something like ChatGPT to get 75%+ correct. My wife and I use ChatGPT 3.5 and find it extremely useful and time-saving. The productivity increases are real if you spend time learning how to use it. Sign up for the free 3.5 version and run these questions:
explain concept of edges and vertices as you would talk to a six year old
write a one hour lesson plan for first graders that conforms to Texas TEKS teaching the geometric concepts of edges and vertices
Can you create a rubric
How about a 15 question homework assignment
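For readers who would rather script these prompts than paste them into the web UI, here is a minimal sketch of how the same four questions could be packaged for OpenAI’s chat completions endpoint. The model name and the commented-out client call are assumptions (check the current API docs); only the message construction runs here.

```python
# Minimal sketch: packaging the lesson-plan prompts above as chat messages.
# Assumptions: model name "gpt-3.5-turbo" and the OpenAI client call shown
# in the comments below; neither is executed here.

prompts = [
    "explain concept of edges and vertices as you would talk to a six year old",
    "write a one hour lesson plan for first graders that conforms to Texas "
    "TEKS teaching the geometric concepts of edges and vertices",
    "Can you create a rubric",
    "How about a 15 question homework assignment",
]

# Each follow-up prompt is appended to the running conversation so the model
# keeps the earlier context (e.g., the rubric refers back to the lesson plan).
messages = [{"role": "system", "content": "You are a helpful teaching assistant."}]
for p in prompts:
    messages.append({"role": "user", "content": p})
    # Hypothetical call -- requires an API key and the openai package:
    # reply = OpenAI().chat.completions.create(model="gpt-3.5-turbo",
    #                                          messages=messages)
    # messages.append({"role": "assistant",
    #                  "content": reply.choices[0].message.content})

print(len(messages))  # one system message plus four user prompts
```

The point of keeping one growing `messages` list, rather than sending each question separately, is that the later prompts (“Can you create a rubric”) only make sense with the earlier answers in context.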


They said that about the Hindenburg and the Edsel.

:rofl: :rofl: :rofl:

Love you, but when you find yourself unplugging the machine, get back to us. It won’t be all that long.

In an NVDA GPU, AI is very important. It is part of the process, not exactly the whole of the graphics process. As a fully fledged software package in other applications, it is hellish at times. It does not work or play well with humans. The exceptions to that will definitely survive. That will be a smaller number of apps. Most are going to fail because of their own designs.



This is a kind of funny comment in a NYT column series on AI. This particular snippet was written by Kevin Roose. Now remember, AI feeds itself all the materials on the topic that it can. In other words, the AI in the case below has fed itself all the answers and still does not come out 100% correct.

snippet (after a crushing couple of paragraphs on how bad AI is…)

But I’ve also seen these A.I. programs do amazing things — feats of creativity, flexibility and efficiency that took my breath away. And I worry that in an attempt to tamp down A.I. hype, skeptics are missing what’s so groundbreaking — and potentially disruptive — about this technology.

Large language models write poems and screenplays. One of the latest, GPT-4, scored in the 90th percentile on the bar exam and got top scores on a number of Advanced Placement tests. And the potential productivity gains for workers are enormous. (In one study, programmers who used GitHub Copilot — an L.L.M. for coders — finished a task 56 percent faster than programmers who didn’t.)


What is the top 1% bar exam score?


Depending on how many people have taken the UBE, a score of 280 is approximately the 73rd percentile. A 300 is in about the 90th percentile, and 330 is in the top 1% of all scores.

The question: is that result impressive compared to human students? Why? I honestly need to hear why.

Leap, just a few months ago, if you told me we’d have a ChatBot with the capability of GPT3 I would have told you “no way, not yet, this is still a ways off”. Then it happened. And GPT4 is even better, and it did not take that long.

Is it perfect? No. Does it still need work? Of course. Will it improve? Absolutely.

Keep in mind: perfection is not needed. Humans, after all, are far from perfect, and we seem to do pretty well.


Human chat is very informal, very inaccurate, and often very sloppy: skipping words, making grunts at times, putting words in reverse order, etc. And we all get it. Good luck training a machine in human speech patterns.

Ever notice how hard it is to read Huck Finn for the first half a page, and then suddenly every word is the norm again? That is because, unless you or I make a better effort, that is at bottom how we speak.

Hold my beer…


That’s pretty incredible. It even copies human “uh” patterns in speech!


You notice the machine did the voice on the left that we honor. Prejudicial evidence. I was talking about creating the voice on the right.

It was very clean. But that is not how we speak. That is how we write.

The title of this video is “Google Assistant calling a restaurant for a reservation”, so the left side is the AI in this case.

There are also AI implementations for taking a reservation (or for booking flights, or for doing banking, etc).


Making myself clearer

:rofl: :rofl: :rofl:

The King’s English is not a speaking voice. I get that the left-hand voice was the AI. I am saying the human on the right is the goal if you want AI to appear human.

I repeat myself for Leap.

The reservation recording makes it quite clear:

This was NOT Alexa or Siri talking. This is so much closer to normal human speak. Do you really think we aren’t getting there quickly?

The Turing Test will be passed soon. It’s possible it already was. A ChatBot convinced a human it was blind to get around a captcha:


To your question: no, and we are nowhere near getting there.

Getting a machine to speak the King’s English is only scraping the surface. The surface is more presentable than the guts of how humans speak.

The odd thing is that in person we have limited patience for each other. Do not expect patience for a machine, no matter how good or bad it is. People do not want to listen all day long to instructions or the more detailed thoughts of another. We like distance. Pulling the plug is a very real option.

The NYT has another column on AI future successes today. I do not have time for it right now.

In the last few years, I have discovered my interest in the science of endocrinology. I’ve been using ChatGPT on occasion to look for the answers to more exotic questions. Unfortunately, I have found that ChatGPT answers basic questions (the ones that I don’t have to ask) with relatively good accuracy (maybe 80%), but is completely random on the more interesting, exotic cases. When I asked a question about DHEAS and ACTH, it told me with supreme confidence that they were independent. They are not. But it told me with such confidence that I was sufficiently disconcerted that I looked it up.

So far, for me, in the medical field, ChatGPT is at best of very marginal use.


Not to worry, the VP is on the case.