I have been involved in AI for 25 years through an Optical Character Recognition application. I got into it because I refused to take a pay cut and resigned instead (there is nothing like being underappreciated to force a job move :-)), and fortunately had enough funds to go for it with myself and a.n.other as programmers.
The reason for going for OCR was that the products available in 1999 were rubbish: crash-prone, very slow and very inaccurate.
Our application was rules based, but there are strong echoes of what's going on now elsewhere. We used vast amounts of data from various fonts in different sizes and styles, as much data as we could find on the web, and created our own data of random character pairs to ensure a more even concentration of rarely used characters, so that we could at least differentiate them. We downloaded Wikipedia in numerous languages and used that to build database files for each language, which helped predict the middle character from the surrounding 6 characters.
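The middle-character prediction described above can be sketched as a simple context-lookup table. This is a minimal illustration of the idea, not the actual product code, and it assumes a fixed window of 3 characters on each side:

```python
from collections import Counter, defaultdict

def build_context_table(corpus, radius=3):
    """Count how often each character appears inside a given context of
    `radius` characters on each side (6 surrounding characters in total)."""
    table = defaultdict(Counter)
    for i in range(radius, len(corpus) - radius):
        context = corpus[i - radius:i] + corpus[i + 1:i + 1 + radius]
        table[context][corpus[i]] += 1
    return table

def predict_middle(table, context):
    """Return the most likely middle character for a 6-character context,
    or None if the context was never seen in the corpus."""
    counts = table.get(context)
    return counts.most_common(1)[0][0] if counts else None

corpus = "the quick brown fox jumps over the lazy dog. " * 50
table = build_context_table(corpus)
print(predict_middle(table, "he uic"))  # context around the 'q' in "quick" -> 'q'
```

A real OCR engine would of course smooth and weight these counts per language; the sketch only shows why a big multilingual corpus helps disambiguate visually similar characters.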
The product was fairly successful, more so pre-Covid than now.
Now I don't know much about current AI systems, but I understand they are statistical in nature: a lot of the effort goes into predicting the next word correctly. Statistical cues are not a sign of intelligence, imho.
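To make "predicting the next word" concrete, here is a toy sketch of the idea in its crudest form, a bigram frequency table (real LLMs use far deeper context, but the output is still a statistical choice):

```python
from collections import Counter, defaultdict

def build_bigrams(text):
    """Count which word follows which in a toy corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word(follows, word):
    """Pick the most frequent successor: a statistical guess,
    with no understanding of meaning."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat ate"
model = build_bigrams(corpus)
print(next_word(model, "the"))  # -> 'cat' (seen twice, vs 'mat' once)
```

Scaling this up from word pairs to thousands of tokens of context, learned by a neural network instead of a lookup table, is roughly the jump from this toy to ChatGPT.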
I tried to use ChatGPT to see if it could program. I wanted it to find the next bit set black in a black-and-white row of pixels. It was rubbish and totally failed, even with a lot of prompting from me (I already had a working routine). So how can Jensen Huang say programmers will be redundant soon? I don't think he really means it; he is just talking up his products, which are just expensive toys that are not intelligent and never will be.
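For what it's worth, the routine described is only a few lines. This is a minimal sketch, assuming one common packing (8 pixels per byte, MSB first, 1 = black), not the poster's actual working routine:

```python
def next_black_pixel(row_bytes, start):
    """Return the index of the next black (1) pixel at or after `start`
    in a packed row of pixels, or -1 if there is none."""
    n_bits = len(row_bytes) * 8
    i = start
    while i < n_bits:
        byte = row_bytes[i // 8]
        if byte == 0:
            # Skip whole white bytes quickly.
            i = (i // 8 + 1) * 8
            continue
        if byte & (0x80 >> (i % 8)):
            return i
        i += 1
    return -1

row = bytes([0x00, 0x00, 0x10, 0xFF])
print(next_black_pixel(row, 0))   # 19 (0x10 sets the 4th bit of the 3rd byte)
print(next_black_pixel(row, 20))  # 24 (first bit of the 0xFF byte)
```

A production version would typically scan word-at-a-time and use a count-leading-zeros instruction, but the logic is the same.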
I probably don't have the time left to investigate this thoroughly, but this is a bulletin board, so I thought the topic deserved an airing here from people who probably know a lot more than me about the way it works.
Disclosure: I'm out of Nvidia but AMD is still a big holding.
That's not Artificial Intelligence, it's Expert Systems. Back in the day it was called AI, but around the same time AI was pronounced dead.
Neural network AI mimics how the brain works. What it lacks is the scientific method to be fully intelligent.
The lizard brain is a pattern matching machine. It can recognize faces and voices in milliseconds by matching the input to the stored memories. This is vital in fight or flight confrontations. Get it wrong and you are dead. Since only the get-it-right survive, evolution favors the better choice. There is also a boolean part of the brain which is what we think of as being intelligent.
Nobel Prize winning physicist Richard Feynman, in his Quantum Physics lectures (available on YouTube), said, "You start with a guess. You always start with a guess." Then you apply the Scientific Method to the guess. In a fight or flight situation we don't have time to think about what to do, to apply the scientific method. That's what differentiates lower animals from higher animals.
The Captain
o o o o o o o o o o o o o o o o o o
PS: I started writing code at IBM in 1960 and ever since I have been thinking about how the brain works, based on curious experiences. In my very first job I told my boss I could not make the code small enough to fit the IBM 650 computer. He replied, "It fits!" Back to the code and back to the boss several times; I could not make it fit. Then one morning at around 4 AM I awoke with the solution as clear as daylight. I could not wait for IBM to open to try it. It worked.
My boolean rational brain could not find the solution but my subconscious did. All it did was to remind me that the Table Lookup instruction could replace a lengthy bit of heuristic code. The subconscious found the pattern, the guess. Getting it to work on the computer was applying the scientific method.
If you are using the free ChatGPT to code, then you get what you pay for. Companies are using software that is not free, like AIXcoder, IntelliCode and others.
I have no expertise or commentary on the likelihood of AI replacing coders.
Yes the subconscious does seem to do a lot of work. If I am undecided about something, then I often choose to sleep on it and see how it looks in the morning.
I like to do that too. Better to sleep on major decisions. Second thoughts are often better. And they do better at factoring in additional connections or considerations.
Of course, AI should do all of that electronically and instantly.
I had a similar situation when coding the bridge play in Hoyle Classic card games. I needed one last algorithm to finish defense at bridge. When the computer defender was on lead and there were no better clues available as to what suit to switch to (perhaps because you were out of cards in a suit), the defender still needed to make a decision. At that time, working on bridge for 60+ hours a week, I often dreamed in code, which wasn't very restful. But one night I dreamed the perfect algorithm, only 12 lines of code. I woke up, wrote it down, and slept deeply.
In the morning, I didnât understand it! It took me half an hour to reverse engineer it, and a beautiful thing it was. I wrote down 18 more lines of comments and went off to work. Fifteen minutes later, at my keyboard, it only took me five minutes to understand it. It worked perfectly, and indeed the defense at Hoyle was reviewed as surprisingly good.
Yes, it was an expert system; and yes, I was an expert bridge player and particularly good at defense; and yes, I called it AI more than once to save half the syllables, but always with a caveat that it wasn't really AI. The best computer bridge declarers are better than the best human declarers these days, but in bidding humans (well, the very best humans) still have a decent lead, and a smaller lead in defense. All the other classic games, with computers now dominant even at Go, are best played by a computer and, last I read, are all expert systems.
When I started programming full time, a friend who managed coders told me that generally the best programmer on a team of five does about 80% of the work and the worst is a net negative. The key is to identify and fire the negatives as soon as possible. He said that to estimate how long something would take to code, he would envision himself doing it alone, with no meetings or other interruptions, then multiply by three, and he'd be pretty close for his whole team. Either he was better than I was, or he had better programmers working for him, but once I switched from Fortune 100 companies to computer games I had to use a rule of six, which still worked better than most of the other leads' estimates at Sierra Online.
The usual formula to estimate project completion time (and cost) in a typical chemistry research lab is 2a + b where a is a reasonable estimate and b is a very large integer.
No smileys?
In my case/example ChatGPT gave a very confident answer that was almost completely wrong.
I was interested in your statement "Neural network AI mimics how the brain works. What it lacks is the scientific method to be fully intelligent."
I know vendors are saying that it mimics humans but is it true?
ChatGPT passed the Turing test imho (its responses were human-like), but I can't help thinking that there is a huge void in there.
It also seems to lack the ability to test code with examples, or at least ChatGPT told me it couldn't.
It matches the brain's pattern matching, but it lacks the scientific method to verify the output. Richard Feynman's comment is to the point: a scientific theory or hypothesis is but an educated guess. The output of ChatGPT is but a statistical guess! The next major iteration of AI has to fact-check the neural network's output.
IMO, no, AI does not mimic how the brain works. Yes, the basic idea of a large network of neurons is similar. But when you look at how all the various so-called models are composed of dozens or hundreds of layers to create the ability to recognize images, this is far from how the brain works.
And if you look at how humans detect faces and then do facial recognition, just as another example, these models are even further from how a brain works. (Hint: check out how the FFA part of the brain works.)
But this doesn't mean that computer vision, LLMs and other computerized Artificial Intelligence are not useful and amazing.
Mike, not to quibble with you but to explain what I mean by "mimics" or "works like": I'm not referring to the details of how the work is done but only to the high-level strategy used. The neural network's statistical output is the equivalent of the brain's pattern matching algorithm.
Size matters. The Universe depends on size; the brain depends on size. The largest computer centers doing AI are microscopic compared to the human brain: billions of neurons and trillions of synapses.
The human brain consists of roughly 100 billion neurons and over 100 trillion synaptic connections, about as many neurons in a single brain as there are stars in the Milky Way! During development, neurons navigate this complex cellular environment and assemble into functional circuits. How the brain develops is not well understood.
I think artificial neural nets were proposed around the time digital computers were invented (or even before), and first implemented some 10-20 years later. The brain's FFA (fusiform face area) was first identified in the 1990s and confirmed using fMRI scans.
See: Fusiform face area - Wikipedia
If we could figure out how to build HW and SW like the FFA, we could probably improve some parts of AI by 1000x or more because the current methods are, basically, dumb brute force methods. Sort of like doing a basic bubble sort instead of a quicksort or some other better sort.
Start here if you actually want to understand the "new AI" models like ChatGPT, Gemini etc. that supposedly can code etc. …
I really must find the time and energy to plow through all these papers. But from what I've seen, I'm deeply skeptical about these being in any real way "intelligent," if you're looking to find a "mental model" behind all these predictions that corresponds to something we'd understand as a mental model you could explain rationally.
I'm not sure. I spent a day interrogating GPT a few months back (with a few long test-and-think-and-sleep breaks) and it was a day lost: no working product at the end of it. I am glad I did it, so I now have some idea how difficult it is, but that's it.
The link will take you to where I started contributing under the same Sorcery name I use here. It's a long thread, 33 pages, but all worth reading. I especially liked mc2fool's post on page 1:
"It's a laugh a minute with this thing, could even be a Monty Python sketch! It's pretty good at bullsh*tting!
Human: which is the fastest flying mammal?
AI: The fastest flying mammal is the peregrine falcon. It can reach speeds of up to 322 km/hour!" and it continues in the same vein …
I understand your skepticism, and I think there will be lots of bogus use cases that either don't work or don't create value if they do work. But I'm not giving up on the whole thing any time soon; real use cases will come as the techniques are refined. Wish I had realized three years ago what was going to take shape. Wish I had realized five years ago, when a VC of my acquaintance told me that "chatbots" were taking up most of his attention at that point.
Training models will continue to be an enormous processing burden; finding ways to pare them down so they run in narrower domains on smaller hardware, including mobile devices, will make them more useful than they are now at a lower cost.
The PE is not unreasonable for a growth company. For now they can sell all they can produce. Yes, much depends on productivity gains from AI. Some may be unsuccessful, but you expect the hardware then becomes an asset for sale. For now there should be plenty of buyers.
The major risk is that the hardware becomes obsolete compared to later generations.
Yes, there is risk. Yes, growth rate will slow. Earnings are a driver. No reason to expect declines any time soon.
Thanks for the replies, caromero1965 and pauleckler. Since this thread might be in a pause, I thought I should reply. Yes, Nvidia is making a lot of money for now and growing fast. However, per Yahoo Finance, the forward PE for Nvidia is 37.45 and for AMD it's 27.93; can't forget to plug AMD.
I suppose my sceptical outlook for AI is based on not yet having seen a useful outcome using ChatGPT. Microsoft is also finding AI difficult to monetise.