Jeez, I hope that’s wrong. One of the problems is that they’ve been training AI on everything they can get their hands on, and there’s a lot of crap out there. What they need is vetted content, but they don’t want to actually pay royalties to the Times and Post, and YouTube and CBS, and Wikipedia and Britannica, so they’ve been using, and I am not making this up, Facebook, X, and Reddit threads - which is how we get to “glue the cheese on your pizza so it doesn’t slip off.”
The phrase “garbage in - garbage out” was never a better fit than for this nonsense. (Even funnier, Google is employing humans to manually remove stupid answers. Now there’s a solution that scales.)
Tesla has a ton of data, but it’s all related to “driving a car”. That may be valuable, especially to Tesla, but it’s not going to answer a lot of questions about “how to make a pizza”. Sometimes less is more. And if “less” isn’t enough, then AI is never going much of anywhere.*
*Yes, there will be some applications where it works, like maybe separating plastic bottles from other plastic containers, or perhaps mixing paints to match your grandmother’s shawl, but in the realm of “information/solution” it’s gonna be a long, rough road - or a more expensive one, which nobody (yet) wants to contemplate.
Remember that the most important AI is not the currently media-dominated AI that creates crap by imitating patterns in existing stuff; rather, it is AI “thinking” about every possible application of every possible protein, solving “impossibly” difficult problems in quantum theory, and the like.
I believe politically based human bureaucracies will remain a bridge too far for quite some time…
My youngest motorcycling friends (geezers like me do not ride in heavy urban traffic if they are sane) tell me that self-driving vehicles are much, much less likely to be dangerous to them, because they expect motorcycles to exist, and so watch out for us.
The pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant
[Nvidia] reported eye-popping revenue last week. [Elon Musk] just said human-level artificial intelligence is coming next year. Big tech can’t seem to buy enough AI-powering chips. It sure seems like the AI hype train is just leaving the station, and we should all hop aboard.
But significant disappointment may be on the horizon, both in terms of what AI can do, and the returns it will generate for investors.
The rate of improvement for AIs is slowing, and there appear to be fewer applications than originally imagined for even the most capable of them. It is wildly expensive to build and run AI. New, competing AI models are popping up constantly, but it takes a long time for them to have a meaningful impact on how most people actually work.
These factors raise questions about whether AI could become commoditized, about its potential to produce revenue and especially profits, and whether a new economy is actually being born. They also suggest that spending on AI is probably getting ahead of itself in a way we last saw during the fiber-optic boom of the late 1990s—a boom that led to some of the biggest crashes of the first dot-com bubble.
Much less is not overstating it. It’s understating it. That’s because FSD drives differently than a human drives. On the surface, during normal operation, it appears to drive like a human (that’s the goal, after all), but if you observe certain scenarios closely while it is driving, it doesn’t behave like most (many?) humans do. The main behavior it doesn’t have is this: it doesn’t assume that the other vehicle will move out of the way; it assumes the opposite. So while driving on the road, if it sees a motorcycle (or a car), and it senses that one or both vehicles are changing speed such that they might occupy the same space in X seconds, the FSD will adjust to avoid that UNDER ALL CONDITIONS. Most humans would initially assume that the other vehicle will change its movement pattern to avoid the issue, but that doesn’t always happen.

I also noticed that FSD is far more courteous than I am. Quite often I observe it making space for the guy pulling up on my left to slide into my lane of traffic, which has been patiently waiting to exit. It even makes space for the a$$hole that attempts to slide in RIGHT AT the exit!
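For the curious, here is a toy sketch of the kind of “might we occupy the same space in X seconds?” check described above. To be clear, this is not Tesla’s actual planner; the constant-velocity extrapolation, the 3-second horizon, and the 2-meter safety radius are all illustrative assumptions.

```python
# Toy sketch of the "might we occupy the same space in X seconds?" check
# described above. NOT Tesla's planner: the constant-velocity prediction,
# 3 s horizon, and 2 m safety radius are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # position, meters
    y: float
    vx: float  # velocity, meters/second
    vy: float

def min_separation(ego: Track, other: Track, horizon_s: float = 3.0,
                   dt: float = 0.1) -> float:
    """Smallest predicted distance between the two tracks, assuming
    both hold their current velocity (constant-velocity extrapolation)."""
    best = float("inf")
    t = 0.0
    while t <= horizon_s:
        dx = (ego.x + ego.vx * t) - (other.x + other.vx * t)
        dy = (ego.y + ego.vy * t) - (other.y + other.vy * t)
        best = min(best, (dx * dx + dy * dy) ** 0.5)
        t += dt
    return best

def plan(ego: Track, other: Track) -> str:
    # Key point from the comment: never assume the other vehicle will
    # yield. If predicted paths get too close, *ego* is the one to adjust.
    if min_separation(ego, other) < 2.0:  # 2 m safety radius (assumed)
        return "slow down / change lane"
    return "maintain course"

# Ego at highway speed; a slower vehicle ahead drifts toward ego's lane:
print(plan(Track(0, 0, 20, 0), Track(40, 2, 6, -0.7)))  # -> slow down / change lane
```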
Yes! But even more so, AI self-driving necessarily SEES things that are not cars!!! The two drivers who almost killed me literally did not see me because, like zillions of human drivers, they habitually only look for and so only SEE automobiles.
Riding a bicycle in Amsterdam was revelatory for me because bikes rule there and keep autos in a terrorized state of caution.
Human level? There’s a wide range of possibilities that encompass “human level.” I could see MTG-level AI coming very soon, but if we’re talking about AI that operates at a level comparable to a normal human, that’s a ways out.
I’ve been building a foundation for AI for 18 months now. We are just getting to a capability where we can apply broad-based (but highly specific!) LLM-style AI to it.
In short, our “AI” is the move from traditional data-query building to low-code to no-code.
Just ask the question and get back a highly customized report of our own ideation (see the sketch after the list below).
Our data set is highly specific.
It’s highly correlated.
It’s precise, continuous, and accurate.
None of it will transfer broadly to another topic.
None of the tuning (parameters, training results) will apply outside of corollary-style tangents.
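As a toy illustration of that “just ask the question” flow, here is a minimal sketch in Python. Everything in it is a stand-in: sqlite3 plays the warehouse (ours is not sqlite), the schema and table names are hypothetical, and generate_sql is a stub where a real LLM call would translate the question against our schema.

```python
# Minimal sketch of "just ask the question, get a report." All names here
# are hypothetical stand-ins, not our actual stack.
import sqlite3

SCHEMA = "CREATE TABLE downtime (line TEXT, cause TEXT, minutes REAL)"

def generate_sql(question: str) -> str:
    """Placeholder for the LLM step: prompt = schema + question -> SQL.
    A real system would validate the generated SQL against the schema
    before running it; pristine, ordered data is what makes that
    validation tractable."""
    # Canned translation for the demo question below:
    return ("SELECT cause, SUM(minutes) AS total "
            "FROM downtime GROUP BY cause ORDER BY total DESC")

def ask(question: str, conn: sqlite3.Connection) -> list[tuple]:
    sql = generate_sql(question)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany("INSERT INTO downtime VALUES (?, ?, ?)",
                 [("A", "jam", 42.0), ("A", "changeover", 15.5),
                  ("B", "jam", 8.0)])
print(ask("What are our biggest downtime causes?", conn))
# -> [('jam', 50.0), ('changeover', 15.5)]
```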
Snowflake mentioned that our effort is the furthest they’ve seen in any heavy industry.
While it will dramatically improve our efficiency, effectiveness, and speed to action for white-collar decision makers, it will need orders of magnitude more development to grow into other use cases.
Its foundation? Pristine, ordered data.
The magic will really start to happen when we get enough parameterization to automatically bridge the gap between something like what I’m building and ANY other use case. (This is general AI.)
Specialist AI is already here, and specialty AI/ML is where the most improvement will be made in the next 2-4 years.
For these reasons, humans will need to “play to the zones”* and will be essential to the work stack for a very long time.
(I didn’t say any human!) The skill set most needed will be akin to the spirit of a Renaissance (polymath) individual who has studied and understands “the gaps.”
Plenty of jobs will be available for the common operator who simply prompts the machine with basic commands, however.
JMHO
* Imagine a space where work spans many activities. Specialist AI will be able to handle problems and workflows within its bounds, but it will not provide a reliable process in the gap between its area of core competency and that of another AI. Since these gaps vary in complexity, context, frequency, impact, observability, and any number of other aspects, this will be one of the last areas to be innovated upon with AI-style solutions. This space will be the last bastion of most human operators for processes at scale.
Notable examples will be things like:
Current plumbing updates for fluid systems where standards have changed or where things were built to no apparent standard (think a 1920s home with 1980s CPVC-to-copper updates, etc., and mansion-style extensions with uniquely characteristic high water pressure)
Improving upon an isolated/archaic design while attempting to bring into service an item whose value is aesthetic in nature. (Highly subjective results to follow!)
Applying DNA-specific treatments for a member of a population with a disease rarer than 1 in 100,000, while taking into consideration 40 years of individual life. (Highly individualized results required, and little ability to train on a dataset.)
The 3 body problem (just kidding. I wanted to see if you’re still reading along! :D)