We have lived through every smart, rearview-mirror driver declaring that $AMZN, which never made a GAAP profit, was worth $10 (split-adjusted, $0.50)… The same logic and argument is now being applied to AI companies. Many will fail, some will be bought out, and a few will survive and be wildly profitable.
Not sure that’s an apt comparison, as there was already a developed market for “rides for hire”. You could estimate what the potential was, show how you could make it even better by democratizing it (and avoiding the dreaded medallion costs), and show a business model that stood as a capital-free toll taker between drivers (and cars) and customers.
AI has a few use cases, and I don’t know anyone who thinks there won’t be some, perhaps a lot of, uses, but the economic future is simply unpredictable. It may be huger than huge. Or it may be confined to specialized business-to-business uses. Or have vast military applications. Or be a wunderkind for consumers.
That’s the thing, nobody knows. There’s certainly a lot of money pouring in, because those on the playing field feel they can’t be the one that’s left out.
(As for the Amazon comparison, well, there was the Sears Roebuck catalog, which was pretty profitable for a very long time.)
Ex post, I don’t want to rehash all the criticism and some of the outlier comments. Both businesses spent a decade-plus building scale before turning profits. Those worrying about where the profits are should understand: “this is the investment phase to build scale, capacity, and gain market share.”
The thing I don’t understand is why they are putting AI on every device. When I use Grok or ChatGPT, is the device doing the seeking or is it happening on the Grok server? I guess I don’t understand how it works. Why does Windows need Copilot? Do I need Alexa, Grok, Perplexity and Siri on my phone? It’s pretty confusing. I guess I could ask one of them (lol) and see what they say…doc
Companies are experimenting (throwing spaghetti against the wall), trying to find the killer app. High-end AI will be getting more expensive, but lower-cost models will work for many. Enjoy your free use today.
=== links ===
Big tech has spent $155bn on AI this year.
The LLM Cost Paradox: How “Cheaper” AI Models Are Breaking Budgets, 2025
“This leaves every AI company in an impossible position. They know usage-based pricing would save them, but they also know it would kill them. While you’re being responsible with $0.01 per 1,000 tokens, your VC-funded competitor offers unlimited access for $20/month. … Everyone subsidizes power users. Everyone posts hockey stick growth charts. Everyone eventually posts “important pricing updates.””
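To make the quoted trade-off concrete, here is a rough back-of-envelope sketch (an illustration, not from the article): the $0.01 per 1,000 tokens and $20/month figures come from the quote above, while the “power user” volume is a purely hypothetical assumption.

```python
# Back-of-envelope math for the quoted pricing comparison.
# The $0.01 per 1,000 tokens and $20/month figures come from the quote;
# the power-user volume below is a made-up illustrative assumption.

USAGE_PRICE_PER_1K_TOKENS = 0.01   # usage-based rate from the quote
FLAT_MONTHLY_PRICE = 20.00         # "unlimited" subscription from the quote

# Tokens a subscriber must consume before the flat plan is cheaper
# than simply paying per token.
breakeven_tokens = FLAT_MONTHLY_PRICE / USAGE_PRICE_PER_1K_TOKENS * 1_000
print(f"Break-even usage: {breakeven_tokens:,.0f} tokens/month")  # 2,000,000

# Hypothetical heavy user: 50M tokens/month (assumption, not from the article).
power_user_tokens = 50_000_000
cost_at_usage_rate = power_user_tokens / 1_000 * USAGE_PRICE_PER_1K_TOKENS
print(f"Cost at the usage rate: ${cost_at_usage_rate:,.2f}")       # $500.00
print(f"Subsidy on the $20 plan: ${cost_at_usage_rate - FLAT_MONTHLY_PRICE:,.2f}")
```

Under these assumed numbers, anyone consuming more than about two million tokens a month is being subsidized by the flat-rate plan, which is the dynamic the quoted piece is describing.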
The hidden costs of AI: How generative models are reshaping corporate budgets, 2023
“A new report from IBM’s Institute for Business Value (IBV) paints a stark picture of executives’ economic challenges as they navigate the AI revolution. The report, titled “The CEO’s guide to generative AI: Cost of compute,” reveals that the average cost of computing is expected to climb 89% between 2023 and 2025. A staggering 70% of executives IBM surveyed cite generative AI as a critical driver of this increase. And the impact is already being felt across industries, with every executive reporting the cancellation or postponement of at least one generative AI initiative due to cost concerns.”
“At the moment, a lot of organizations are experimenting, so these costs are not necessarily kicking in as much as they will once they start scaling AI,” says Jacob Dencik, Research Director at IBV. “The cost of computing, often reflected in cloud costs, will be a key issue to consider, as it is potentially a barrier for them to scale AI successfully.”
I totally understand that for many, the hundreds of billions being invested is mind-boggling. They wonder where the profit is to justify these investments. The point I was making is that profits will come as you build scale.
The excerpt below is from a Citi conference call where the CEO, Jane Fraser, talks about their AI efforts. Similarly, Jamie Dimon said they are investing $2B and that their breakeven on these investments is one year… that is, they recover their cost within a year…
There are many industries that are finding ways to use AI to optimize their work, increase efficiency, etc.
We are committed to embedding AI into how we work. Nearly 180,000 colleagues in 83 countries now have access to our proprietary AI tools and have used them almost 7 million times this year. These tools save hours each day by automating routine work, analyzing data and creating materials in minutes instead of hours.
Our Services and USPB teams are using AI to resolve client inquiries faster. In Wealth, advisors are gaining real-time insights that help them deliver more personalized advice. And AI-driven automated code reviews have exceeded 1 million so far this year and are dramatically improving our developers’ productivity. This innovation alone saves considerable time and creates around 100,000 hours of weekly capacity – that’s a very meaningful productivity uplift.
In September, we launched a pilot of Agentic AI for 5,000 colleagues. It allows complex, multi-step tasks to be completed with a single prompt. And the early results are very promising, and we’ll expand access to this in the months ahead.
OK, but that doesn’t mean they can invest zero and get all those benefits next year, too. He says they’re using it for fraud detection, risk management, and “to drive customer engagement”. Tell me which of those he’s not going to need to do next year - and pay for the computing time and fees that come along with the use.
It sounds to me like they spent $2B and got $2B worth of benefit. It also sounds to me as though almost none of that carries over to the following year: they will still have to make new presentations, answer client phone calls, be ever vigilant for fraud, and judge risks in a changing marketplace.
Of that I have no doubt, and I’m sure some of them will pay off. Many will not, especially at the beginning, before people learn what works and what doesn’t.
AWS laid off hundreds of people in July, claiming that AI was going to make things more efficient, blah blah blah. A few days ago AWS had a cascading failure, which may or may not have been linked to the layoffs (reports differ). Just sayin’
And don’t get me wrong, I’m all for improving productivity. Lord knows we tried to do that continuously when I was in the business world - so there are great hopes here. There are also some pitfalls.
I can’t help but wonder who gets the blame when the AI tells a client the wrong thing, or makes a mistake, or offers a terrible opinion and the client follows it, or misprints a quote, or … At least if it’s a guy at a desk you can fire him; what do you do when the AI goes wrong?