This is an attempt to structure my thinking when assessing AI-related stocks that potentially have the features we look for.
And since this board is for discussion of specific high-growth stocks, I do provide some (admittedly sketchy) thoughts on $NVDA towards the end of this post.
I’m trying in this post to steer clear of itemizing low-margin components of the AI ecosystem. Instead, I want to focus on factors that are relevant to the criteria we use here.
If I’ve missed any important factors relevant to our investment criteria, please chime in.
AI Programming languages and frameworks
Open Source: Not an investment-decision factor.
NOTE: Google developed TensorFlow, and it is designed in tandem with its proprietary TPUs, but TensorFlow itself is Open Source.
Proprietary
$NVDA has a proprietary AI programming framework called CUDA that is available only on, and is optimized for, Nvidia GPUs.
It works the other way too: Nvidia GPUs are optimized for CUDA.
Drives future GPU sales via vendor lock-in.
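As a rough illustration of what that lock-in looks like in practice, here is a minimal sketch (assuming a PyTorch build with CUDA support; purely illustrative, not Nvidia's or PyTorch's official guidance) showing how a mainstream framework exposes Nvidia hardware through a CUDA device that simply isn't there on non-Nvidia machines:

```python
import torch

# PyTorch's "cuda" backend is its bridge to Nvidia's proprietary CUDA stack;
# it reports True only when an Nvidia GPU plus CUDA drivers are present.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Running on:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No CUDA-capable (i.e., Nvidia) GPU found; falling back to CPU.")

# Any tensor or model placed on this device executes via CUDA kernels,
# which is where the vendor lock-in shows up for framework users.
x = torch.randn(1024, 1024, device=device)
y = x @ x  # dispatched to Nvidia's cuBLAS on a GPU, to a CPU BLAS library otherwise
print(y.shape)
```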
Proprietary AI Lifecycle Platforms and services, AI as a service
Nvidia AI Cloud
MS Azure AI Solutions
IBM Watson
Picks and Shovels plays that might fit our criteria
Compute: $NVDA, $GOOG (…they make their own TPUs running TensorFlow)
Database with AI-specific functionality: $MDB
Data Lake (…housing potential feedstock for AI algorithms): $SNOW, Databricks
B2C: direct interaction of Consumer with AI services
OpenAI: DALL-E, ChatGPT ($MSFT is an investor)
B2C: services powered by AI behind-the-scenes
$GOOG (search, language translation), $AMZN (Alexa), $NFLX (recommendations)
B2B: services powered by AI behind-the-scenes
$IOT, $AI
Some things to watch for / be aware of (random list):
“Data Analytics” vs. AI: Not the same thing. I suspect that lots of organizations that slap the term “AI” on their marketing are not really doing AI; they are actually only doing data analytics like old-school data warehousing, graph-organized databases, etc. Claims of use of AI should be chased down. For instance, is $IOT really using AI? Does it really require AI to tell a driver he’s braking too hard or is idling too long?
The “AI Effect” means that stuff that used to be considered AI is no longer really thought of as “AI”: optical character recognition for instance. Monitor your “AI” stocks to ensure they are not about to be victims of the “AI effect.” Will $IOT be a victim in the near future?
I suspect that lots of companies are claiming or implying their AI initiatives are further along than they actually are.
Developing and deploying new AI capabilities is MASSIVELY EXPENSIVE. Companies with huge amounts of cash have an asymmetric advantage in rolling out AI products and services.
Potential for sudden and shocking AI developments: the pace at which new AI capabilities become available is accelerating, potentially at an alarming rate. For instance, $NVDA is currently using its own GPUs and software to obtain solutions (…to incredibly difficult manufacturing problems) necessary for manufacturing its next generation of…GPUs!
AI services trained on publicly available data (ChatGPT) or easily purchasable data ($UPST) will become commodities subject to a low-margin profit model.
Companies with unique and proprietary data have the potential to leverage it by using AI to create additional services/revenue from the data…like $IOT is doing right now.
The promise of the value-add from AI could provide even more motivation to digitize business processes. In other words, AI might accelerate the current digitization trend.
References:
The “History” and “Goals” sections provide concise information about the SHEER SCOPE of the challenges yet to be solved in the realm of AI: Artificial intelligence - Wikipedia
I think you may have missed a couple of really big ones. Tesla probably has the largest data lake with all of the telemetry of all of its cars plus an unknown amount of video data provided by its auto pilot systems.
Additionally, telecoms have all of the telemetry data from all of the cell phones: location, signal strength, and the types and sizes of data streams. I have significant doubts about the telecoms’ ability to monetize this. However, others that can lease, buy, or otherwise tap into this data lake could make a huge amount of money.
Also, images and video are huge for training AI and useful as raw material for AI to iterate off of, so photo-sharing sites that have been backwaters may fare well. The largest repository of video and images I know of is Facebook, with more pouring in every day. Facebook may be an actual treasure trove of data for AI.
At some point, I think I will just have to close my eyes and poke and hope with a position in Tesla and Nvidia. AI is simply moving too fast to understand.
Your most excellent posts left me with some questions. As I am not feeling well, I refused to get out of my easy chair and decided to conduct an interview with ChatGPT via my iPhone mini, using voice-to-text and what I think is GPT-3.5. Note: GPT-3.5 and GPT-4 are not natively tied to the internet, and the information in the models is a couple of years old. Had I gotten up and conducted this interview at the computer, with Bing in the Edge browser, I would have been using GPT-4 and it would have been connected to the internet.
I am posting the interview here and recommend that anyone remotely interested log into Bing on Edge (you need a free Microsoft account) and conduct a similar interview.
———————————————-
What are the resources that AI uses?
ChatGPT
Artificial Intelligence (AI) uses a variety of resources to function effectively. Here are some of the key resources used by AI:
Data: AI systems require vast amounts of data to learn from. This data can come in various forms, such as images, text, audio, and video. The more data an AI system has access to, the more accurate and effective it can be.
Algorithms: AI systems rely on complex algorithms to process and analyze data. These algorithms are designed to identify patterns, make predictions, and make decisions based on the data they receive.
Computing power: AI requires significant computing power to process and analyze data quickly. This computing power can come from various sources, such as cloud-based servers, high-performance computers, and specialized hardware like GPUs and TPUs.
Human expertise: AI systems often require human expertise to train and optimize them. This expertise can come in the form of data scientists, machine learning engineers, and other AI specialists who can fine-tune the algorithms and models used by the AI system.
Sensors: In some cases, AI systems rely on sensors to collect data from the environment. These sensors can include cameras, microphones, and other devices that capture information about the world around them.
Internet of Things (IoT) devices: IoT devices can be used to gather data and feed it into AI systems. These devices can include smart sensors, wearables, and other connected devices that provide valuable insights into user behavior and preferences.
Overall, the resources used by AI are diverse and depend on the specific use case and application.
Imagine that AI use grows tenfold in the next year. What will be the largest increase in resource consumption?
ChatGPT
If the use of AI grows tenfold in the next year, the largest increase in resource consumption is likely to be in computing power. AI systems require a significant amount of computing power to process and analyze vast amounts of data quickly and accurately. As the volume of data and complexity of algorithms used in AI systems increases, so does the need for more powerful hardware to support them.
The use of specialized hardware, such as GPUs and TPUs, is becoming increasingly important in AI applications. These hardware accelerators are specifically designed to speed up the processing of complex machine learning algorithms and models, which are at the core of many AI applications.
Additionally, the increasing demand for cloud-based AI services and the growth of edge computing devices could further drive the need for more computing resources. As AI becomes more ubiquitous and integrated into various applications and devices, the demand for computing power is likely to continue to rise.
Other resources such as data, algorithms, human expertise, and sensors may also experience an increase in demand, but computing power is likely to see the largest increase in consumption if the use of AI grows tenfold in the next year.
All of this compute takes electricity. Will there be a significant change to electrical usage, or, in the overall scheme of things, will it even be noticeable?
ChatGPT
The increased use of AI is likely to have a noticeable impact on electrical usage. AI systems require a significant amount of computing power, which in turn requires a substantial amount of electricity. According to a study by researchers at the University of Massachusetts, the energy consumption of large AI models can exceed that of an average American household.
As AI continues to grow and become more widespread, the demand for computing resources will increase, and with it, the demand for electricity. This could lead to a significant increase in electricity usage, particularly in data centers that host cloud-based AI services and in locations where AI is used extensively.
However, there are also efforts underway to make AI more energy-efficient and sustainable. For example, researchers are exploring the use of specialized hardware and software techniques that can reduce the energy consumption of AI systems while maintaining their accuracy and performance. Additionally, the use of renewable energy sources to power data centers and other computing facilities could help mitigate the environmental impact of increased electricity usage.
Overall, while the increased use of AI is likely to lead to a noticeable increase in electricity usage, there are also efforts underway to make AI more sustainable and energy-efficient, which could help mitigate its environmental impact.
OpenAI has said that the largest use of ChatGPT was creating spam for Viagra sales. That is not earth-changing. What avenues of research might AI radically change?
ChatGPT
While it is true that the largest use of the GPT model has been for tasks such as generating spam and fake news, there are many other avenues of research where AI has the potential to radically change the world. Here are some examples:
Healthcare: AI has the potential to revolutionize healthcare by improving diagnostics, drug discovery, and personalized medicine. For example, AI can help doctors to analyze medical images and identify early signs of diseases such as cancer, or to analyze genetic data to identify patients who are at risk of certain diseases and develop targeted treatments.
Climate change: AI can help to address climate change by optimizing energy usage, predicting and mitigating the effects of natural disasters, and helping to monitor and preserve natural resources. For example, AI can be used to optimize the placement and usage of renewable energy sources, such as wind turbines and solar panels, or to predict weather patterns and optimize resource management in response.
Transportation: AI can help to improve transportation systems by optimizing traffic flows, reducing congestion, and improving safety. For example, AI can be used to optimize traffic light patterns, predict traffic patterns and suggest alternative routes to drivers, or to monitor vehicle performance and predict maintenance needs.
Education: AI has the potential to transform education by providing personalized learning experiences for students and improving educational outcomes. For example, AI can help teachers to identify student learning gaps and develop targeted interventions to address them, or to provide students with personalized feedback on their work.
Overall, while AI has been used for many trivial and potentially harmful tasks, there are many other areas where it has the potential to make a significant positive impact on the world. As research into AI continues, we can expect to see many more groundbreaking applications of this technology in the future.
It seems you were talking to me in boilerplate. Might AI be able to push the boundaries of the classic three-body problem, such that we could maybe attack anti-aging and health-span expansion, and maybe do better predictions of macroeconomic activity?
ChatGPT
Apologies if my previous response seemed like boilerplate. To answer your question, AI has the potential to push the boundaries of many scientific and economic fields, including those related to anti-aging, health span expansion, and macroeconomic predictions.
Regarding the classic three-body problem, which involves predicting the motion of three celestial bodies under gravitational attraction, AI has already been used to make significant progress in solving this problem. For example, researchers have used deep learning algorithms to predict the orbits of exoplanets, which involves solving the three-body problem for systems with multiple planets and stars.
In the field of anti-aging and health span expansion, AI can help to analyze large datasets of genetic and molecular data to identify potential targets for interventions that can extend healthy lifespan. For example, AI can be used to identify genetic mutations or biomarkers that are associated with increased risk of age-related diseases, or to analyze the effects of various interventions on cellular and molecular aging processes.
In macroeconomics, AI can help to improve predictions of economic activity by analyzing large datasets of economic indicators and financial data. For example, AI can be used to predict stock prices, identify trends in consumer behavior, or predict the effects of policy changes on the economy.
Overall, AI has the potential to make significant contributions to many scientific and economic fields, including those related to anti-aging, health span expansion, and macroeconomic predictions. However, it is important to note that AI is still a developing technology, and there are many challenges that must be addressed to fully realize its potential in these areas.
I do note that AI is changing fast. How fast is it changing? We have noticed that open-source AI is iterating faster than closed-source AI. What speed can we expect these open-source AIs to iterate at?
ChatGPT
AI is indeed changing rapidly, and open source AI is generally iterating faster than closed source AI. This is because open source AI allows for collaboration among researchers and developers from around the world, which can lead to faster innovation and development.
The speed of iteration for open source AI will depend on various factors, including the availability of computing resources, the size and diversity of the community of contributors, and the complexity of the problems being tackled. However, in general, we can expect open source AI to continue iterating at a rapid pace, driven by ongoing advances in hardware, algorithms, and data availability.
One notable example of the rapid pace of open source AI development is the progress made in the field of natural language processing (NLP) over the past few years. Open source projects such as Google’s BERT and OpenAI’s GPT models have achieved groundbreaking results in NLP tasks such as language translation and text summarization, and new models and techniques are being developed and released at an increasingly rapid pace.
Overall, while the speed of iteration for open source AI will depend on various factors, we can expect it to continue to accelerate, driven by ongoing advances in technology and the collaborative efforts of researchers and developers around the world.
Can you provide some sort of visual graph of the rate of change of AI over the past 24 months, with a projection of the rate of change over the next 24 months?
ChatGPT
As an AI language model, I do not have the ability to create visual graphs. However, I can provide you with some information and insights on the rate of change of AI over the past 24 months and a projection of the rate of change over the next 24 months.
Over the past 24 months, AI has continued to rapidly evolve and improve, driven by advances in deep learning algorithms, increased availability of data, and improvements in computing power. Some notable developments over the past 24 months include:
Advances in natural language processing, including the development of large-scale language models such as GPT-3 and T5.
Breakthroughs in computer vision, including improved object recognition and image generation capabilities.
Developments in reinforcement learning, with new techniques being developed for training agents in complex environments.
Advancements in hardware, including the development of specialized chips for AI workloads, such as Google’s TPU and Nvidia’s GPUs.
Looking ahead to the next 24 months, we can expect AI to continue to evolve and improve at a rapid pace. Some of the key areas of focus are likely to include:
Advances in generative models, including improved language and image generation capabilities.
Continued development of reinforcement learning techniques for complex decision-making tasks.
Greater focus on ethical and explainable AI, as concerns over the potential risks of AI continue to grow.
Continued development of specialized hardware for AI workloads, with new chips and architectures being developed to accelerate AI training and inference.
Overall, the rate of change of AI is likely to continue accelerating over the next 24 months, as advances in algorithms, data availability, and hardware continue to drive innovation and progress in the field.
This is great information, but as a human, I need a metric that I can look at and measure. Is there a metric that we can look at, and then provide benchmarks for on a monthly basis?
ChatGPT
There are various metrics that can be used to measure the progress and development of AI, but it’s important to note that the field of AI is complex and multifaceted, and no single metric can capture the full range of advancements and challenges in the field.
Some common metrics used to track progress in AI include:
Accuracy: This measures the percentage of correct predictions made by an AI model on a given task. For example, in natural language processing, accuracy is often measured by the percentage of correct answers given by a language model on a set of questions.
Speed: This measures the time it takes an AI system to perform a given task. For example, in computer vision, speed is often measured by the number of frames per second an object detection algorithm can process.
Efficiency: This measures the amount of computing power or resources required for an AI system to perform a given task. For example, in training deep learning models, efficiency is often measured by the number of floating-point operations per second (FLOPS) used.
Robustness: This measures how well an AI system performs under different conditions or in the presence of noise or errors. For example, in object recognition, robustness is often measured by the percentage of correct predictions made by an algorithm on a set of images with varying levels of noise.
While these metrics can provide a useful benchmark for tracking progress in AI, it’s important to note that they are not necessarily comprehensive or definitive, and different metrics may be more appropriate for different types of AI systems and applications. Additionally, progress in AI can be affected by many factors, including advancements in hardware, algorithms, and data availability, as well as changing social and economic conditions. Therefore, it can be challenging to provide accurate monthly benchmarks for the progress of AI.
“Tesla probably has the largest data lake with all of the telemetry of all of its cars plus an unknown amount of video data provided by its auto pilot systems.”
One thing I need to look more into is how important the quantity of such data is vs. the quality of the data and the quality of the AI algorithm. Can the training process be optimized so that it can be done with less data and less compute?
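One technique I want to dig into here is transfer learning: reusing a model someone else already trained on a huge dataset and retraining only a small piece of it on your own, much smaller data. A rough sketch of the idea (assuming a recent torchvision; the 2-class setup and the data/train image folder are placeholders, purely for illustration):

```python
import torch
import torch.nn as nn
from torchvision import models, datasets, transforms
from torch.utils.data import DataLoader

# Start from a network pretrained on ImageNet so we don't need millions of images ourselves.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the expensive-to-train backbone

# Replace and train only a small new head (2 classes here; adjust to your data).
model.fc = nn.Linear(model.fc.in_features, 2)

# Placeholder dataset path -- swap in whatever small labeled image set you actually have.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
loader = DataLoader(datasets.ImageFolder("data/train", transform=tfm),
                    batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:  # a pass or two is often enough for a usable classifier
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```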
Also, I think that to get effective self-driving, AI training will require additional datasets besides just telemetry and video, and I’m skeptical that AI technology is even capable of self-driving at this point, pending further breakthroughs.
“Also, the largest repository of video and images I know of is Facebook with more pouring in everyday. Facebook may be an actual treasure of data for AI.”
Good point!
One thing I forgot to add to the list is the need to transform raw data into a format that can be ingested by the AI training algorithm…I’m not sure whether GPUs or CPUs are used in that stage.
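To make that preprocessing step concrete, here is a minimal sketch of turning raw text into the numeric tensors a training loop actually consumes. The Hugging Face transformers library and the bert-base-uncased tokenizer are just illustrative choices; my understanding is that this kind of tokenization typically runs on the CPU, with GPUs taking over once the data is in tensor form.

```python
import torch
from transformers import AutoTokenizer

# Raw data (here, strings) can't be fed to a neural network directly;
# it first has to be turned into numbers.
raw_texts = [
    "Nvidia GPUs are optimized for CUDA.",
    "Data quality matters as much as data quantity.",
]

# Tokenization is CPU-side preprocessing: map text to integer IDs plus attention masks.
# "bert-base-uncased" is just a publicly available example tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(raw_texts, padding=True, truncation=True, return_tensors="pt")

print(batch["input_ids"].shape)    # a [batch, sequence_length] integer tensor
print(batch["input_ids"][0][:8])   # the first few token IDs of the first sentence

# Only now would the batch be moved to a GPU (if one is present) for the heavy math.
if torch.cuda.is_available():
    batch = {k: v.to("cuda") for k, v in batch.items()}
```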