FinallyFoolin June 2023

June was a bit up, then a bit down. Overall, I ended up a tad higher than May. Not too bad considering the beating I took on SentinelOne at the beginning of the month!

Sold a little of my SentinelOne position and put most of that into CrowdStrike. Still debating what to do with the rest, but holding it through the immediate aftermath of the carnage that followed their report proved a smart decision.

The only other transaction I made was to open a tiny position, less than 1%, in Nvidia. I know, I know… valuation and the run-up. I did this mostly to keep it on my radar. I'm not likely to hold on to it very long.

Random thoughts:
My Tesla position has continued to grow and is now up to a 28% allocation. I don't want to sell any, but I also don't want it to be this outsized a position. So far, I'm not selling.

Snowflake - I’m more convinced than ever after Snowflake Summit of the long-term success of this position. The short term may be bumpy, but it’s another one that I feel doesn’t need to be micro-managed in the quarter-by-quarter game.

Databricks - Eager to invest in them as well. Like the hyperscalers, there will be multiple winners in the AI data game. I think the safest bets as winners are Snowflake & Databricks. If Nvidia is able to stay ahead in chip quality and/or their software takes off, they may very well be in that elite category as well. Tesla is the other one that I think will be an AI winner.


Just in case some are unaware, Nvidia has a moat called CUDA, which they describe as:

CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

One of the main advantages of GPUs is their ability to run lots of compute jobs simultaneously. This was initially important for rendering moving graphics (think of each pixel on the screen in a ray tracing algorithm getting its own thread), then became useful for crypto processing (e.g., mining Bitcoin), and is now mostly known for AI applications.

Basically, CUDA is a software toolkit that makes it easier to write programs that run efficiently on Nvidia’s GPUs. It only runs on Nvidia hardware, but what you write is portable to new processors that Nvidia comes out with - and has been since 2006. So, it’s really tempting to write your programs using it, but then you’re locked in unless you want to do some pretty major rewriting.
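To make that concrete, here’s roughly what a trivial CUDA program looks like - just a sketch to show the flavor, not production code. The `__global__` keyword and the `<<<...>>>` launch syntax are CUDA-specific extensions to C++, which is exactly why code like this only compiles for Nvidia hardware:

```cuda
#include <cstdio>

// Kernel: each GPU thread adds one element of the two input arrays.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // ~1 million elements
    size_t bytes = n * sizeof(float);

    // Unified memory, accessible from both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch n threads in blocks of 256 - the <<<...>>> syntax is CUDA-only.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A million additions run in parallel across thousands of GPU threads instead of one at a time on a CPU. Multiply this pattern across a company’s entire codebase and you can see why switching vendors means major rewriting.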

So, even if AMD were to come out with some great GPU chip for AI at a good price, it would be hard for them to steal market share from Nvidia, because so much existing code is already written against CUDA.

Now, there are alternatives, like the open OpenCL standard and AMD’s ROCm, and frameworks such as TensorFlow and PyTorch can abstract away the GPU backend, but none of them has anywhere near the same traction.