‘Future Proofing’, for me

I stated in my April port review, “although not at all an immediate concern, I see Datadog as the most likely among what I own to be facing more competition, as ‘as good’ solutions (built on top of Snowflake, utilizing the new highly efficient LLMs as a business model) are now inevitable IMO.” I’ve seen a lot of changes in portfolios here. With all due respect, I haven’t seen much about why, other than personal differences in risk appetite and the love of the hunt for where the puck is going. I’m betting a lot on the timing being unknowable for what will shake out, given the quick commoditization of intelligence.

Snowflake 32%

Cloudflare 26%

Tesla 32%

Datadog 8%

Intellia 1.5%

Given the remarks below by Olivier Pomel, I’m not alone in the above assumption. The following is largely why I consolidated further into my highest-conviction positions (future proofing). I don’t see any industry sector not being significantly altered. Therefore, I don’t see diversification as an alternative to this consolidation.

(Bolding and parenthetical comments below are mine.)

Olivier Pomel, CC, Opening Remarks:
“AI, …long term, will significantly expand our opportunity in observability and beyond. We think massive improvements in developer productivity will allow individuals to write more applications and to do so faster than ever before. And as we surpass productivity increases, we think this will further shift value from writing code to observing, managing, fixing and securing live applications.

In the short to medium term, we believe the rise of AI will increase the demand for compute and storage to train and run models, but it will also increase the value of proprietary data and further drive digital transformation and cloud migration as these are all prerequisites for adoption. We also do expect quite a bit of noise in the market, as the technology stack is progressing and changing very quickly.

Now, from a product perspective, we believe that we at Datadog are uniquely positioned to deliver value to our customers in this new world (sarcastic?). First, we built Datadog from day one as a pure SaaS business precisely to be able to put all that to work at full scale and to train models to solve our customers’ problems.

Second, our large surface of contact with our customers gives us the insertion points to make AI relevant. This is where we see the value of having a very broad customer base and being designed to be used every day by every single engineer.

And third, we serve today some of the largest builders and consumers of AI services and are quickly adapting to their needs in a rapidly changing field.

Olivier Pomel
QnA
First, I’d say it’s – we can all agree that it’s a fascinating time to be alive to see all this rapid innovation in the world of AI. The first thing I’d say is that it’s still fairly early in terms of what the market is going to look like in the AI world. Right now, there’s one particular thing that’s been – that used to be very hard, which was building conversational models and chatbots and things like that, which almost overnight became almost a commodity, basically. Anybody can incorporate it in any application. It’s an API call away. And there are even a number of different options, commercial and open source, you can use today. So that just happened. That plan (to incorporate AI into Apps) has massive traction. You see it everywhere. But it also is opening the gate to many, many more – I would say more customized, deeper applications of AI that may be built by a few vendors or may be built by a large number of companies instead. It’s not quite clear yet.

What we see, on our end, is that it’s going to drive more compute, it’s going to drive more value in the data that is being gathered by companies, it’s going to drive digital transformation, it’s going to drive cloud migration because, again, you can’t actually adopt AI unless you have the data. You can’t actually adopt it without having a modern architecture and an application you can scale up and down and infrastructure you can quickly provision and deprovision. To capture your data, you need to be digitally transformed, so you have data of all your customer interactions and everything that is proper to your business. So in the meantime, we see that as a very clear accelerant to our business. Maybe with a little bit of noise …out of 200 new things, there’s probably only 10 or 20 that will matter to us all now. But it’s hard to know which ones it is today.

So the short answer is: midterm, a lot more of the current workload – types of workloads we see, maybe with different types of technologies. Longer term, I think we can all glimpse a future where productivity for everybody, including software engineers, increases dramatically. And the way we see that as a business is – our job is to help our customers absorb the complexity of the applications they’ve built, so they can understand them, modify them, run them, secure them. And we think that the more productivity there is, the more people can write in any amount of time, the less they understand the software they produce, and the more data there is, the more value it sends our way. So this is what makes us very confident in the long term.

So the way we imagine the future is companies are going to deliver a lot more functionality to their users a lot faster, they’re going to solve a lot more problems in software, but there won’t be as tight an understanding from the engineering team as to what it is they build and how they built it and what might break and what might be the corner cases that don’t work and things like that. And that’s consistent with what we can see people building with a Copilot today and things like that. These are very, very good for solving a small problem, but they don’t help you build consistent [indiscernible] or they don’t help you build software platforms like that. That stuff is still out of reach. Again, the way we see the future is we’ll see customers do a lot more and they will still need help to catch up with everything they’re doing, and we’ll be the ones to do that for them.

(Analyst:) …any partial offsets from organizations having greater intelligence and automation at their fingertips? Are there certain workloads that may no longer need to be monitored by an observability platform?
Olivier Pomel-
In the long-term future, everything is possible. But I don’t think – today, that’s not what we hear or see.

In terms of what customers do today, it’s hard to project the current adoption of AI into what it might look like in the future because, right now, AI is mostly used as an API call for most companies, but we don’t think that’s necessarily going to be the case one to five years from now.

(For the last A of the QnA)
… Obviously, the expectations for some of those (our) products are changing over time too. You know that everyone can see what can be done with AI. We really expect to see a lot more of that. So, I guess we’ll share more on that in the near future.

Me here:
I know that the promise of AI has been blaring out of the mouths of many a Futurist for literally decades. But might that have led us to be complacent now that a paradigm shift does appear to be staring us in the face?

If Olivier Pomel takes that much time in his CC to delineate Datadog’s changing role going forward because of advances in Large Language Models, which companies are going to benefit most, and which companies will no longer be relevant?

Best,

Jason

40 Likes

I may not be understanding what you’re saying. Are you saying the only companies you have conviction in are Tesla and Snowflake and Cloudflare, and that’s because you believe they’re leading the way to the future?

If so my push back would be that I think we all have to realize that we can only know so much about how our companies (or any companies) will perform, or how the future will look.

I have a very different portfolio with only about 2% in those companies, and the majority of my money in:

BILL
Zscaler
Crowdstrike
Datadog
Samsara
Global-e
And a few tiny others

Now you may be right and I may be wrong. But more likely neither of us is “right” or has to be. It is very likely that many of these companies and many others will do well. Maybe they will benefit from AI, or other big innovations.

I guess what I’m trying to say is, I don’t think we can be sure which companies will lead the way to the future. I’m saying I believe conviction can be risky if you trust it too much and concentrate too much.

That’s just my belief. I don’t think I’m “right enough” about any company for a 30% position, let alone three of them. It might work out well, my point is just that I think it’s risky.

Diworsification is a real thing. I’m not saying you should have 15-20 positions, or that you should force conviction into companies where you don’t really believe it. What I’m saying is I think you should have less conviction in these three. Because we just don’t ever know.

But that’s just what I think, so take it for what it’s worth.

Bear

PS - I hope this isn’t overstepping. I’m just trying to create an interesting dialogue for your consideration — and for anyone else who thinks similarly and concentrates their portfolio similarly.

40 Likes

In some respects, I think you answered your own question. As I wrote recently:

I think those “differences in risk appetite and the love of the hunt” have always been here. For the longest time, however, those differences still settled on the same companies as the cream of the crop. That’s why the top handful of holdings in most recaps were so similar. That’s also why I was always more interested in what others were doing with the back halves of their portfolios. To me that’s where the differences in flavor showed up and I learned the most about alternative ways of thinking.

Every company is some combo of growth, execution, cash flow, and profitability metrics. For a long, long stretch the hypergrowth metric ruled all, and profitably so. With that dynamic gone, we’re starting to more clearly see our differences in style, even if the board’s focus remains distinctly growth. For example, you, @GauchoRico, and @FinallyFoolin are all heavy SNOW. Me (0%) and @PaulWBryant (0.9%), not so much. Likewise, Bear, Gaucho, and I are heavy BILL. You, Saul, and FinallyFoolin are at 0%.

Who’s right? Well, it doesn’t matter because it’s not a competition. It’s about sharing the best info you can and standing back as everyone makes their own decisions. I personally see that as a good thing (and hope that all makes sense).

50 Likes

Hi Bear,

I appreciate your reply here. I feel your genuine concern. Thanks.

I, personally, have come to the temporary conclusion that the only companies I’m certain will thrive in the current environment are not software companies that merely say they are ‘platforms’, but companies that are genuine ‘cloud platforms for what is the next paradigm in software solutions’.


Prior to DDOG ER, over the last couple months, I’d posted at least a couple times on my reasoning for believing that many of the current business models are about to be disrupted by continued iterations of Large Language Models. I haven’t seen much here on this by anyone else. (Maybe this isn’t the place for it, :thinking:)

Although the Hyperscalers do have the main characteristic I’m looking for at this time, I don’t believe they will show the return I’m looking for in the next 1-3 years. Nvidia is a better possibility, perhaps.

Because I maintain a long-term mindset when investing (more than 3 years), I have difficulty finding more companies like Snowflake, Cloudflare, or Tesla.

Olivier Pomel-
“In terms of what customers do today, it’s hard to project the current adoption of AI into what it might look like in the future because, right now, AI is mostly used as an API call for most companies, but we don’t think that’s necessarily going to be the case one to five years from now.”

I am not saying that I am at all better than anyone else at picking winners. I am sure that my criteria right now are severely limiting my choices.

Best

Jason

19 Likes

Forgive me, I thought of one more thing to say here. If we put ourselves in 2020 or 2021, I believe we very likely might say that about Datadog and Zscaler and Crowdstrike.

If that is no longer true, it might also not be true of Tesla and Snowflake and Cloudflare a couple years from now.

If that is still true, then the 3 companies you have conviction in aren’t the only ones who will thrive. Worse, they might not even be among the ones that will thrive – at least as investments! (Datadog and Zscaler and Crowdstrike haven’t thrived as investments in the last 2 years)

Anyway, I’ll stop there. Just trying to inject some doubt. I think doubt is a powerful ally for us as investors.

Bear

27 Likes

My level of doubt about my being correct here is enormous. I appreciate anyone taking me seriously enough to comment.

I sold my 8% position in Datadog today.

I was holding the money in Datadog partly because I might be wrong (and partly because, even if I’m right that adding LLMs to apps is as disruptive as I believe, it likely won’t occur as fast as I believe).

Now I’m holding 8% in cash, though historically I never hold more than 1%. I’m doing this because, as I wrote recently, I simply don’t know how this is going to shake out in 2023.

I do believe there’s going to be a mad rush to the cloud now, by every enterprise in the world, in order to benefit from the LLMs. There is no doubt in my mind that every business will massively increase its productivity when it is able to integrate one or more LLMs into its workflows. (See what happens when OpenAI’s LLM is integrated into Apple’s Siri here: https://youtu.be/9_MbW9FK1fA . If Apple’s iOS can be taken over and enriched so easily, what’s possible for any enterprise that has moved its ‘operating system’ to the cloud? For a more in-depth look at where this is quickly going from here, and why Nvidia may not be the best investment in this area: https://youtu.be/V6WL4X6pmCY .)

Snowflake may not benefit this quarter, so I’ll likely add the money from Datadog to Snowflake, at what I believe will then be a further discount. If there is a demonstrable benefit this quarter, revealed in the Snowflake ER, then I’ll add the cash to MongoDB, where I think we’ll see another leg up for the same reason.

Best,

Jason

8 Likes

I can only say that every time I have had a position over 30% (about four times in recent years) because I was sooo very confident in it, it has not turned out well. (Think Upstart, or Crowdstrike before it crashed.) They always did less well than the rest of my portfolio.
Best
Saul

42 Likes

I didn’t pay attention to this thread earlier because the phrase “future proofing” has a defensive connotation to me. I don’t want to own companies that are reacting to a probable future, I want companies that are actively shaping that future.

When I look at @WillO2028’s company list, however, I see companies that are mostly more active than reactive, so I find myself agreeing with most of Jason’s OP. But I think that this thread has vectored off of the original “how to bet on AI” intent into a high conviction/concentration debate and then a portfolio distribution discussion that is a bit less interesting to me.

There’s no doubt that AI is going to disrupt a number of industries and businesses. But Jason’s later comment:

even if I’m right that adding LLMs to apps is as disruptive as I believe, it likely won’t occur as fast as I believe

is, for me, the hardest part of deciding where to put money. 15 or so years ago, it was obvious to me that solar was “going to be big.” However, money put into FSLR then is still underwater, or barely breaking even, today. It’s not just a question of which companies can benefit from AI, it’s which companies will benefit, and then the hardest part - what’s the timetable for that? One only has to look at UPST to see a company that was perhaps too early in its adoption of AI, and found that the macro environment for its business remained the predominant factor in the company’s business results, AI notwithstanding.

Recognizing AI’s potential impact isn’t anything unique to us - just look at NVDA’s rise this year as AI has become the new buzzword. Nvidia is in the catbird seat for AI: Proven hardware (Tesla has been working for years to replace Nvidia GPUs in its AI cloud and still isn’t there yet), good software with vendor lock-in (CUDA only works on Nvidia chips), and solid management through tough times with lots of adjacent/vertical market optionality. Hard to add more NVDA at this price level, though that may indeed turn out well.

I further agree that SNOW, TSLA, and NET are not only positioned well for an AI-driven future, both near and long term; they have management teams that are super-savvy and are actively piloting their companies to the profitable forefront of AI disruption. So, I don’t share Bear’s concern that the future is mostly unknowable. I can’t imagine a future where Snowflake isn’t one of the top databases of choice for organizations, nor one where Cloudflare isn’t one of the top providers of CDN and cloud security services.

But, the timing of that eventual success is harder for me (and I dare anyone) to predict. That’s partly why The Motley Fool takes an LTBH approach - it’s hard enough predicting who will be successful, but getting the timing of success, not to mention Mr. Market recognition of that success, is harder.

At any rate, I’m not looking for companies that will benefit from AI as much as I’m looking for companies that are leading the way with AI. But, of course, not necessarily for all of my portfolio, even as I let it affect my portfolio concentrations.

50 Likes

I am looking not for companies that are leading the way, nor for companies that may benefit. I am looking for companies that have the talent, market position and leadership to MAKE MONEY off of the rapid changes that AI has wrought.

It seems to me that Bill, Monday, Datadog, Cloudflare, Sentinel One and Snowflake all have the ability to do this. While I have seen things slow down, I have not seen management make huge errors in leadership. I have not seen that the companies no longer have the talent, nor am I convinced that AI has completely disrupted their business models.

Now I will say this: some, maybe all, of the above companies will be disrupted; however, I am clueless as to which ones.

Maybe someone can walk down that line of thought? Which of the above companies is likely to be disrupted? I think that is a more concrete problem than “Who is going to profit the most?”

Also note, I am starting to be in “buy” mode and have almost 30 percent in cash.

I truly believe that at least some of that cash should go into the above-listed companies as well as Nvidia and Tesla. I have seen a “don’t buy, growing too slow” case for Nvidia. However, it might be useful to have a valuation vs. growth vs. risk case laid out.

Tesla is an unknown, but I think it has some similar market advantages to Nvidia. Since Tesla is not welcome here, and probably too complex to analyze anyway, Nvidia might be a better review.

I have not watched the entire keynote of the last Nvidia day, or whatever it is called, but I did see an excerpt where it was shown that Nvidia was using AI to solve engineering and manufacturing problems that cannot be solved by humans.

I have watched enough of the Sandy Munro pro Tesla videos to come to the conclusion that both companies are in a position to protect and expand margins.

That doesn’t mean that much or any money should be diverted from the classic “Saul” stocks. Thus my hesitation and my large cash position.

Cheers
Qazulight (Note: Also holding a small < 5 percent position in TDMX)

11 Likes

We all are. Don’t believe anybody who says differently.

What we can do, instead of trying to predict the future, is come to reasonable conclusions about who is winning now, and change when that changes.

It’s harder done than said. But it’s easier than predicting the future.

That’s why I waded into this thread. I don’t believe future-proofing is possible. So make the most of the present!

Bear

42 Likes

Going back to Jason’s quoting of DDOG’s CEO, Olivier Pomel, I’m not sure how disappointed I should be. Maybe I’m not the kind of investor he’s targeting.

For instance:

That’s not the long term view I want to hear.

First, he’s saying AI will make DataDog’s customers more productive, so there will be more apps to monitor with DataDog. True, but that’s just an expansion of the TAM that affects all companies, including DataDog’s competitors. It’s not a strategic advantage for DataDog.

Second, and more concerning to me, Pomel’s saying that increased developer productivity will lead to more “fixing” - Really? This is exactly backwards in my view. AI isn’t about writing more code, it’s about writing code more efficiently - and that includes the debugging side. AI will help developers write code that doesn’t need as much debugging and will help that code be more efficient from the get-go, which means LESS need for DataDog!
Today - strike that - Yesterday, smart developers didn’t set out to optimize everything they wrote. They’d write it to function properly and to be clear for future maintenance, and then they’d try it out in sandboxes and such to see where the bottlenecks actually were, and then optimize those. That way you wouldn’t spend human developer time optimizing code that didn’t matter. But, today, you can let the AI optimize most of your code since that costs so little. If anything, that should mean less need for monitoring/fixing since the AI has done a lot of that already.

But, third, and more important to me for the “long term,” is that DataDog itself needs to incorporate AI throughout its products - but Pomel doesn’t talk much about that. Here’s one reference:

When he says “train models,” is he talking about NN ML models? I would hope so, but he isn’t clear on this, the most important long term aspect for DataDog’s continuing success. DataDog will need more than just charts and graphs and programmatic models that identify compute, memory, or network bottlenecks; it’s going to need AI capabilities that understand where those bottlenecks are and what in the code or hosting environment needs to change.

Perhaps even scarier are his shorter term views:

Maybe I’m wrong - and if so someone please correct me - but this strikes me as silly. It takes a bunch of work to instrument your code with DataDog’s APIs so that it can be monitored. Potential DataDog customers have to weigh the cost of this instrumentation work against the benefits to be derived from DataDog’s products. So in terms of attracting future customers, DataDog has to show a higher ROI on the instrumentation work than its competitors, just like today. And I believe that ROI will more and more depend on AI monitoring capabilities. That’s something Pomel isn’t talking about - or did I miss it?
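For a sense of what that instrumentation work looks like in practice, here is a minimal sketch using Datadog’s open-source ddtrace Python library; the service, resource, and function names below are made up for illustration:

from ddtrace import tracer

# Wrapping a function creates an APM span in Datadog each time it runs.
@tracer.wrap(service="checkout-service", resource="process_order")
def process_order(order_id):
    # ... the business logic being monitored ...
    return f"processed {order_id}"

# A span can also be opened explicitly around any block of code.
with tracer.trace("payments.charge_card"):
    pass  # call out to the payment provider here

Multiply that by every service and code path worth watching, and the ROI question above becomes concrete.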

If DataDog doesn’t start leading with AI interpretation, the AI service providers will find other monitoring services that do. Heck, they might even dogfood the data into their own AI!

Again, Pomel talks about AI increasing the market for observability, but not how DataDog is itself adopting AI to do better, more efficient observability.

Now I’m really scared. Pomel doesn’t appear to understand that AI tools like Copilot don’t just help developers write more code; they help developers write better code. Fewer bugs. More efficient. Less need for the in-situ monitoring that is DataDog’s bread and butter.

That he keeps saying this in different words, for both the short and long term, now has me thinking that DDOG will not be a long-term hold for me. I’m going to start looking at perhaps selling my position in favor of a company whose management actually gets AI. My only concern is timing. Am I too early on the AI observability capability cycle? It sure seems that Pomel is setting DataDog up to be disrupted, but will that be next quarter, next year, or 5 years out?

18 Likes

I think you are missing something.

We are already seeing people like me. I know just enough to know how to get to GitHub, download Python, and get a compiler for C. Not much more.

Yet this class of people, who do not even rise to the rank of novice hackers, are using prompts generated by AutoGPT (ChatGPT bridged to the internet with some Python code), and then putting those prompts into Midjourney via a Discord interface to produce art. This may sound like unimportant child’s play, but to content creators, it is a huge deal. With maybe just an hour’s work, you could produce a nice piece of artwork that properly expresses your opinion of my post. Think giant hands clapping or a nasty tongue blowing a huge raspberry.

What is more, a mid-level policy analyst could use AutoGPT to produce a Python script to scrape internal emails and place the data into a comma-delimited format and then into an Excel spreadsheet, where he could create another Visual Basic script to format and analyze it.

He could do this with just the information he gained from “Python for Informatics” and some Visual Basic training off of the Microsoft training website.
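To illustrate how little code that actually takes, here is a minimal sketch using only Python’s standard library; it assumes the emails have already been exported to a local mbox file, and the file names are made up:

import csv
import mailbox

# Read an exported mailbox and flatten a few header fields into rows.
rows = []
for message in mailbox.mbox("exported_mail.mbox"):  # hypothetical export file
    rows.append([message["from"], message["to"], message["subject"], message["date"]])

# Write the rows out as comma-delimited data that Excel can open directly.
with open("mail_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["from", "to", "subject", "date"])
    writer.writerows(rows)

From there, the Visual Basic step described above takes over inside Excel.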

Now security has a problem. Hacked-together tools are running automated routines, and these tools are not optimized or even screened for security holes at all.

In this case Datadog becomes more important, much more, not less.

Cheers
Qazulight

16 Likes

Coder here, and maybe I’m cynical after a few decades, but I just don’t see this happening. Yes, AI may help some developers write code more efficiently, but somebody still has to describe to the AI what they want, and they had better be thorough. I won’t get into details, but my job centres around coding to implement policy decisions in a financial environment. This can be really tricky, involving legal teams diving into the implications. These teams do not always agree, and even once agreement is reached, the devil, as they say, is in the details. The kind of tool we are calling “AI” can’t make these decisions, it can’t be adequately trained on prior policy, it really has no valuable input at all. Never mind that new policy has no precedent for it to be able to make decisions, it would have to understand the full context of every data field in play, and not just how it relates to the current process, but also any other processes which use those fields. You’d have to spell out which database fields to use under which conditions, what formulas to apply, etc. Until you have an AI that is actually involved in the legislative process (decades away I think, if ever), it would be a lot faster to just type it yourself.

There is a lot of hype about AI and what it can do for coders, but I think 90% of us (maybe more) aren’t involved in flashy new stuff that can benefit from an AI. Where it can be applied, it will be a handy new tool, but at best it will just free up resources to build more things. We’re already on this road. Almost every popular language has a healthy and robust 3rd party market, with apps and tools and add-ons that make everybody more efficient, this is just the next one (where applicable).

But despite all those boosts over the years, somehow we are never done! New tools don’t mean you reach some mythical end goal faster, it means someone will have a bright idea that is now within the scope of your capabilities and timeline. And that will come with bugs, as a matter of course. There might even be more bugs…certainly at first until we learn how to use AI appropriately.

So I don’t see the need for DataDog going away any time soon. If anything there will be more for it to do. As ever, just MHO…

34 Likes

The difference between both these responses and what I’m talking about lies in how and where the AI is applied. I’m not talking (yet) about AI that creates a whole program by itself from a program-manager-level description, or even from a technical product specification (although both will eventually be possible). I’m looking at AI as a coding assistant, like new versions of GitHub’s Copilot:
https://github.com/features/copilot

If you’re familiar with Pair Programming techniques, which have been around for decades, this enables the AI to act as an experienced programmer looking over your shoulder, correcting mistakes as you type code in and highlighting possible optimizations to be performed. This is at a different level than existing editor-level tools.

So, I don’t buy the premise that AI will lead to more problematic code. And even if I did, DataDog itself will have to employ AI under the covers to help find, isolate, and solve run-time problems. I didn’t see that DataDog’s CEO talked about that to any significant degree, and that’s a red flag for me.

7 Likes

I think that’s exactly what it won’t do, other than maybe preventing juniors from making basic errors. My point is the context of each business is likely to be unique. The learning model for AI is reliant on data volume, but each business context is a sample size of one.

It depends how the AI is deployed. There’s nothing worse for resolving a problem than to find the code trail disappears into a black box you can’t view or monitor. Right now that’s what “ChatGPT” is like; not even the creators know how or why it responds the way it does. I think for most businesses, especially those on the hook for policy implementations, that is a non-starter.

But for the rest, I agree, DataDog should be implementing AI as part of their tool suite. However, it’s still early and there is a lot of irresponsible hype; I’d actually prefer a company to keep its cards close to its chest at this point.

15 Likes

My personal investment thesis for Datadog’s product, on top of their growth metrics, is that it will scale proportionally to the total number of web applications running in the world - I see having more or fewer AI tools as irrelevant to this. With the GitHub Copilot example (which I’ve been using for 6 months and my company just adopted company-wide), you still need to collect metrics from 100 machines if you are exposing 100 machines to the internet - this is true regardless of whether the company has 1 developer or 100 developers. If anything, having fewer developers means they will need better tooling to diagnose issues when customers are seeing a degradation of user experience like “why am I stuck on my checkout page.”
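To make the “metrics from 100 machines” point concrete, here is a minimal sketch of that per-machine instrumentation using the open-source datadogpy client; it assumes a Datadog agent is already listening locally, and the metric names are made up:

from datadog import initialize, statsd

# The agent on each machine relays these metrics to Datadog's backend.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

# Each of the 100 exposed machines emits counters like these,
# no matter how many (or how few) developers wrote the application.
statsd.increment("checkout.page_views")
statsd.gauge("checkout.active_sessions", 42)

The emitting side scales with machines and traffic, not with headcount, which is the heart of the thesis above.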

To me, Pomel’s (or any of their execs’) stance on AI neither adds to nor subtracts from my investment thesis. It is simply not their core product. If AI is the car (vs. the carriage), I see Datadog as the builder of roads and bridges. I am betting on the continual growth of cloud applications - which will only increase because of wide adoption of AI.

19 Likes

Interesting topic - recently there have been a lot of discussions around the impact of AI. Maybe we should start a separate thread about it… if I have time tomorrow, I’ll write down some of the stuff that’s floating around in my mind.


But first something about Datadog:

This is an article (written in 2021) from Julien Delange, CEO of Codiga, a company that Datadog recently acquired:

(Codiga is a code analysis tool that helps developers write better code faster. It monitors in real time, detecting errors and helping to fix them.)


Julien Delange (Codiga):

- Machine Learning will help you produce better code -

Machine learning is going to replace the software developer or at least help you produce better code, and in a not-so-distant future, write complete programs. And there is absolutely no doubt about this.

Some early players (tabnine, kite) released products that are AI assistants for coders, mostly as a smart autocomplete. IntelliJ (one of the most popular IDEs, along with VS Code) already embeds a Machine Learning system for code completion in their IDE. But none of the existing players generated fully functional code. In that sense, Copilot is a major step in the direction of automating code generation.

When making a machine-learning based product, the accuracy of the model (and quality of the recommendations) depends on the amount of data available to train it. And GitHub has the best position on the market to get a massive amount of training data since it is the primary platform to host all source code (GitHub has more than 50% of the market for Git hosting and is the primary platform for open source projects).

Takeaway: like it or not, machine learning is entering the developer tools space. And it’s here to stay.


This was in 2021! So to me it seems Pomel and team are VERY aware of the situation and the strategic positioning that is necessary.

Don’t see anything that would indicate they are falling behind technology-wise.


Question: when you say “Datadog should implement their own AI” or “AI will only increase their growth” - what exactly do you mean by AI? Are you referring to generative LLMs like GPT?

I have the feeling AI is used too much like a buzzword, but maybe I just don’t have enough knowledge, and you guys seem to know a lot about it. Can you explain what you mean by it, specifically?

Hannes

10 Likes

I believe you’re underestimating not just what AI will do soon, but even what many AI coding tools do today. They are very much professional coding tools and do more than the simple syntax or uninitialized variable identification that was prevalent in editing tools just a couple years ago. Things have really changed in the last couple years.

The AI tools produce code that is no more or less a black box than human-developed code. If anything, what the AI produces will follow formatting, naming, and other guidelines rigidly, so it might even be easier to understand. I do understand and agree that complex problems will need to be extremely well and precisely defined for automatic AI generation to be applied, but there is today a large class of software development areas in which an AI coding assistant can greatly increase a human developer’s output and quality.

I disagree because the code produced by fewer developers using AI will be better than the code produced by more human developers, and so less likely to get stuck on the checkout page. And then, DataDog better be using AI to diagnose why the app is stuck - because chances are it’s not some simple error (the AI dev tool would have caught that). If DataDog doesn’t adopt AI for its product some other company will do that for their monitoring product and take away its business.

I suspect DataDog’s CEO agrees with your analogy, but I reject it outright (AWS/Azure are the roads, and DataDog doesn’t build them; if anything, DataDog is the intersection cameras and in-ground sensors that monitor traffic to find traffic jams). While more AI will create more apps to monitor, AI creates better apps that need less monitoring, so the overall monitoring load will be reduced and yet more complicated, since the coding AI tools will catch many simple errors up front so they no longer occur.

This means DataDog will have to be more sophisticated. AI can help with that, but I’ve seen nothing to indicate that DataDog’s CEO is aware of that.

14 Likes

I am not a programmer. I am one that mostly builds ad hoc tools to solve local peculiar problems. Often I have spreadsheets that start out with nice headers and formatting then devolve into small little chunks of random math.

Then when things get serious and I find myself doing repetitive work, I automate it with simple macros. Then when that gets tedious, I hack the Visual Basic to get it where I want it. This does not happen often. The problem is that sometimes it is good enough, even though truly buggy, that other people want to use the macros. Now these buggy, unsecured macros are flying around the company.

This typically is not a problem, but now large language models, and their rapid evolution, are allowing people like me to write pretty decent Python code that can grab data and hand it off to Excel and Access; then the same large language model can write Visual Basic code to work with the data in the spreadsheet.

I am not talking about anything super awesome; I am talking about quickly built tools, by non-programmers, to help automate tedious work on a single defined project or problem. The problem with these tools is that no one actually understands the code, and no one who might understand the code even knows it exists and is crawling around the network.

I don’t see how network security can lock this down. All I see that they can do is look for anomalies in data trends created by this automated code.

import requests
import xml.etree.ElementTree as ET

# Define the API endpoint and parameters
endpoint = "https://www.sec.gov/cgi-bin/browse-edgar"
params = {
    "action": "getcompany",
    "CIK": "0001639825",  # Datadog's CIK code
    "type": "10-K",       # Financial report type
    "count": "10",        # Number of reports to retrieve
    "output": "atom",     # Ask EDGAR for an Atom feed instead of HTML
}

# SEC requires a User-Agent header identifying the requester
headers = {"User-Agent": "Sample Name sample@example.com"}

# Make a GET request to the API
response = requests.get(endpoint, params=params, headers=headers)

# Parse the response XML and extract the links to the reports
root = ET.fromstring(response.content)
ns = {"atom": "http://www.w3.org/2005/Atom"}
links = [link.attrib["href"] for link in root.findall(".//atom:link[@rel='alternate']", ns)]

# Print the links to the reports
for link in links:
    print(link)

This is the code that I asked ChatGPT to write to pull the last ten financial reports for Datadog from the EDGAR database. ChatGPT also advised me to request an API key and perform a couple of other tasks.

From this point I could do a little more work and pull data from other places, like the companies mentioned by certain analysts on public websites, then pull all those reports, then ask for the data to be imported into Excel (a sketch of that last step follows).
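For what it’s worth, that last import step is only a few more lines; a minimal sketch, assuming the pandas library (with the openpyxl Excel writer) is installed, and reusing the links list from the script above:

import pandas as pd

# 'links' is the list of filing URLs produced by the EDGAR script above;
# a single placeholder value stands in for the real scraped data here.
links = ["https://www.sec.gov/Archives/example-10-K-filing.htm"]

# Put the links in a one-column table and hand it straight to Excel.
df = pd.DataFrame({"filing_url": links})
df.to_excel("datadog_filings.xlsx", index=False)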

All of this would probably take me a day to do, and ChatGPT is crippled compared to Bing. Imagine a company with 200,000 employees, all of whom have decent laptops and decent internet connections that can be internal (by connecting to the VPN) or external (by turning it off). Imagine that, say, 10,000 of these employees are motivated enough to build ad hoc tools once a month. That would leave your network running 120,000 new untested programs in a year.

I would think that the data flows would take a lot of monitoring.
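One crude version of that monitoring, just for the sake of argument: flag any data flow whose latest hourly count sits several standard deviations away from its own history. A minimal sketch with made-up numbers:

import statistics

# Hourly request counts for one data flow; the last value is the newest hour.
history = [120, 115, 130, 125, 118, 122, 127, 410]
baseline, latest = history[:-1], history[-1]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag the flow if the newest hour is more than 3 sigma from its baseline.
if abs(latest - mean) > 3 * stdev:
    print(f"anomaly: {latest} requests vs. mean {mean:.0f} (stdev {stdev:.1f})")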

It is like a bunch of drunk and hung over people driving around lost. This is something I experience all summer long. What a mess!

Here is the prompt and response. Remember ChatGPT has already been superseded by AutoGPT and Bing.

Write python code to pull the last ten finacial reports for Datadog from thd Edgar database

ChatGPT
As an AI language model, I do not have the capability to directly access the Edgar database or execute code. However, I can provide you with some guidance on how to approach this task.

To pull financial reports for a specific company from the EDGAR database, you can use the SEC EDGAR API. The API provides access to financial statements and other filings made by companies to the Securities and Exchange Commission.

Here is an example Python code that uses the SEC EDGAR API to retrieve the last ten financial reports for Datadog:

(the code block shown above)
This code sends a GET request to the EDGAR API with the appropriate parameters to retrieve the last ten 10-K filings for Datadog. It then parses the XML response and extracts the links to the reports. Finally, it prints the links to the console.

Note that you will need to obtain an API key from the SEC to use the EDGAR API. You can find more information about the API and how to obtain a key on the SEC’s website.
—————————————-

Now I would have to actually work to get this working, as I would need a list of target companies; then, once I had the data, I would need to have the model write code to clean it and organize it.

Finally, it might be more valuable to grab the non-financial data, data like the members of the board of directors and the top three levels of executives, and see what companies they have been involved with in the past. That would reveal an interesting web of connections. What is more, a simple technician with a little reading could pull this off with a few weekends of work. Something that was impossible 3 months ago.
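A sketch of what that web of connections could look like in code, counting shared directors between company pairs; all of the names and board memberships below are made up for illustration:

from collections import defaultdict
from itertools import combinations

# Hypothetical (made-up) board memberships gathered from filings.
boards = {
    "Company A": {"J. Smith", "K. Lee", "R. Patel"},
    "Company B": {"K. Lee", "M. Chen"},
    "Company C": {"R. Patel", "M. Chen", "J. Smith"},
}

# Count how many directors each pair of companies shares.
shared = defaultdict(int)
for (co1, members1), (co2, members2) in combinations(boards.items(), 2):
    shared[(co1, co2)] = len(members1 & members2)

# Any nonzero count is an edge in the web of connections.
for pair, count in shared.items():
    if count:
        print(pair, "share", count, "director(s)")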

Cheers
Qazulight

13 Likes

Does this please you @Smorgasbord1?

Bear

14 Likes