Introducing $ALAR (Alarum)

Thank you, Saul, for accepting me onto the board. This is my first post, and I debated whether it was worth sharing. While this stock doesn’t check all the “Saul” boxes yet, it’s delivering impressive numbers.

By way of quick background, this is my third year of what I hope will be a long and prosperous investing journey. I am far less sophisticated than the rest of you, but I approach opportunities with a mindset similar to this group’s. For context, my top five positions are CRWD, MELI, AXON, MNDY, and ZS.

Alarum Technologies (ALAR), dual-listed on Nasdaq and the TASE, pivoted in 2H2023 to focus on NetNut, its proxy network service for data scraping and AI data collection, and to divest everything else. The explosive interest in AI and LLMs further supports the need for vast and accurate data collection.

Key Financial Highlights for NetNut Specifically

  • Q1 Revenue: $8.1 million, up 138.2% Y/Y and 14.1% Q/Q
  • Net Retention Rate (defined just after this list): 144% (Q3’23) → 155% (Q4’23) → 166% (Q1’24)
  • Q1 Gross Margin: 78%
  • Operating Margin: 31%
  • Enterprise Value: $207 million
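
For anyone newer to the metric, net retention compares what last year’s customers spend now against what those same customers spent a year ago. This framing is mine, not the company’s exact disclosure:

```latex
\mathrm{NRR} \;=\; \frac{\text{revenue this quarter from customers active a year ago}}{\text{revenue from those same customers a year ago}}
```

At 166%, the average existing customer is spending roughly two-thirds more than a year ago, before counting any brand-new customers.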

My commentary: obviously some strong financial metrics, albeit a bit less impressive given the company’s size, which is crumbs compared to most of the companies on this board ($10B+). Notably, public-market opportunities in companies this small hardly exist anymore, as companies tend to stay private much longer.

Screenshot below of their Q1 2024 financials. Note: NetNut revenue at $8.1m is now 96% of total revenue.
[screenshot: Q1 2024 financials]

For me, the complexity of their financials, which only recently reflect just NetNut, is both a challenge and part of the opportunity. I find companies intriguing when something more interesting is happening under the hood than the reported financials suggest (e.g., Axon Cloud growth and the transition to SaaS). However, I’m focusing just on NetNut’s financials in my analysis.

  • Revenue
    • 2021: $6.3m
    • 2022: $8.5m (up 35%)
    • 2023: $21.3m (up 150%)

NetNut Product and Offerings

NetNut’s Robust Network: the foundation of their IP Proxy Network (IPPN) solutions, providing secure and anonymous browsing with high reliability and speed. NetNut operates a network of over 52 million residential IPs, enabling extensive web scraping. This is the main product; the rest of the lineup has launched within the past year.
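
To make the product concrete, here is a minimal sketch of how a customer typically routes a scraping request through a residential proxy provider. The gateway host, port, and credentials below are placeholders I made up for illustration, not NetNut’s actual endpoints:

```python
import requests

# Hypothetical gateway; real providers publish their own hostname, port,
# and username conventions for geo-targeting and session rotation.
PROXY_USER = "customer-username"
PROXY_PASS = "customer-password"
PROXY_GATEWAY = "gw.example-proxy.net:5959"

proxies = {
    "http": f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_GATEWAY}",
    "https": f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_GATEWAY}",
}

# The request exits through a residential IP from the provider's pool, so the
# target site sees ordinary consumer traffic rather than a data-center address.
resp = requests.get("https://example.com/products", proxies=proxies, timeout=30)
print(resp.status_code, len(resp.text))
```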

Some of their suggested use cases:

[screenshot: NetNut suggested use cases]

Market Opportunity

Driven by AI tailwinds, the data collection and labeling market is growing very quickly. According to Grand View Research, it is expected to grow from $2.2 billion in 2022 to roughly $17 billion by 2030, a 28.9% CAGR.

Competitors

Although I am not an expert in NetNut’s technology, it’s evident that they face strong competition. Bright Data is the clear leader in the field, with Oxylabs and Smartproxy also serving enterprises and likely larger than NetNut. Unfortunately, since these companies are private, we lack detailed financial insight. I understand why one might avoid investing in NetNut, as “Saul” companies are typically first movers or market leaders, which NetNut does not appear to be at this point.

Additionally, it is unclear to me how strong NetNut’s competitive advantage, or “moat,” really is.

Customer Concentration: In 2023, $3m of revenue came from one main customer, representing 12% of total revenues.

Conclusion:

The financials and valuation prompted me to take a chance on this company back in March. Now, with the stock up 20% over the past month, the forward P/S is 6.4x. During the Q1 earnings call, management said they do not anticipate any slowdown in the business for Q2. While the new data collection and AI products are not expected to contribute significantly to revenue in 2024, they are projected to attract loyal customers and increase revenues in 2025. Given the impressive revenue growth and net retention rates, I’m willing to continue this journey.

Would greatly welcome any questions or thoughts from the group.

Long $ALAR 5.6%

(Not Financial Advice)

P.S. I apologize for any poor formatting.

30 Likes

This could be a bit OT, but a part of that description really bothers me… That website-unblocker functionality is deliberately designed to ‘steal’ data from sites that are blocking bots and scrapers. It also BYPASSES CAPTCHA-level security!!! (Granted, that is a very low level of security, but still.)

WHO gave these guys the right to basically hack into webpages and steal data that the owners have worked to keep private? Why do they have this ability and how is the rest of the internet world putting up with this?!?!

(I know, I know…money, for the better good, because it has to… All the same reasons any robber baron used, or any corrupt gov’t group has used.)

Yeah, that part really bothers me. This would instantly dissuade me from holding any shares.

19 Likes

Dlbuffy, thank you for sharing your insights. From my perspective, the primary legal risk and consideration here revolves around data rights and privacy. In the coming decade, we can expect numerous legal disputes concerning data ownership as AI and LLM companies increasingly access both public and non-public data for training their models.

Recently, Meta and X both sued Bright Data, a major competitor, alleging that its data scraping activities violated their terms of service. Meta eventually dropped its case because it could not prove that Bright Data had scraped anything beyond publicly available data. Similarly, X’s case was dismissed, with a judge ruling that creating information monopolies would harm the public interest.

I don’t believe that adding CAPTCHA and anti-bot software to a website inherently makes it illegal for a bot or scraper to access the data. While it may be ethically questionable, I still believe NetNut provides a net positive benefit to society, and one could probably argue that other companies we have invested in are worse for society (e.g., $CELH).

Despite these concerns, I am willing to accept the potential legal risk for a company with a net revenue retention (NRR) of 166% that has risen for three consecutive quarters, revenue growth of 138%, strong margins, and a very reasonable 6.3x forward price-to-sales ratio in an industry with significant tailwinds.
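
To show the arithmetic behind that multiple (a rough sketch: I’m using the ~$207M enterprise value as a stand-in for market cap, which isn’t exact, and backing into the implied forward revenue rather than citing a published estimate):

```latex
\text{Forward P/S} \;=\; \frac{\text{market cap}}{\text{next-12-month revenue estimate}} \;\approx\; \frac{\$207\text{M}}{\$33\text{M}} \;\approx\; 6.3\times
```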

10 Likes

So it is wrong for a legitimate business to scrape public information? Bad actors do this every day using a simple web scanner, a vulnerability scanner, or just a browser. Nobody can really stop them, though it may not be legal. This is 20-year-old technology with new tricks invented every day. Hundreds of tools are available, many free. Corporations should be scanning their own websites regularly to harden their servers and prevent any proprietary data from escaping.

Web application vulnerability scanners are automated tools that scan web applications, normally from the outside, to look for security vulnerabilities such as cross-site scripting, SQL injection, command injection, path traversal, and insecure server configuration. This category of tools is frequently referred to as Dynamic Application Security Testing (DAST). But these scanners also scrape up all kinds of data in the process.
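
As a toy illustration of the “scan from the outside” idea (this is not a real DAST tool, just a passive check for a few common security headers, against a placeholder site):

```python
import requests

# Headers a hardened web server is commonly expected to return.
SECURITY_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

# example.com is a placeholder; only scan sites you own or have permission to test.
resp = requests.get("https://example.com", timeout=15)
for header in SECURITY_HEADERS:
    status = "present" if header in resp.headers else "MISSING"
    print(f"{header}: {status}")
```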

Proper good-actor behavior and security hygiene is to ask permission and/or inform the enterprise being scanned and scraped. I would assume NetNut has this process legally established, especially being a public company. A strong server will alert and perhaps raise shields, but not always.
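
To make “asking permission” concrete: the minimum good-actor step is honoring a site’s robots.txt before crawling. A small sketch using only Python’s standard library (the site and bot name are placeholders):

```python
from urllib import robotparser

# robots.txt is the standard mechanism a site uses to declare what crawlers may fetch.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/catalog/page1"
if rp.can_fetch("ExampleScraperBot", url):
    print("Allowed to crawl:", url)          # proceed with the request
else:
    print("Disallowed by robots.txt:", url)  # a good actor stops here
```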

I think the major concern for me about NetNut is: what is their secret sauce? I have not researched it; does anybody know?

-zane

8 Likes

I agree. A better understanding of the technology/moat would help me gain more conviction in the company. Ultimately, the financials have been driving my conviction. For a better understanding of the technology, I would defer to https://proxyway.com/research/proxy-market-research-2024, which ranks the various proxy providers. Some direct quotes:

“This year again Oxylabs remains one of the strongest enterprise choices. Bright Data too, though we feel like its attention has moved on to other things, and that our format doesn’t always capture the provider’s strengths. NetNut is advancing in strides, and it’s poised to become a serious competitor for the first two.”

“In our benchmarks, NetNut had the most IPs in the Global pool, and Smartproxy in select country pools.”

“The little squirrel packs a punch. The proxy networks we tried are large and perform well, aside from some timeout issues. And if you require ISP proxies, it’s one of the best developed options in the market in terms of features. Proxies aside, improvements are needed. This applies especially to the user experience side of things: documentation is lacking, setup instructions unfriendly, and customer support has limited working hours. There’s also the matter of pricing plans that start from hundreds of dollars and are no longer competitive in the entry range. That said, NetNut’s parent company has been shedding away its other properties to focus on the proxy business. This means more marketing budget, more products, and an overall push to grow. We’re already start to see new scraping APIs emerge from the workshop. Even now, NetNut is a terrific choice for scraping, market research, even social media automation – especially if you use thousands of gigabytes of data. Scale and flexibility are the company’s strong points, and they’re not to be underestimated. All in all, NetNut is not quite the highest tier provider just yet, but it’s surely getting there.”

My main takeaway is that this is a very competitive space with many players. The overall industry appears to be doing well, so everyone is benefiting. NetNut is not the top player in the space (probably around #3-5), but it is making quick improvements to catch up with the leaders.

5 Likes

It is NOT public information if they have to specifically work to get past CAPTCHA checks and avoid anti-bot rules on websites. Someone worked to block web-crawling bots, and they did it to avoid this. Not going to argue this anymore; if someone does not see the inherent issue with having to break into something to get data for free… then I cannot change their mind.

10 Likes

Many times CAPTCHA and a simple sign-on are used simply to identify the user, not necessarily to control authorization or ensure data privacy. There is a lot of stuff that can be scraped this way. NetNut is a listed public company. I admit to not having done a deep dive on NetNut’s policies, but I have ‘few’ doubts they will behave properly with respectable security hygiene. Else, they will pay a big price in reputation, legal woes, and stock decline. Once they have permission to scrape, they will need those tricky tools to crawl the web pages and get around any blocks.

-zane

1 Like

Ugly day today, down 22% on fears of increased competition. While I have voiced concerns over the lack of a wide moat, their 166% net retention and strong G2 reviews give me some confidence that this fear is overblown.

I’ve increased my position.

3 Likes

This is from the investor section of their website (alarum.io):

“The solutions by NetNut are based on our fast, most advanced and secured hybrid proxy network, enabling our customers to collect data anonymously at any scale from any public sources over the web.”

From this statement it does not appear that they are violating any security protections. The data is collected anonymously from public sources. That doesn’t sound like stealing data to me.

Might it be that this statement is intended to obfuscate the actual intent of NetNut? Well, anything is possible, but I really doubt it. In addition, they have a number of competitors apparently engaged in the same or similar business, producing the same or similar product. Are all of them violating security protocols and user privacy? I doubt that even more.

5 Likes

The company defends its data scraping practices here:

Businesses use websites as a dock for the information and data that power their systems. To keep the competition at bay, they use several anti scraping techniques to protect the privacy of their business and keep them ahead. If you are protesting this and feel everyone in the marketplace should freely compete, we hear you! This is why you can use the anti scraping bypassing techniques described above to help you beat the competition.

and

NetNut is a dedicated proxy server provider that has been working underground to make web and data scraping as hassle-free as possible. We have tons of proxies, APIs, and integrating systems that can help you bypass any network or website firewall. Get in touch with us today and say goodbye to the menace of anti scraping measures set up by companies avast to healthy competition in the marketplace.

To paraphrase Zero Wing: “All your data are belong to us.”

5 Likes

WOW! That is the most socialist statement I have read in a while: the whole idea of “shame on those companies” for wanting to stay competitive and keep their own data to themselves.
This just increases my dislike of this company’s work and my current feelings against current AI initiatives. OpenAI is involved in a lot of lawsuits over data collection/use/co-opting, and I would not be surprised if all AI companies are skirting morality here to get all the training data they crave…

3 Likes

Thanks for your thoughts here! Just curious how you view data ownership on the internet.

There has been a strong precedent set by the Meta vs. Bright Data case, where the judge ruled in favor of the proxy network.

Meta has not presented evidence to show that this data was unavailable to the public and has presented no evidence contradicting Bright Data’s assertion that the data scraped was publicly available without logging in.

Here is the link if you want to read the entire case text.

The key difference here is whether the company is selling public or private data. I completely agree with your stance about private data. That is the company’s and should not be scraped and sold.

However, for public-facing data, I don’t see how this is different from manually going to a whole bunch of websites and taking notes. The proxy networks just let you automate this at scale.

If a company wants to maintain strong SEO (search engine optimization) rankings, they need to make some data public. A judge has confirmed that public data is fair game to be scraped. If you want to post it publicly, that means other people can see it and therefore, gather that data.

If websites try to make their data private, that will worsen their SEO and could create an ethical gray area for proxy networks. I totally understand staying away from this company, but I personally don’t see the problem with gathering public-facing (key word!) data at scale.

Best,
Fish

4 Likes

Can we move on from the back-and-forth over interpretations of legality/morality? One could argue there is tail risk associated with a singular legal event for this company, and we should each interpret and discount that risk appropriately.

This company frequently releases preliminary financials, so we may gain insight into its Q2 performance within the next two weeks. Given that the company is in a hyper-growth phase, I am particularly interested in any information regarding new product launches and expansion of their sales team to support the rapid growth. I am also eager to see updates on their key metrics, especially their Net Retention Rate, which was an impressive 166% in Q1, as well as overall revenue performance.

I continue to accumulate shares throughout this dip, and it is now a 6% holding for me.

2 Likes

Even public data can be put behind bot blockers, because bots used to be a huge drag on websites and cost companies money for all the server time/power needed to host a site that was being overrun with them. It was a big issue when Google was just starting up (yes, I am that old) and you had Google, Yahoo, AIM, etc. all trying to catalog the whole internet. The non-human traffic would actually take down websites back in the day.

I have never said that “publicly” available data is the problem, so I’m not sure if you are addressing me. Look back at the article links just above. Every bit of that is them bragging about how they can take anyone’s data, even proprietary company data from a locked-down site. Using a website to share data with select people does not automatically mean it is in the public domain.

If I am paying for the data to be created, then for the data to be hosted, and then for that data to be maintained… it is my data. That is the way I see it. All of these AI companies are getting stuff for FREE that someone else paid money to create.

How is this not viewed as some free handout or welfare to these billionaires, and a big scr@# you to everyone who had to work to make that data? Why am I the one trying to explain right and wrong here? Taking other people’s stuff without permission has ALWAYS been wrong. (Yes, even eminent domain can be wrong and misused.)

4 Likes