IMO it’s important to include some detail about what the word “network” means in the context of $ANET and/or $NVDA
There are lots of networks in the chain of events that starts with an end-user request and ends with a response. Take, for example, a browser on a computer in a typical home; to grossly over-simplify and skip over a lot of stuff:
- There is a Local Area Network (LAN) physically inside your wireless router: a switch, a router, software etc.
- There is a “backbone” and/or “Wide Area Network” (WAN) that connects your wireless router to your ISP
- There is a backbone and/or WAN that connects the ISP to a web-proxy device of some sort (load balancer/router/Web Application Firewall, probably a combination device) in the De-Militarized Zone (DMZ) at a datacenter
- There is a network that carries the user request from the DMZ to a Virtual Machine (VM) physically inside the datacenter, e.g. the VM that hosts the listening process (HTTPS) for the requested website. This is the datacenter “front-end” network; it is typically implemented with Ethernet, and is currently $ANET’s strength.
- The middleware handling the relevant business logic eventually makes a request of a database: a relational or non-relational database, or an AI inference capability. In any case, there is a database, and for performance/scalability/redundancy purposes it is deployed across multiple physical nodes, i.e. a “cluster” of nodes. The database nodes are physically connected by a “back-end” network, typically implemented with InfiniBand; this is currently $NVDA’s strength.
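To make the chain above concrete, here is a minimal sketch in Python: a “front-end” web listener accepts a client request, its “business logic” makes a second hop to a “back-end” database node, and the answer flows back. Everything runs on loopback with made-up port numbers; in a real datacenter the two hops would ride physically separate networks (Ethernet front-end, InfiniBand back-end).

```python
# Toy sketch of the request chain: browser -> front-end listener -> back-end
# database node -> response. Ports and payloads are invented for illustration.
import socket
import threading
import time

DB_PORT = 9301   # hypothetical "back-end" port
WEB_PORT = 9300  # hypothetical "front-end" port

def database_node():
    """Back-end node: answers a single toy 'query'."""
    with socket.create_server(("127.0.0.1", DB_PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            query = conn.recv(1024)
            conn.sendall(b"rows-for:" + query)

def web_server():
    """Front-end listener: accepts the user request, queries the database."""
    with socket.create_server(("127.0.0.1", WEB_PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            # The "middleware" makes its own request over the back-end hop
            with socket.create_connection(("127.0.0.1", DB_PORT)) as db:
                db.sendall(request)
                answer = db.recv(1024)
            conn.sendall(answer)

threading.Thread(target=database_node, daemon=True).start()
threading.Thread(target=web_server, daemon=True).start()
time.sleep(0.2)  # crude wait for both listeners to bind

# The "browser": one request, one response
with socket.create_connection(("127.0.0.1", WEB_PORT)) as client:
    client.sendall(b"GET /prices")
    response = client.recv(1024)
print(response.decode())  # rows-for:GET /prices
```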
.
Here are some requirements and characteristics of the “front-end” network:
- Needs to connect LOTS of heterogeneous hardware devices
- Each device can have LOTS of processes listening for requests.
- …Thus, there is a lot of complexity in standing up, configuring, re-configuring, maintaining, securing, and troubleshooting the “front-end” network.
- Requests and responses are aggregated/multiplexed/bundled so that relatively huge numbers of disparate requests from logically separate functions can be sent down the same physical media
- Latency and throughput are important, but arguably configurability, flexibility, visibility, heterogeneity, etc. are equally important; thus, Ethernet is used.
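The multiplexing point above can be sketched in a few lines: many logically separate clients share one listening port (and, on real hardware, one physical link), and the OS demultiplexes them by the (source address, source port, destination address, destination port) tuple. This is a loopback-only illustration with an invented port number.

```python
# Three "disparate" clients share the SAME listening socket; each arrives
# with a distinct client-side port, which is how the traffic is demultiplexed.
import socket
import threading
import time

PORT = 9310  # hypothetical port for the sketch

def server(results, n):
    with socket.create_server(("127.0.0.1", PORT)) as srv:
        for _ in range(n):
            conn, peer = srv.accept()
            with conn:
                # Same listening port every time, but a unique remote
                # (addr, port) pair per logical connection
                results.append((peer, conn.recv(64)))

results = []
t = threading.Thread(target=server, args=(results, 3))
t.start()
time.sleep(0.2)  # crude wait for the listener to bind

for i in range(3):  # e.g. three separate front-end applications
    with socket.create_connection(("127.0.0.1", PORT)) as c:
        c.sendall(f"request-{i}".encode())
t.join()

client_ports = {peer[1] for peer, _ in results}
print(len(client_ports))  # 3 distinct client ports over one server port
```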
.
Here are some requirements and characteristics of the “back-end” network:
- Needs to connect, relatively-speaking, VERY FEW, HOMOGENEOUS hardware devices.
- Each device has, relatively-speaking, VERY FEW processes listening for requests.
- Traditionally, once the back-end network is set up, it is rarely touched or changed subsequently.
- The data sent typically has to do with a VERY SMALL number of specialized functions (e.g. database transactions and/or queries)
- Latency and throughput are of primary importance; thus, InfiniBand is used.
- The back-end network is dedicated to a relatively small number of hardware devices serving a single dedicated purpose e.g. a database.
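One way to see why latency is the primary metric on the back end: a database transaction often involves many small round trips between nodes, so per-message latency compounds. The sketch below times a small-message ping-pong over loopback TCP. This is a stand-in only; a real InfiniBand fabric uses RDMA to push round trips into the low-microsecond range, which is roughly the bar any Ethernet upgrade would have to clear.

```python
# Measure mean round-trip time for many small messages; latency per message
# is what compounds across the chatty node-to-node traffic of a database.
import socket
import threading
import time

PORT = 9320         # hypothetical port
ROUND_TRIPS = 1000  # many small messages, like chatty cluster traffic

def echo_node():
    """Stands in for a peer database node: echoes each small message."""
    with socket.create_server(("127.0.0.1", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for _ in range(ROUND_TRIPS):
                conn.sendall(conn.recv(64))

t = threading.Thread(target=echo_node)
t.start()
time.sleep(0.2)  # crude wait for the listener to bind

c = socket.create_connection(("127.0.0.1", PORT))
c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # don't batch sends
start = time.perf_counter()
for _ in range(ROUND_TRIPS):
    c.sendall(b"ping")
    c.recv(64)
elapsed = time.perf_counter() - start
c.close()
t.join()

rtt_us = elapsed / ROUND_TRIPS * 1e6
print(f"mean round trip: {rtt_us:.1f} microseconds")
```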
What’s interesting to me is that $ANET has not, to this point, been able to establish a presence in the back-end network, but THEY HAVE AMBITIONS TO DO SO. They plan to release devices that use an upgraded version of Ethernet which (…they hope) will beat InfiniBand on latency, and also beat InfiniBand on flexibility, visibility, etc. $ANET is working with its customers, and the relevant standards organization, to define the characteristics of this to-be Ethernet upgrade.
IMO it’s a good example of how leadership at $ANET is constantly looking to expand revenue.
I’m long $ANET and $NVDA; small positions in them both.