The investor presentation deck was refreshed again in December.
There is a revealing piece of information on slide #4. Take a look at the second bullet with the asterisk:
3 years > 20% YoY revenue growth (2012-2015)*
Following where the asterisk leads:
* On track for greater than 20% growth in 2015, per analyst consensus
If they knew enough to tell us in December, with at most 3 weeks left to go (their quarter officially ended 12/26), that they were on track… well then, chances are pretty good we’re about to get another year of >20% YoY revenue growth, just like they said. Mission accomplished! Huzzah, we did it! Go take the rest of the year off! (Did I mention that, in an investor response, they told us they really did take the last week of the year off?)
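As a back-of-the-envelope check on what that slide’s claim actually compounds to, here is a quick sketch. The 2012 revenue baseline below is a made-up index, not a real figure; only the 20% floor comes from the slide.

```python
# What ">20% YoY revenue growth" for 3 years implies cumulatively.
base = 100.0   # hypothetical 2012 revenue, indexed to 100 (placeholder)
growth = 0.20  # the >20% YoY floor from slide #4

revenue = base
for year in (2013, 2014, 2015):
    revenue *= 1 + growth
    print(year, round(revenue, 1))

# Three years at exactly 20% compounds to 1.2**3 = 1.728,
# i.e. revenue ends up at least ~73% above the 2012 baseline.
print(round(revenue / base, 3))  # 1.728
```

So the bullet is really claiming at least ~73% cumulative revenue growth over the window, with 2015 being the year still in flight when the deck was refreshed.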
Back to the deck: slide #13 contains another reveal. The slide is titled “Cloud Architecture Drives New Traffic Patterns”, and it tells us that server-to-server traffic is growing faster than server-to-user traffic. For illustration, it shows that for every one user request transmitted through the network, there is up to 930x as much traffic circulating between servers inside the datacenters. The diagram also places a Facebook logo on the “datacenter” side, and in between the distributed datacenters is the lettering “DCI” (Data Center Interconnect), which strongly implies that Facebook is using Cloud Xpress.
To confirm the above, I found another article describing the same bandwidth comparison, this time in prose, dating back to when Cloud Xpress was still in trials:
A process called transaction magnification provides a clear example of why high-capacity data center interconnection matters for operators deploying this kind of infrastructure. Facebook’s applications provide a good example of how transaction magnification works. Facebook analyzes its processing workloads regularly. One recent measurement showed that a single 1 KB HTTP request spawned >35 additional database lookups, >300 related backend RPCs, and a >900x increase in bandwidth for machine-to-machine communications within its data centers compared to the traffic traversing the user-facing parts of its networks. When the applications are distributed across multiple data centers in a given geographic area, it is evident there is a need for efficient and high-capacity interconnection between those sites. This new model of distributed computing across multiple data center buildings will only continue to grow.
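The magnification math in that quote can be sketched in a few lines. All figures are the floors reported in the article; treating the 1 KB request as exactly 1024 bytes is my simplification, and real workloads obviously vary.

```python
# Hedged sketch of the "transaction magnification" arithmetic above.
request_bytes = 1 * 1024  # one ~1 KB user-facing HTTP request
db_lookups = 35           # >35 additional database lookups per request
backend_rpcs = 300        # >300 related backend RPCs per request
magnification = 900       # >900x machine-to-machine bandwidth multiplier

# East-west (server-to-server) bytes generated inside the datacenters
# for every north-south (user-facing) byte that crosses the edge:
internal_bytes = request_bytes * magnification
print(internal_bytes)  # 921600 bytes, roughly 0.9 MB per 1 KB request
```

That ~0.9 MB of internal churn per 1 KB user request is exactly why the DCI links between buildings, not the user-facing links, become the bottleneck worth selling hardware into.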
Here is my theory. Facebook was in their trial program back when Cloud Xpress was introduced. Two pieces of information tell me that. The first is from the example itself. The second piece is from the bottom of the Telecom Review article: “The Cloud Xpress is currently in customer trials, orderable now and planned for general availability in December 2014.”
You’re in customer trials and you need an example for an article. Pick a scenario from one of your customers and use it. Ok, check.
And now, seeing Facebook referred to again in the presentation, this time in diagram form… by now they must be an actual CX customer. If they weren’t, why build a diagram out of that information and reuse it in your investor presentation? Why?? Because it’s easier to use something you already know. My spidey sense tells me they’re using CX in addition to DTN. Unofficially, of course.