Disk has always had a very tough challenge with random data access because the drive heads can only be in one location at a time.
Usually!
Back in the late '60s, when I was working on the Illiac IV project, we were introduced to a disk subsystem from Burroughs that had a head per track, so the only latency was the rotational latency. One imagines it was a wee bit expensive, though.
Databases run like scalded cats on flash-based systems.
Absolutely … provided that the SSDs are local to the database.
In addressing network latency: the networks that support storage are either high-bandwidth Ethernet (10GbE and above) or Fibre Channel (designed as a very low-latency protocol specifically for connecting storage arrays to servers). Network latency, as a rule, is very, very low, measured in nanoseconds (whereas storage latencies are microseconds or milliseconds). Usually, network latency is far less of a performance detractor than the storage media or the application itself.
This does not match the experience of my sources. In particular, they note that what matters is not the latency of an individual component but the end-to-end latency, which includes the overhead of device drivers. As I believe I previously noted, the DB with which I work supports both shared-memory clients, where the client reads and writes directly to and from the server's buffers, and remote clients connected via TCP/IP. It is possible to run the remote clients on the same physical machine as the server, so that requests and responses go down and up through the TCP/IP stack without any network actually being involved. Those clients perform substantially worse than the shared-memory clients.
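The effect is easy to reproduce. Below is a minimal sketch (not the database in question; all names are illustrative) that times a request/response round trip over a loopback TCP socket, where no physical network is involved but every request still traverses the TCP/IP stack, against a plain in-process call standing in for a shared-memory client:

```python
# Compare per-request latency: TCP round trip over loopback (a "remote"
# client on the same machine) vs. a direct in-process call (a stand-in
# for a shared-memory client). Illustrative sketch, not a real DB client.
import socket
import threading
import time

N = 2000  # requests per measurement

def run_echo_server(srv):
    # Trivial echo server: one connection, echo until the peer closes.
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

# Server on an ephemeral loopback port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=run_echo_server, args=(srv,), daemon=True).start()

# "Remote" client: every request goes down and up the TCP/IP stack.
cli = socket.create_connection(("127.0.0.1", port))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
t0 = time.perf_counter()
for _ in range(N):
    cli.sendall(b"ping")
    cli.recv(64)
tcp_us = (time.perf_counter() - t0) / N * 1e6
cli.close()

# "Shared-memory" client: a plain function call against a local buffer,
# no protocol stack between request and response.
buf = bytearray(b"ping")
def direct(request):
    return bytes(buf)

t0 = time.perf_counter()
for _ in range(N):
    direct(b"ping")
direct_us = (time.perf_counter() - t0) / N * 1e6

print(f"loopback TCP: {tcp_us:.1f} us/req, direct call: {direct_us:.2f} us/req")
```

On typical hardware the loopback round trip costs tens of microseconds per request while the direct call costs well under one, despite no network card or cable being involved anywhere: the gap is pure driver, syscall, and stack overhead, which is exactly the end-to-end cost the per-link latency figures above leave out.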