Understanding Latency

Latency is the round-trip time for a request: the time for the initial packet to reach its destination, for the destination machine to reply, and for that reply to reach the requestor. Every network has latency. The total latency you see when connecting to a given remote host can vary widely, depending on network conditions.

Assuming your network connection is not overloaded, the bulk of a connection's latency comes from the laws of physics. The minimum one-way latency between two points on Earth is the distance between them divided by the speed at which light or electricity propagates through the medium in question (usually a large fraction of the speed of light in a vacuum, c); double that for the round trip.
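Written out, with d as the one-way distance and v as the signal's propagation speed in the medium, the lower bound on round-trip latency is:

    t_min = 2d / v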

For example, consider a packet traveling round trip from New York to San Francisco (about 2,900 miles, or 4,670 km):
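The snippet below is a sketch of that arithmetic, using typical assumed propagation speeds rather than measured ones: c in a vacuum, roughly c/1.5 in optical fiber (glass has a refractive index of about 1.5), and a velocity factor of about 0.66c for copper, which varies by cable type.

    # Round-trip latency floor for a New York-San Francisco link.
    # Propagation speeds are assumed typical values, not measurements.

    C = 299_792            # speed of light in a vacuum, km/s
    ONE_WAY_KM = 4_670     # New York to San Francisco, from the text

    media = {
        "vacuum (radio, line of sight)": C,
        "optical fiber (c / 1.5)": C / 1.5,
        "copper (0.66 c)": C * 0.66,
    }

    for medium, speed_km_s in media.items():
        round_trip_ms = 2 * ONE_WAY_KM / speed_km_s * 1000  # seconds to ms
        print(f"{medium}: {round_trip_ms:.1f} ms round trip")

Even in the best case (a straight-line radio path at c), the round trip cannot drop below roughly 31 ms; over fiber, the floor is closer to 47 ms.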

These calculations represent an absolute lower bound on the latency of a connection through those media. Several other factors can add latency on top of this link latency: