Latency Explained: How Close Should Your Data Be?

March 4, 2024


The shorter the distance data has to travel, the faster it is available. Ideally, you keep latency low. Or does it really hurt to wait a millisecond longer?

Data is your company’s most valuable asset: the phrase has become a cliché. But because data is not always in active use, companies store it securely in data lakes, data warehouses or other forms of storage, physical or in the cloud, until it is needed again. Retrieving that data can require a bit of patience, and in today’s busy world we are always short of it.

This has bred a phobia of latency, the delay that can occur when data is in transit. Why does this delay occur, and how can it be kept as low as possible? And is it necessarily a problem?

How does latency arise?

Latency literally expresses the time it takes for data to get from point A to point B. Your device sends a packet to a storage server requesting specific data. This server processes your request and then sends the packet back. There are many external factors that can affect data transit time.

The first obvious factor is the distance between the storage location and the final destination. Data stored in a Belgian data center should be available a fraction faster than if your data were stored in, for example, Frankfurt, London or New York. This is a popular selling point for local providers to attract customers, and one of the reasons why Google and Microsoft are investing heavily in data center infrastructure in Belgium and the Netherlands.

Latency is closely related to throughput and bandwidth on a network, although the terms should not be confused with each other. While latency is a measure of time, throughput and bandwidth measure amounts of data: throughput is how much data actually moves through the network in a given time, while bandwidth is the maximum the network can handle. Throughput and bandwidth therefore have a direct influence on latency.
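How these quantities interact is easy to illustrate: the total time to fetch a file is roughly one round trip to issue the request, plus the transfer time dictated by throughput. A minimal back-of-the-envelope sketch in Python (all numbers are illustrative assumptions, not measurements):

    # Illustrative: latency and throughput together determine fetch time.
    rtt_s = 0.020            # assumed round-trip time: 20 ms
    throughput_bps = 100e6   # assumed effective throughput: 100 Mbit/s
    file_size_bits = 8e6     # 1 MB file = 8 megabits

    # One round trip to send the request, then the transfer itself.
    total_s = rtt_s + file_size_bits / throughput_bps
    print(f"Total fetch time: {total_s * 1000:.0f} ms")  # ~100 ms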

The laws of physics

Even the most advanced networking technologies cannot eliminate the impact of distance, explains John Engates, field CTO at Internet infrastructure company Cloudflare, in a written statement. “Even though data travels at close to the speed of light, geographic distance can still cause noticeable delays, especially when data needs to cross continents or oceans, or travel back and forth to satellites in space. Often this delay is simply a physical problem, and it is difficult to go against nature.”
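The physics is easy to quantify: light in fiber optic cable travels at roughly two thirds of its vacuum speed, about 200,000 km per second, so distance alone puts a hard floor under round-trip time. A quick sketch in Python (the distances are rough great-circle estimates):

    # Theoretical minimum RTT imposed by the speed of light in fiber
    # (~200,000 km/s). Real routes are longer and add switching and
    # queuing delays, so measured RTTs are always higher.
    SPEED_IN_FIBER_KM_S = 200_000

    routes_km = {          # rough distances from Brussels, as the crow flies
        "Frankfurt": 320,
        "London": 320,
        "New York": 5_900,
    }

    for city, km in routes_km.items():
        rtt_ms = 2 * km / SPEED_IN_FIBER_KM_S * 1000
        print(f"Brussels -> {city}: at least {rtt_ms:.1f} ms round trip")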

The resources available to bridge the distance play an equally important role. A state-of-the-art data center equipped with miles of fiber optic cable can get your data back to you much faster than a server in a remote location with little or no connectivity. Compare it to traveling in real life: on well-maintained highways you usually reach your destination faster than on bumpy country roads, even if the highway route is several kilometers longer than the straight line as the crow flies.

If you extend this comparison, you will also understand why network traffic has an impact on latency. The more cars there are on the highway, the greater the risk of traffic jams, which increase travel time. A server overloaded with requests therefore needs more time to process all requests. Finally, network configurations, protocols and the ability of routers and servers to process large data streams also play a role. Latency is the sum of many factors.

Latency is often simply a problem of physics and it’s difficult to argue against nature.

John Engates, Field CTO Cloudflare

Back and forth

There are different methods for measuring latency in a network. The most commonly used metric is round-trip time (RTT): a stopwatch is started, almost literally, to time how long it takes to send data to a server and receive it back on the device. Others prefer Time to First Byte (TTFB) as a benchmark, where the clock stops as soon as the first bytes of data reach their final destination.
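The difference between the two is easy to demonstrate. A minimal sketch that measures TTFB over HTTPS using only Python’s standard library (the host name is just an example):

    # Rough TTFB measurement: time from sending an HTTP request until
    # the first byte of the response body arrives. Note that this also
    # includes the DNS lookup and the TCP/TLS handshakes.
    import http.client
    import time

    host = "itdaily.be"  # example host, as used elsewhere in this article

    conn = http.client.HTTPSConnection(host, timeout=10)
    start = time.perf_counter()
    conn.request("GET", "/")
    response = conn.getresponse()
    response.read(1)  # block until the first byte of the body arrives
    ttfb_ms = (time.perf_counter() - start) * 1000
    print(f"TTFB for {host}: {ttfb_ms:.0f} ms")
    conn.close()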

In principle, any experienced IT user can test latency themselves by running a “ping test”. You don’t need any special tools for this: the Windows command prompt is sufficient. A ping test sends four test packets to a host server or computer to check its availability.

To learn more about why latency occurs, run a traceroute. Think of it as a GPS for data: a traceroute records the path the data took, so you can see where the delay occurred. Tools based on real user monitoring (RUM) then help you measure how latency impacts the user experience of applications.

  • Run a ping test in Windows

    Open the Command Prompt in Windows via the search bar, or by first opening the Run window with the keyboard shortcut Win + R and then typing cmd. Now enter the command ping followed by the host’s IP address or domain name (e.g. itdaily.be). The latency is the value shown after time= in the output. A final report then follows with the minimum, maximum and average latency. A scripted alternative follows below.

    If you get Request timed out, your packet was lost along the way. This can indicate connection problems on your side, but also on the host server. Ideally, the loss rate is zero percent.
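If you would rather measure from a script than by hand, timing a TCP handshake gives a rough, cross-platform approximation of a ping. A minimal sketch in Python (port 443 is an assumption, i.e. a host serving HTTPS):

    # Approximate a ping by timing TCP handshakes: needs no admin
    # rights or ICMP, and works on any OS with Python installed.
    import socket
    import time

    host, port, attempts = "itdaily.be", 443, 4  # example target
    samples = []

    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # handshake done; close the connection immediately
        samples.append((time.perf_counter() - start) * 1000)

    print(f"min={min(samples):.1f} ms  "
          f"avg={sum(samples) / len(samples):.1f} ms  "
          f"max={max(samples):.1f} ms")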

Fortunately, there are tricks to limit latency. Many web applications use a CDN, or content delivery network, for this. A CDN caches the static content of a website on servers that can be spread across multiple locations, so the data is retrieved as close to the user as possible and the content appears more quickly. A CDN is not a cure-all either, because you cannot house “dynamic” data, such as blogs, in it.
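In practice, “caching static content” largely comes down to HTTP headers: the origin server tells the CDN what it may cache and for how long. A minimal sketch of such an origin using Flask (a hypothetical app; the routes and max-age value are purely illustrative):

    # Hypothetical origin app: Cache-Control headers tell CDN edge
    # nodes (and browsers) which responses they may cache.
    from flask import Flask, send_from_directory

    app = Flask(__name__)

    @app.route("/static/<path:filename>")
    def static_files(filename):
        response = send_from_directory("static", filename)
        # Static asset: a CDN may serve this copy for a day.
        response.headers["Cache-Control"] = "public, max-age=86400"
        return response

    @app.route("/latest")
    def latest():
        # Dynamic content: force every request through to the origin.
        return {"updated": "just now"}, 200, {"Cache-Control": "no-store"}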

Every millisecond counts

In many cases, latency amounts to mere milliseconds (ms) and is barely noticeable. Why would it be a problem, then? There are plenty of situations where a millisecond makes a big difference. A good example is a self-driving car that has to brake for a crossing pedestrian: even a thousandth of a second of delay can have catastrophic consequences. In robotic surgery, too, the robot should perform each action at the exact moment the surgeon inputs it.

Need more examples? Consider cybersecurity: if suspicious activity occurs anywhere in your systems, you want the SOC to be notified as quickly as possible, because every millisecond gives the intruder a decisive advantage. Another example is banks fighting fraud: there is only a very short window in which to stop a suspicious transaction, and any latency there is just as damaging.

It doesn’t always have to be this dramatic to show why latency shouldn’t be underestimated. Web developers shudder at the mere mention of the word: slow-loading web pages cause visitors to drop out, and Google isn’t afraid to penalize websites for it. For most Internet applications, latency only becomes “visible” when the delay exceeds 100 to 150 ms, around a tenth of a second.

Faster than light: is zero latency a myth?

Providers are happy to play on this fear of latency. Zero latency and real-time data are trendy marketing terms these days, but are those promises realistic? A latency of 0.0 ms seems impossible under the laws of physics: the data would have to travel faster than light, and even the most modern network technology comes nowhere near that.

Engates agrees: “Zero latency is more of an idealized concept than a technically feasible reality. Technological innovations and optimizations of network protocols are primarily aimed at pushing latency down to the lowest possible values. This improves the user experience of applications that require real-time interaction, even if true zero latency remains out of reach.”

In recent years, of course, great strides have been made to bring latency as close as possible to the magical zero. Under optimal conditions, 5G can reduce latency to 1 millisecond, which the human brain perceives as real time; a significant advance over 4G, which has an average latency of 30 to 50 milliseconds.

The rabbit in Wi-Fi 7’s hat is multi-link operation (MLO). This technology transmits data packets across the three available frequency bands simultaneously, which helps prevent any single band from becoming overloaded and benefits not only speed but also stability and latency. “Finally, edge infrastructure also offers a significant advantage in terms of latency, as data is processed as close to the source as possible and does not have to come from a cloud provider’s data center,” adds Engates.

How close should data be?

Latency therefore seems like the perfect advertising message for the edge. Data can stay where it is needed, an attractive proposition especially for business-sensitive data. This is offset by the investments in local server and network infrastructure needed to make that data available, and the security of edge data then rests entirely with its owner.

Where and how close data should be located depends on who needs it and for what purpose. For workloads that run better locally, it pays to have the data come from a server as close to the machine as possible. For data that multiple teams in the company need to access simultaneously, or for cloud-based workloads, you benefit more from the accessibility the cloud offers.

As with any IT problem, there is no solution that fits every situation. Sometimes you need data immediately; often it doesn’t hurt to wait a millisecond. Consider latency in the context of each individual use case and decide from there where the data should live.

Source: IT Daily
