5G user experience is determined by speed, not much by latency
Smartphone user experience is all about time-to-content: after I click, how much longer do I have to wait before my video starts playing or before I can start scrolling on my news page? On 5G and 4G networks, time-to-content is largely determined by the up- and download speeds available to the device when a user “clicks”. This subject is covered in the blog post about peak download speeds in 5G, from which I have taken the lower chart in Figure 1.
The key message of this blog post is shown in the upper chart in Figure 1 and can be formulated like this: Latency hardly impacts smartphone user experience in advanced 5G and 4G networks.
If you believe us at Ericsson, then you can stop reading here.
Otherwise, please read on. I will address some key aspects that are often overlooked in discussions about latency, which may explain why some people arrive at different conclusions. I will also provide evidence supporting the statement in Figure 1 that latency rarely exceeds 50 ms in advanced 5G and 4G networks.
But first, the background behind Figure 1: both charts show the key results of an in-depth study we conducted at Ericsson. We used high-end smartphones connected to the Internet in a controlled environment and had them request popular content (YouTube, Instagram, Amazon, eBay, Uber, IKEA, and many more) in an automated way. For the latency measurements, we ensured that sufficient up- and download speeds were available to the devices at all times: an uplink throughput “at click” of at least 1 Mbps and a downlink throughput “at click” of at least 20 Mbps. The study was based on tools and guidelines provided by Google on web.dev. This way, we derived the relationship between time-to-content and latency, and between time-to-content and up- and download speeds.
What is latency?
Latency is the time it takes for a device to send one small ‘echo’ packet to the serving content server and the corresponding ‘echo-reply’ packet to return to the device. This time is also called the round-trip time. It has become common practice to use the terms synonymously. Different tools are used in the industry to measure latency this way.
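As a rough sketch of this echo/echo-reply measurement – using a local UDP echo server on loopback as a stand-in for the content server – latency can be measured like this:

```python
import socket
import threading
import time

def run_echo_server(sock):
    """Echo one received packet straight back to its sender."""
    data, addr = sock.recvfrom(64)
    sock.sendto(data, addr)

# A tiny UDP echo server on localhost stands in for the content server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)

start = time.perf_counter()
client.sendto(b"echo", server.getsockname())    # the 'echo' packet
reply, _ = client.recvfrom(64)                  # the 'echo-reply' packet
rtt_ms = (time.perf_counter() - start) * 1000   # round-trip time = latency

print(f"latency: {rtt_ms:.3f} ms")
```

Real tools measure against a remote server, of course; over loopback the round trip completes in well under a millisecond.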
Just to be clear: latency is not the same as time-to-content. I highlight this because some people say or write latency but mean time-to-content. Not seeing the difference and not using the terms consistently is a common source of misunderstandings.
5G and 4G latency in the U.S. is usually less than 50 ms
We have analyzed more than 15 million speed tests collected by the Speedtest app provided by Ookla across the U.S. over a period of 6 months while the devices were connected via 5G or 4G. Every Speedtest sample includes the measured up- and download speeds and latency. The left-hand plot in Figure 3 shows the locations of the devices at the time the speed test was performed, one dot per speed test. The right-hand plot shows the locations of the Ookla servers towards which speed and latency were measured. The Speedtest app always includes the Ookla server that yields the lowest latency.
Figure 4 shows the latency results aggregated across the three major U.S. communications service providers. As can be seen, latency is lower on 5G compared to 4G, it’s the lowest when 5G is operated on millimeter wave, and the vast majority (97.6%, 93.1%, 89.0%) of all speed tests have measured a latency of less than 50 ms.
It is important to note that the charts in Figure 4 only include Speedtest samples with sufficient speed: an upload speed of at least 1 Mbps and a download speed of at least 20 Mbps. More on this below.
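To illustrate the kind of filtering applied here, a minimal sketch with made-up Speedtest-style samples (the field layout is hypothetical, not Ookla's actual data format):

```python
# Hypothetical samples: (upload Mbps, download Mbps, latency ms).
samples = [
    (1.5, 45.0, 22), (0.2, 80.0, 35), (3.0, 19.0, 28),
    (2.0, 60.0, 48), (5.0, 150.0, 12), (1.2, 25.0, 75),
]

# Keep only samples with "sufficient speed":
# at least 1 Mbps up and at least 20 Mbps down.
sufficient = [(up, down, lat) for up, down, lat in samples
              if up >= 1.0 and down >= 20.0]

# Share of the remaining samples with latency below 50 ms.
below_50 = sum(1 for _, _, lat in sufficient if lat < 50) / len(sufficient)
print(f"{len(sufficient)} of {len(samples)} samples kept, "
      f"{below_50:.0%} below 50 ms")
```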
Your favorite content is likely close to you
Today, virtually every company with an online presence that cares about how users experience its content has signed up with a content delivery network (CDN). The reasons for doing so are usually commercial: securing brand perception, retaining customers, increasing conversion rates in retail, pushing more ads, and so on.
This means that the content behind the “click” on your smartphone is most likely served from a nearby edge server. For example, if you click www.ericsson.com while you’re located in San Francisco, USA, your device will connect to an Akamai edge server located in San Jose – 70 km south of San Francisco – to fetch a copy of the content behind www.ericsson.com from there, and if you’re located in Seoul, South Korea, then your smartphone will connect to an Akamai edge server located in Seoul to fetch a local copy from there.
The same applies to popular video content such as YouTube and Netflix and also to gaming content. Hence, in advanced 5G and 4G markets, your smartphone pretty much always connects to a nearby edge server.
Edge servers are typically located at the Internet peering points of a 4G/5G network. And this is also where Ookla’s Speedtest servers are usually deployed. This is why it should be a fair assumption that the latencies shown in Figure 4 are a good approximation of the latencies between U.S. based 5G smartphones and the edge servers serving the users’ “clicks”.
And what about gaming and latency?
Almost every time I show the charts in Figure 1 to Ericsson customers, I’m asked about gaming. And, whenever I talk to dedicated first-person shooter gamers – like my son and his friends – I always get roughly the same answers: A reliable latency of…
- 30 ms is considered to be great
- 30 – 50 ms is good
- 50 – 100 ms is acceptable if there is no other choice
- > 100 ms is unplayable
The key word is ‘reliable’. Ambitious first-person shooter gamers really hate latency spikes, e.g., a latency that is good or great most of the time but occasionally exceeds 100 ms. These latency spikes can “kill” them.
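As a minimal illustration with a made-up latency trace, reliability is about the spike rate, not just the typical value:

```python
# Hypothetical per-ping latency trace (ms) from a gaming session.
trace = [28, 31, 25, 140, 27, 30, 26, 29, 180, 24]

median = sorted(trace)[len(trace) // 2]
spikes = [t for t in trace if t > 100]
spike_rate = len(spikes) / len(trace)

# The median looks "great" on the gamers' scale above, yet the
# occasional excursions beyond 100 ms are exactly what makes the
# connection unreliable for competitive play.
print(f"median {median} ms, {spike_rate:.0%} of pings above 100 ms")
```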
Based on the results shown in Figure 4, it may seem as if the 5G providers in the U.S. are in good shape to take on the gaming community. However, the results do not allow us to draw this conclusion. First, the charts in Figure 4 only include Speedtest samples with sufficient speed, as mentioned above. Without sufficient speed, gaming won’t be any fun, never mind the latency. Second, the Ookla Speedtest data we used for our analysis does not allow us to conclude anything about the presence of the mentioned latency spikes. In other words, we have no latency reliability results.
Watch out for these latency drivers
If you’re not interested in some of the technical details, then you can skip this section.
Once a 4G/5G provider has measured latencies in their network that are deemed too high, the next step is usually a root cause analysis to identify which actions could be taken to lower the latency. There can be many causes of high latency values. For example, a wireless device may first need to “wake up” from a battery-saving mode before it can send and receive, or it may briefly lose connectivity while moving from one coverage area to the next. Neither of these two causes impacts latency as measured by the Speedtest app, though.
In the following, I highlight three important causes of higher latency values that do impact latency as measured by many apps, including the Speedtest app, and that are often not considered in discussions about latency. If not considered carefully, they can greatly skew latency measurement results and lead to wrong conclusions.
1. Low speed, high latency
Low up- and/or download speeds are strongly correlated with high latencies. This is shown in Figure 6, aggregating across all 4G and 5G Speedtest samples we have analyzed. The upper chart – like all charts in Figure 4 – only includes Speedtest samples with an upload speed of at least 1 Mbps and a download speed of at least 20 Mbps. The lower chart only includes Speedtest samples with an upload speed of less than 300 Kbps or a download speed of less than 5 Mbps. The results show that without sufficient speed, more than 20% of all Speedtest samples have a latency larger than 100 ms.
Does low speed cause high latency, is it the other way around, or are low speed and high latency both the result of something else? It must be the latter: latency cannot impact speed as measured by the Speedtest app. From deep-dive studies performed in collaboration with Ericsson customers, we know that in advanced 4G networks – and the same will apply in advanced 5G networks:
- low download speed is often the result of congestion on the radio interface when many devices are active at the same time in the same coverage area, and
- low upload speed is often the result of poor coverage when a device is located too far away from the cell tower.
So, most likely, the same root causes that lead to low speed also lead to high latency. This is why we only include Speedtest samples with sufficient speed when assessing latency: it prevents radio congestion and/or poor coverage from skewing our results.
2. Last-hop queuing delays
In advanced 4G/5G networks, the radio interface is usually the end-to-end bottleneck. Consequently, data packets queue up in the radio base station in the downlink direction, and in the device in the uplink direction for most of the typical apps used today. In fact, data packets must queue up in these places to maximize up- and download speeds and thereby minimize time-to-content for these apps. The resulting extra queuing delays can be large. You can check this out yourself by going to fast.com on your smartphone. Figure 7 shows a measurement I’ve done on my smartphone while it was connected via 4G. The difference between “loaded latency” and “unloaded latency” is the bottleneck queuing delay of this particular measurement. Note that the latency results shown in Figures 1, 4, and 6 represent “unloaded latency”.
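A minimal sketch – with made-up numbers in place of real fast.com readings – of how the bottleneck queuing delay falls out of the two measurements:

```python
# Hypothetical fast.com-style readings (ms): latency measured while the
# link is idle vs. while a bulk download saturates the radio bottleneck.
unloaded_latency_ms = 34
loaded_latency_ms = 210

# The last-hop queuing delay is the extra time packets spend waiting in
# the bottleneck buffer while the link is fully loaded.
queuing_delay_ms = loaded_latency_ms - unloaded_latency_ms
print(f"bottleneck queuing delay: {queuing_delay_ms} ms")
```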
However – and this is all too often not understood – these last-hop queuing delays are not a problem at all as long as the user only uses one app at a time on their smartphone, which I assume is the typical case for most people. I just don’t see the typical smartphone user downloading a Netflix movie for offline viewing while playing a first-person shooter game in the foreground.
[This becomes a completely different story when 5G is used to provide Internet connectivity to entire households. I will not address this case any further in this blog post, though.]
3. Delays in the 4G/5G backhaul network
A 4G/5G network basically consists of two parts for data transfer: the radio interface and the backhaul network, as shown in Figure 8. Even in advanced 4G/5G networks, some parts of the backhaul network may contribute significantly to latency. This is why it’s important that 4G/5G providers always have an eye on the delays caused within their backhaul networks.
I have seen troubleshooting results that have uncovered ugly delays caused by under-dimensioned backhaul links, routing across an excessive number of hops, intrusion protection systems, and more. Figure 9 shows such an example comparing different 4G networks in Southeast Asia. Service provider A seems to have some serious networking issues “north” of the radio interface that have led to the high latencies in the shown time period.
So, who cares about latency in 5G?
Machines.
Machines, not humans, can benefit from the ultra-low and ultra-reliable latencies that only 5G can provide. For example, think about video-controlled high-precision robots in a smart factory. Here, we are talking about latencies below 10 ms, i.e., ultra-low, and without latency spikes exceeding 10 ms, i.e., ultra-reliable. Meeting these tough requirements is one of the key drivers behind Ericsson’s launch of a new product offering targeted at Time-Critical Communication.
Humans will hardly be able to tell – let alone appreciate – the difference between a latency of 30 vs. 50 ms when using their everyday apps on their smartphones. This is shown in the upper chart in Figure 1. Even ambitious first-person shooter gamers would be happy with a reliable latency in the range of 30 – 50 ms. I have yet to find a sound research study that has tested humans trained to react extremely fast, e.g., Olympic sprinters, race car drivers, or championship gamers. From what I have read so far, it seems that the lower limit for even this small fraction of the human population is not below 20 ms, i.e., for humans, lower latencies would not make a difference.
I conclude by repeating the take-away message of this blog post: 5G (human) user experience is determined by speed, not much by latency. But note that in this blog post, I have limited myself to things happening today. Once 5G use cases such as virtual and augmented reality become mainstream, some aspects might need to be revisited.
Read more:
Who cares about peak download speeds in 5G?
Time-Critical Communication
Dedicated networks
Footnotes:
1 Source of the three emojis: i2symbol.com
Reiner’s son
While he was still in school, this was a very common “view” I had when I came to see my son in his room – not always to my delight. 😉 Already back then, he knew very well what latency is and how it “felt”. He just called it PING times. He would usually not play – at least not in a competitive mode – while I was working in my room upstairs. The reason is that we were sharing the same fixed broadband connection and each of my larger up- or downloads of data, e.g., email attachments, caused latency spikes in his gaming.