In real time over the network: low latency is the key

For both lifelike virtual reality experiences and safe self-driving autonomous vehicles, there is one requirement that trumps all others: you need a network that can respond in real time to every situation. And tomorrow's network will do precisely that.

Latency – or reaction time – describes the period between the occurrence of an event and a visible reaction to it. In telecommunications, latency limits are governed by the laws of physics, as a function of the distance the data needs to travel through the networks.

What does that feel like? Let's take the example of virtual reality (VR). Anyone exploring a virtual world while wearing VR goggles needs to receive something back from a remote server: namely an image that matches the virtual explorer's movements and expectations. The rule of thumb is quite simple: the shorter the delay, the more realistic the user's experience of the virtual world is likely to feel. The same goes for online gaming.

Another example, offering a glimpse further into the future, is autonomous driving technology, which mobile data networks have the potential to make safer. Take, for example, the information that a vehicle has just braked around the next corner or just over the crest of a hill in front of an autonomous vehicle. That data needs to flow through the networks, be processed and be delivered, all at lightning speed. The vehicle behind can respond appropriately only if it receives the information in time.

In less than the blink of an eye

Today's mobile data networks using LTE (Long Term Evolution) can respond, at best, with a latency of around 40 milliseconds. While that's less than the blink of an eye, it's still more than the reaction time of many talented and well-trained professional sportspeople, who can cut their reaction times down to as little as seven milliseconds.

But for the technology of the future, even that's much too long. For smart vehicles, latencies simply cannot be short enough: the goal is to recognize potential hazards before humans become aware of them. In such situations, data needs to be transmitted reliably and with an extremely short delay (low latency) over the mobile data network. Connections that can do that are said to be capable of "real-time communication".

Autonomous cars and ever more sophisticated VR are only two examples of the many future applications of such technology. They all demonstrate that the network of the future, based on the coming 5G communication standard, will offer more than just greater bandwidth – i.e. large quantities of data flowing through the cables more quickly. Its quality will be defined by its reliability and by its ability to switch seamlessly between fixed and mobile networks, making the best possible connection automatically at all times. And because it can respond quickly whenever required to do so, it has the necessary "low latency". What applies to mobile data networks is every bit as important for fixed-line applications too. Added to this, demand is increasing for latencies that are largely free of fluctuation – in future telemedicine, for example, where surgeons will conduct remote operations.

Processing power needs to move closer to the consumer

A substantial factor in achieving low latency is proximity: avoiding the need to send data to central server farms on the other side of the globe before a response is sent back to the source. This is where the laws of physics impose their limits. In optical fiber, data travels at roughly two-thirds the speed of light – about 200 kilometers per millisecond. So for applications that need a latency of about one millisecond, the data center can theoretically be located no more than 100 kilometers away, since the data must travel there and back. This calculation doesn't even include delays caused by network components along the route or by data processing.
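The arithmetic above can be sketched in a few lines of Python. This is a simplified model based only on propagation speed in fiber; the constant and function name are illustrative, and real-world budgets would also subtract switching and processing delays.

```python
PROPAGATION_KM_PER_MS = 200.0  # light in optical fiber: roughly 2/3 of c

def max_datacenter_distance_km(latency_budget_ms: float) -> float:
    """Theoretical upper bound on data-center distance for a given
    round-trip latency budget, ignoring processing and switching delays."""
    round_trip_km = latency_budget_ms * PROPAGATION_KM_PER_MS
    return round_trip_km / 2  # the data must travel there and back

print(max_datacenter_distance_km(1.0))   # 1 ms budget  -> 100.0 km
print(max_datacenter_distance_km(40.0))  # LTE-class 40 ms -> 4000.0 km
```

The halving step is the point of the example: the one-millisecond budget covers the round trip, so only half the distance is available each way.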

The key to low latency is therefore to have processing power not hundreds of kilometers away but located as close to the user as possible – directly at or near the mobile network's local base station, for example. The closer it is, the faster the reaction time. For certain applications, such as connected car technology, that reaction time may need to be as little as a millisecond.

The future is already in the starting blocks

One thing is clear: a latency – or "transmission delay" – of less than 20 milliseconds was successfully piloted on the A9 autobahn between Munich and Nuremberg back in 2015. Deutsche Telekom, Continental, the Fraunhofer Institute ESK and Nokia took advantage of the "mobile edge computing" principle in this pilot project, combining it with precise geolocation technology. Data is sent only to the nearest mobile base station for processing, rather than any further into the network. Such shorter transmission paths are already facilitating new safety functions for connected car technology.

In order to bring data processing and applications closer to the customer and to live up to the future requirements of the network, Deutsche Telekom is now working on what it calls the "Edge Cloud". Besides raw processing power, this technology will provide a wide range of local services. That makes it very complex, but absolutely necessary.

It "pings"

In tests conducted at the beginning of 2016, Deutsche Telekom proved that reaction times can indeed be reduced to less than a millisecond, setting a world record. Low latency is the key to a wide range of successful future applications – in robotics and Industry 4.0, for example, where a robot must be controlled remotely in real time to perform particularly precise and sensitive tasks, or where it must move components autonomously through a factory for further processing. It's clear, then, that the future is already in the starting blocks. And Deutsche Telekom is preparing for it, researching future applications in virtual and augmented reality, as well as robotics and related fields, in one of its projects.

By the way, another, more onomatopoeic technical term for "reaction time", "transmission time" and latency is the simple word "ping". The term has its origins in military technology, in which sound waves were used to locate submarines by counting the time they took to bounce back off their hard hulls. The word comes from the sound made by the waves as they returned. You can use the well-known diagnostic tool of the same name from any computer to see how long it takes for a data packet to travel to a recipient and back again. "Ping" time plays an important role in popular online games too: if an action you take can be sent quickly to the game server and its consequences shown on screen immediately, you'll likely be able to play that bit better.
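The idea behind the ping tool – send a small packet, wait for it to come back, time the round trip – can be illustrated with a minimal Python sketch. Note this uses an ordinary TCP echo over localhost purely for illustration; the real ping utility uses ICMP echo messages, which require elevated privileges to send from most programs.

```python
import socket
import threading
import time

def echo_once(server_sock: socket.socket) -> None:
    """Accept one connection and echo whatever arrives back to the sender."""
    conn, _addr = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def measure_rtt_ms(host: str, port: int) -> float:
    """Send a small payload and time the round trip, ping-style."""
    with socket.create_connection((host, port)) as client:
        start = time.perf_counter()
        client.sendall(b"ping")
        client.recv(1024)  # wait for the echo to come back
        return (time.perf_counter() - start) * 1000.0

# Demo against a throwaway local echo server.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

rtt = measure_rtt_ms("127.0.0.1", server.getsockname()[1])
print(f"round trip: {rtt:.3f} ms")
```

Over localhost the round trip is a small fraction of a millisecond; pointing the same measurement at a distant host makes the distance-dependent delay described above directly visible.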