For both lifelike virtual reality experiences and safe autonomous vehicles, one requirement trumps all others: a network that can respond almost in real time. And the networks of tomorrow will do precisely that.
Latency, or response time, is the interval between an event and the visible reaction to it. In telecommunications, latency is ultimately bounded by the laws of physics: it depends on the length of the path the data must travel through the networks. What does this mean for users? Take mobile virtual reality (VR) as an example. Starting at around 14 to 16 images per second (the exact threshold varies from person to person), the human brain perceives successive pictures as moving images, but the result isn't always smooth and judder-free. That is why motion pictures use a standard of around 24 pictures per second. VR users don't want the virtual world transmitted through their glasses to be jerky and nauseating. The network carrying the data must therefore deliver the necessary number of pictures per second with the most consistent latency possible. The shorter the delay, the more realistic the virtual world is likely to feel.
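The frame rates above translate directly into a per-frame time budget. A minimal sketch of that arithmetic (the 14–16 and 24 fps figures come from the text; the 90 fps value is an illustrative assumption often cited for VR headsets, not a figure from this article):

```python
# Per-frame time budget at a given frame rate.
# 16 and 24 fps are from the text; 90 fps is an assumed VR-class rate.
def frame_budget_ms(fps: float) -> float:
    """Time available to produce and deliver one frame, in milliseconds."""
    return 1000.0 / fps

for fps in (16, 24, 90):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):.1f} ms per frame")
```

The higher the frame rate, the smaller the slice of time left for the network, which is why consistency of latency matters as much as its average value.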
Another glimpse into the future is autonomous driving, which mobile data networks can help make safer. Suppose a vehicle brakes just around the next corner, or just over the crest of a hill, ahead of an autonomous car. That information must flow through the networks, be processed, and be delivered, all at lightning speed. The car behind can respond appropriately only if it receives the warning in time.
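What "in time" means can be made concrete by asking how far a car travels while the warning is in transit. A rough sketch, where the speed and delay values are illustrative assumptions rather than figures from the article:

```python
# Distance a vehicle covers while a braking warning is still in transit.
# The 100 km/h speed and the delay values are illustrative assumptions.
def distance_during_delay_m(speed_kmh: float, delay_ms: float) -> float:
    """Metres travelled at speed_kmh during delay_ms of network latency."""
    speed_m_per_ms = speed_kmh * 1000.0 / 3_600_000.0  # km/h -> m/ms
    return speed_m_per_ms * delay_ms

# At 100 km/h, a 1 ms delay costs under 3 cm of travel,
# while a 100 ms detour through a distant data center costs roughly 2.8 m.
print(distance_during_delay_m(100, 1))
print(distance_during_delay_m(100, 100))
```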
A substantial factor in achieving low latency is therefore proximity: data should not have to travel to central server farms on the other side of the globe before a response comes back to the source – far too late. This is where the laws of physics impose their limits. In optical fiber, data travels at roughly two-thirds of the speed of light, about 200 kilometers per millisecond. Because the signal must make the round trip, an application that needs a latency of about one millisecond can theoretically tolerate a data center no more than 100 kilometers away. And that calculation doesn't even include delays caused by network components along the route or by data processing.
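The article's back-of-the-envelope calculation can be written out as a small sketch (the 200 km/ms fiber speed and the round-trip halving are from the text; the function name is made up for illustration):

```python
# Light in optical fiber travels at roughly two-thirds of c: ~200 km/ms.
FIBER_SPEED_KM_PER_MS = 200.0

def max_distance_km(latency_budget_ms: float) -> float:
    """Farthest a data center can sit for a given round-trip latency budget,
    ignoring processing and switching delays along the route."""
    # The signal travels out and back, so only half the budget covers distance.
    return FIBER_SPEED_KM_PER_MS * latency_budget_ms / 2.0

print(max_distance_km(1.0))   # 100.0 km, matching the article's figure
print(max_distance_km(10.0))  # 1000.0 km for a 10 ms budget
```

In practice the usable radius is smaller still, since routers, switches, and server-side processing all consume part of the budget.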
The key to low latency is therefore to place processing power not hundreds of kilometers away but as close to the user as possible – directly at, or close to, the mobile network's local base station, for example. To bring data processing and applications closer to the customer and meet the network requirements of the future, Deutsche Telekom is now working on what it calls the Edge Cloud. Beyond raw processing power, this technology can host a wide range of local services. The closer the "brain" is, the faster the reaction time – which, for applications such as connected car technology, may need to be as little as a millisecond.