The success of the internet of things (IoT) and rich cloud services has helped create the need for a new approach to network architecture, one in which data processing occurs at the network edge rather than entirely in the cloud. Edge computing can address concerns such as latency, bandwidth costs, security and privacy.
According to a report published by Statista Research Department, the total installed base of IoT connected devices is projected to reach 75.44 billion worldwide by 2025, a fivefold increase in ten years. The IoT, enabled by technologies that are already ubiquitous, is the next major step in delivering the promise of a connected world. Not only are these devices proliferating at an exponential pace, but they are also becoming much smarter. They capture and deliver more data, make real-time decisions and interact with our everyday lives. This fundamental change in how we experience data is driven by the newest phase of the computing revolution: edge computing.
According to Gartner, edge computing is defined as “a part of a distributed computing topology in which information processing is located close to the edge – where things and people produce or consume that information.” In practice, it is an approach that brings computation as close to the source of data as possible. We often think of data as living in the cloud, because that is where we tend to process it. However, the cloud is not where data originates. Data is created by us, in the environments where we operate and the places where we work. It comes from our interactions with the equipment we use as we perform various tasks.
Although cloud computing continues to play a significant role in modern network architecture, the exciting possibilities offered by IoT devices, which are capable of processing the data they gather closer to the point of origination, are forcing companies to rethink their approach to IT infrastructure. Edge computing means running fewer processes in the cloud and moving those processes to local places, such as on a user’s computer, an IoT device or an edge server. Bringing computation to the network’s edge minimizes the amount of long-distance communication that has to happen between a client and server, thereby reducing latency and bandwidth use.
This on-device approach helps reduce latency for critical applications, lowers dependence on the cloud and better manages the massive influx of data being generated. For example, consider video cameras that stream live CCTV footage to a phone or computer. A single camera can transmit its data across a network quite easily, but problems arise as the number of devices transmitting simultaneously grows: multiply one camera by hundreds or thousands, and the result is poor-quality footage due to latency, plus high bandwidth costs. Edge computing helps solve this problem by reducing the amount of data transmitted across the network.
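A quick back-of-the-envelope sketch shows why camera counts matter. The per-stream bitrate below is an illustrative assumption (roughly a 1080p H.264 stream), not a figure from any specific deployment:

```python
# Illustrative sketch: aggregate uplink bandwidth if every camera streams
# its raw footage to the cloud. The 4 Mbps per-camera bitrate is an
# assumption for the example, not a measured value.

CAMERA_BITRATE_MBPS = 4.0  # assumed bitrate of one 1080p H.264 stream


def total_uplink_mbps(num_cameras: int,
                      bitrate_mbps: float = CAMERA_BITRATE_MBPS) -> float:
    """Total uplink bandwidth needed when all cameras stream to the cloud."""
    return num_cameras * bitrate_mbps


for n in (1, 100, 1000):
    print(f"{n:>5} cameras -> {total_uplink_mbps(n):,.0f} Mbps uplink")
```

Under these assumptions, one camera needs a modest 4 Mbps, but a thousand cameras demand 4 Gbps of sustained uplink; filtering or analyzing footage at the edge and sending only events or summaries upstream avoids that cost.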
The ability to process and store data quickly is the most significant benefit of edge computing. It enables more efficient, real-time applications that are critical for companies. Service providers and other enterprises are continually looking for a competitive advantage through better customer engagement and, of course, lower operating costs.
It’s easy to forget that data doesn’t travel instantaneously. Current commercial fiber-optic technology allows data to travel at roughly two-thirds of the speed of light, moving from New York to San Francisco in about 20 milliseconds. While that sounds fast, it does not account for the amount of data being transmitted across the network. As the number of devices increases, so does the volume of data, so traffic jams in the network are almost guaranteed.
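The 20-millisecond figure is easy to verify from first principles. The sketch below uses the approximate great-circle distance between the two cities; real fiber routes are longer, so actual latency is higher still:

```python
# Rough check of the ~20 ms New York -> San Francisco claim.
# Light in fiber travels at about two-thirds of c; the distance used here
# is the approximate great-circle distance, an assumption for the sketch.

SPEED_OF_LIGHT_KM_S = 299_792
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3  # ~200,000 km/s in fiber
NY_TO_SF_KM = 4_130  # approximate great-circle distance

one_way_ms = NY_TO_SF_KM / FIBER_SPEED_KM_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
```

This is propagation delay alone; queuing, routing and processing at each hop add more, which is exactly the overhead edge computing sidesteps by shortening the path.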
By processing data where it originates and reducing the physical distance it must travel, edge computing can greatly reduce latency and ensure higher speeds for end users. Considering that even a single moment of latency or downtime can cost companies thousands of dollars, the efficiency advantages of edge computing cannot be overlooked.
Cisco has forecast that, by 2022, 82 percent of IP traffic will be video content. This traffic is expensive to transport to the cloud and process at a cloud datacenter. Service providers are, therefore, motivated to process video data at the edge to reduce jitter, improve video quality and create new revenue with value-added services such as content delivery networks, cloud gaming or video analytics without tremendous bandwidth costs.
Security and privacy can also be improved with edge computing by keeping sensitive data on the device. However, it also opens up a world of additional security headaches. With more smart devices in the mix, such as edge servers and IoT devices with built-in computers, there are new opportunities for malware to compromise them. Storing data across multiple devices also increases the attack surface: once access is gained, a hacker could use a single weak point as an entry to infiltrate the rest of the network.
On the other hand, sending data over the public internet also brings an inherent risk of it being corrupted or stolen. Processing data at the edge rather than a centralized cloud location means less data is exposed. Traditional cloud computing architecture is inherently centralized, which makes it especially vulnerable to DDoS attacks. Edge computing distributes processing, storage, and applications across a wide range of devices and datacenters, which makes it difficult for any single disruption to take down the entire network.
With that in mind, edge computing can also offer better reliability. With edge devices processing and storing data close to the source, a network outage in a distant location will not have an adverse effect on local operations. Even in the event of a nearby datacenter outage, edge devices will continue to operate on their own because they handle processing natively.
There are several successful edge computing use cases in live environments today: autonomous vehicles, immersive technologies in retail and smart home devices such as Amazon Alexa. For autonomous driving technologies to replace human drivers, cars must be capable of reacting to road incidents in real time. Data transmission between vehicle sensors and back-end cloud datacenters may take 100 milliseconds, a delay that can have a significant impact on the reactions of a self-driving vehicle. To be successful, an autonomous vehicle needs the capability to detect, analyze and make a potentially life-altering decision without delay. Therefore, that data must be processed on the vehicle’s own computer, at the network edge.
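The 100-millisecond figure becomes concrete once you translate it into distance travelled. The speeds and the on-board latency below are assumptions chosen for illustration:

```python
# Illustrative sketch: how far a vehicle moves while waiting for a decision.
# The 100 km/h speed and the 5 ms on-board latency are assumptions for the
# example; the 100 ms cloud round trip is the figure cited in the text.

def distance_travelled_m(speed_kmh: float, latency_ms: float) -> float:
    """Metres covered during the given latency at the given speed."""
    return speed_kmh / 3.6 * latency_ms / 1000


for label, latency_ms in (("cloud round trip (100 ms)", 100),
                          ("on-board decision (5 ms)", 5)):
    d = distance_travelled_m(speed_kmh=100, latency_ms=latency_ms)
    print(f"{label}: {d:.2f} m at 100 km/h")
```

Under these assumptions, a car at highway speed covers nearly three metres during a cloud round trip but only a few centimetres during an on-board decision, which is the gap edge processing closes.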
The concept of edge computing is not new. It is the sheer proliferation of devices and the increase in data that have heightened the need for this approach. As more of these interconnected, intelligent devices become available, we have reached a crossroads in where and how data is processed and accessed; hence, edge computing. It is changing how we interact with our data and is providing new business models and opportunities for those enterprises that can leverage it.