Feb 12 2020

Will you be able to manage thousands of Edge Cloud infrastructures?

Edge provides a huge opportunity to host many use cases on one infrastructure, manageable from a single pane of glass

CSPs and DSPs looking to offer massive broadband and ultra-low latency services need to place performance-sensitive applications and network functions very close to the customer.  

Getting close to end-users not only allows the operator to tap directly into the new revenue streams for ultra-low latency/ultra-reliable services, but also to provide “edge-as-a-service,” and other infrastructure-as-a-service and hosting services to other enterprises. There are three main drivers for Edge Clouds:

Latency:

The physical location is defined by the desired round-trip time (RTT), which is in turn limited by the speed of light.

In rough terms, light propagation in an optical fiber allows a 1 ms RTT over a maximum cable distance of about 100 km. However, because of hops and processing at the source and destination, the actual distance at which a 1 ms RTT can be achieved is roughly 30 km.
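
As a back-of-the-envelope check, the sketch below translates an RTT budget into fiber distance. It assumes light in fiber travels at roughly two-thirds of the speed of light in vacuum, and the 0.7 ms processing overhead used to reproduce the ~30 km figure is an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope RTT vs. fiber distance (illustrative assumptions only).
C_VACUUM_KM_PER_MS = 299_792.458 / 1000      # ~300 km per millisecond
FIBER_FACTOR = 2 / 3                          # light in fiber travels at ~2/3 c

def max_one_way_distance_km(rtt_budget_ms: float, processing_ms: float = 0.0) -> float:
    """Maximum cable distance to the edge site for a given RTT budget."""
    propagation_budget_ms = max(rtt_budget_ms - processing_ms, 0.0)
    one_way_ms = propagation_budget_ms / 2                 # RTT covers both directions
    return one_way_ms * C_VACUUM_KM_PER_MS * FIBER_FACTOR

# Pure propagation: ~100 km fits inside a 1 ms RTT budget.
print(round(max_one_way_distance_km(1.0)))        # ~100
# With ~0.7 ms lost to hops and endpoint processing, only ~30 km remains.
print(round(max_one_way_distance_km(1.0, 0.7)))   # ~30
```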

Bandwidth:

Transporting data beyond the location where it is produced and consumed adds bandwidth cost. When deciding on Edge Cloud locations, we must strike a balance between the cost of the Edge Cloud itself and the bandwidth it saves. The number of Edge sites should also be kept in check to avoid driving up the TCO.
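
The toy cost model below illustrates that trade-off. All of the numbers (per-site TCO, traffic volume, backhaul price, offload ratio) are hypothetical placeholders chosen only to show the shape of the curve: backhaul savings shrink with each additional site while per-site cost keeps adding up, so there is a sweet spot rather than "more sites is always better".

```python
# Illustrative cost model (hypothetical numbers): more edge sites keep more traffic
# local and cut backhaul bandwidth cost, but each site adds its own TCO.

def total_cost(num_sites: int,
               site_tco: float = 50_000.0,          # assumed yearly cost per edge site
               regional_traffic_gbps: float = 400.0,
               backhaul_cost_per_gbps: float = 5_000.0,
               offload_per_site: float = 0.05) -> float:
    # Each extra site offloads a share of the *remaining* backhaul traffic,
    # so the savings show diminishing returns as sites are added.
    backhaul_share = (1.0 - offload_per_site) ** num_sites
    backhaul_cost = regional_traffic_gbps * backhaul_share * backhaul_cost_per_gbps
    return num_sites * site_tco + backhaul_cost

# Sweep site counts to find the sweet spot under these assumptions.
best = min(range(1, 101), key=total_cost)
print(f"lowest total cost at {best} sites: {total_cost(best):,.0f}")
```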

Privacy:

Another factor determining Edge Cloud location is the privacy and location of data within the enterprise, particularly in Industry 4.0 use cases.

Many edge cloud providers do not offer a private Edge but rather an on-premises extension of the public Cloud, which does not meet these privacy needs.

What network providers and users have not thought of

There are also additional considerations, such as deployment and management.

Deployment:

Edge cloud deployments should be fully automated with minimal human intervention, covering the whole edge stack (hardware, cloud infrastructure software and networking).

Operations/Management:

The edge must be manageable remotely, with no need to conduct site visits for any activity.

To allow this, all components of the edge cloud stack must be programmable remotely via well-defined APIs that expose the whole range of activities, including lifecycle management of the hardware and software.
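
The sketch below shows what driving such lifecycle operations over an API could look like. The endpoint paths, payloads and URL are hypothetical placeholders for illustration, not the interface of any specific product.

```python
# Minimal sketch of driving an edge site's lifecycle remotely over a well-defined API.
# The endpoint paths and payloads below are hypothetical, for illustration only.
import requests

EDGE_API = "https://edge-site-042.example.net/api/v1"   # placeholder URL
HEADERS = {"Authorization": "Bearer <token>"}

def firmware_upgrade(target_version: str) -> str:
    """Trigger a hardware firmware upgrade without a site visit."""
    resp = requests.post(f"{EDGE_API}/hardware/firmware-upgrades",
                         json={"version": target_version},
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["task_id"]          # poll this task ID for completion

def check_task(task_id: str) -> str:
    """Check the status of a previously triggered lifecycle operation."""
    resp = requests.get(f"{EDGE_API}/tasks/{task_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["status"]
```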

What operators need to think of

Deploying services and orchestrating resources over geographically distributed, small-footprint Edge data centers requires a new level of intelligence and automation. Managing hundreds or even thousands of sites will be challenging without capable tools dedicated to handling a large number of Distributed Units (DUs).

Centralized DCs usually host one or more separate Cloud infrastructures, but this is not the case at the Edge. Numerous Edge clouds mean thousands of Cloud infrastructures, all requiring management and operations not only on the hardware and application side but also at the virtualization layer.

The existing ETSI MANO model provides all the information needed to manage stable sites through the VIM and the application manager. The VIM and hardware manager know exactly what resources are available, so if you add a new data center application, the orchestrator knows which resources to use when deploying the new function and scaling the Cloud.

But how, for example, would you change the passwords of your 100 edge data centers? Changing them one by one would consume a lot of resources, but the task can be done readily with a central hardware and VIM manager system.

If it takes one minute to log in to a single edge cloud and change its password, then changing passwords for 100 edge clouds requires 100 minutes of time-consuming, repetitive work. It is far easier to perform this task from a centralized management solution that pushes the change to all 100 edge clouds at once, taking a total of just one minute to complete the entire job.
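
A minimal sketch of that fan-out pattern is shown below. The site inventory and the rotate_password() call are hypothetical stand-ins for whatever interface the central hardware and VIM manager exposes; the point is simply that pushing the change concurrently makes the wall-clock time roughly that of the slowest single site rather than 100 sequential logins.

```python
# Sketch of pushing a password rotation to all edge clouds in parallel from a central
# manager. rotate_password() is a hypothetical stand-in, not a specific product API.
from concurrent.futures import ThreadPoolExecutor
import secrets

EDGE_SITES = [f"edge-{i:03d}.example.net" for i in range(100)]   # placeholder inventory

def rotate_password(site: str, new_password: str) -> str:
    # In a real system this would call the central manager's API for the given site.
    return f"{site}: password rotated"

new_password = secrets.token_urlsafe(16)          # one new credential (or one per site)

# All 100 sites are updated concurrently instead of one login at a time.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(lambda s: rotate_password(s, new_password), EDGE_SITES))

print(f"{len(results)} sites updated")
```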

So far, this issue has not been high in the thoughts of CSPs/DSPs, as the thinking has been that the Cloud will take care of it. That may hold today, when CSPs have only a few Edge facilities here and there running MEC or other local applications. When multiple Edges become more common, they can be managed with a central, vendor-independent VIM and hardware manager, which will be the key to making complex networks both highly reliable and secure.

These standardized management procedures provide speed and minimize human errors, reducing OPEX.

We also need to remember that you cannot put everything at the Edge, or can you? As IP Edge functions are distributed to more locations, TCO keeps falling as transport savings grow.


Share your thoughts on this topic by joining the Twitter discussion with @nokia or @nokianetworks using #cloud #telcocloud

About Ismo Matilainen

Ismo is responsible for Nokia AirFrame and Edge cloud marketing. Feel free to ask him about anything related to AirFrame data center hardware and Edge capabilities in distributed 5G networks. He is particularly interested in new business models that use the 5G Edge network to drive digitalization.

Tweet us at @nokianetworks
