Aug 01 2019

Microservice disaggregation, performance and the role of automation

The telecommunications community continues to focus on transitioning from virtualization to cloud-ready and ultimately cloud-native for its core network. A cloud environment is advantageous for auto-scaling and dynamic workload placement to deliver today’s 4G services and applications, and it will be required to support the expanded business and service opportunities promised by 5G.

The industry is discussing and investigating the inclusion of further cloud and webscale technologies, such as microservices and container deployments, in the core network. Continued software disaggregation, including appropriate use of concepts such as microservices, is intended to further increase feature velocity and de-layer network function architecture, with the goal of producing further simplified, reusable and dynamically scalable components.

However, there is a need to balance telco-quality performance and reliability with the potential flexibility and agility of leveraging microservices and containers within the core. It is critical to achieve the same resilience and availability metrics and support key operational features such as measurability, manageability, scalability and programmability.

Container technology offers capabilities that may facilitate increased feature velocity and automated updates, further expanding an interlinked development and operations (DevOps) way of working. Containers can be deployed in a number of different ways, including within virtual machines (VMs) or in bare-metal container environments that utilize the open-source Kubernetes framework.
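
As a purely illustrative sketch (the image name, namespace and resource figures are hypothetical assumptions, not product artifacts), the following Python snippet uses the official Kubernetes client to declare a small, replicated containerized network function. The same declarative description applies whether the cluster’s worker nodes are VMs or bare metal.

```python
# Minimal sketch: declaring a hypothetical containerized network function
# with the official Kubernetes Python client. All names, images and sizes
# are illustrative assumptions.
from kubernetes import client, config

def deploy_network_function():
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

    container = client.V1Container(
        name="example-nf",
        image="registry.example.com/example-nf:1.0",  # hypothetical image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "2", "memory": "4Gi"},
            limits={"cpu": "4", "memory": "8Gi"},
        ),
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="example-nf"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # Kubernetes keeps three instances running and reschedules on failure
            selector=client.V1LabelSelector(match_labels={"app": "example-nf"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "example-nf"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="core-network", body=deployment
    )

if __name__ == "__main__":
    deploy_network_function()
```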

There are ongoing discussions about the relative virtues of VMs, containers within VMs and bare-metal container deployments. The market consensus seems to be that containers are more lightweight and may offer richer life cycle management capabilities, but they may also be seen as somewhat less secure than VMs.

Hybrid architecture approaches have been advocated in which containers run inside VMs to leverage the VMs’ greater security; this is well suited to mainstream cloud stack deployments such as OpenStack and VM-based NFVI platforms. Conversely, VMs can be run as containers to provide security from the inside out, which can enable a fully containerized solution where native containerization is not yet available. Another possible direction is ultra-lightweight VMs, which have shown enhanced performance compared to containers.

The potential simplifications made possible by de-layering and disaggregation, however, come at the cost of creeping complexity in other aspects of operations and can cause unintended breakdowns in the end-to-end process. For instance, a cloud-native telecommunications infrastructure operating a continuous integration and continuous delivery (CI/CD) deployment model, in which new components may be rapidly changed and introduced, might experience the failure of some of those components as reduced performance or functional degradation rather than a total loss of service. However, disaggregating the network functions too finely could also increase operational complexity.
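
To make that distinction concrete, here is a minimal Python sketch, using assumed names and thresholds, of how a management system might classify a disaggregated function as healthy, degraded or down from the health of its replicas, reporting degradation rather than an outage while some instances still carry traffic.

```python
# Illustrative sketch: classifying a disaggregated network function as
# healthy, degraded or down from per-replica health. The 80% threshold is
# an arbitrary assumption chosen for the example.
from enum import Enum

class ServiceState(Enum):
    HEALTHY = "healthy"
    DEGRADED = "degraded"  # partial failure: reduced capacity, service continues
    DOWN = "down"          # total loss of service

def classify(ready_replicas: int, desired_replicas: int,
             degraded_threshold: float = 0.8) -> ServiceState:
    if desired_replicas == 0 or ready_replicas == 0:
        return ServiceState.DOWN
    if ready_replicas / desired_replicas >= degraded_threshold:
        return ServiceState.HEALTHY
    return ServiceState.DEGRADED

# Losing one of four replicas is reported as degraded performance, not an outage.
print(classify(ready_replicas=3, desired_replicas=4))  # ServiceState.DEGRADED
```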

The sheer number and frequency of updates make the traditional method of collecting alarms for analysis by a human operator difficult, if not impossible, under a CI/CD and DevOps regime. Machine learning (ML) and augmented intelligence (AI) will become more important for taking actions and providing insights, for example to identify end-to-end functional issues from the internal metrics these components may provide. With the appropriate insights and intent-based systems used to create actionable outcomes, the management systems could automate the immediate rollback of newly delivered software elements at a speed that would likely be unachievable by a human operator.
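
One hedged sketch of what such automation might look like: a control loop that watches an end-to-end service metric and rolls back a newly delivered component when the metric breaches an intent-defined target. The metric source, deployment name, thresholds and rollback mechanism below are illustrative assumptions, and in practice an ML model would typically supply the anomaly signal rather than a static threshold.

```python
# Illustrative sketch: an intent-driven loop that rolls back a newly delivered
# software element when an end-to-end metric breaches its target. The metric
# source, deployment name and thresholds are assumptions for the example.
import subprocess
import time

SUCCESS_RATE_TARGET = 0.999       # intent: e.g. session setup success rate
CHECK_INTERVAL_S = 30
BREACHES_BEFORE_ROLLBACK = 3      # tolerate brief transients

def read_success_rate() -> float:
    """Placeholder: query the platform's monitoring system for the metric."""
    raise NotImplementedError("wire this to your metrics stack")

def rollback(deployment: str) -> None:
    # One possible mechanism: ask Kubernetes to undo the latest rollout.
    subprocess.run(["kubectl", "rollout", "undo", f"deployment/{deployment}"],
                   check=True)

def control_loop(deployment: str) -> None:
    breaches = 0
    while True:
        if read_success_rate() < SUCCESS_RATE_TARGET:
            breaches += 1
            if breaches >= BREACHES_BEFORE_ROLLBACK:
                rollback(deployment)  # far faster than manual alarm analysis
                breaches = 0
        else:
            breaches = 0
        time.sleep(CHECK_INTERVAL_S)
```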

To fully realize the socioeconomic possibilities of 5G, and to meet the increasingly diverse service requirements on 4G cores, we need to move to a cloud-native core. Within this new cloud-native core architecture we have the further opportunity, where appropriate, to apply a range of cloud and webscale concepts and technologies such as microservices and containers.

The telco environment is complex and there are some challenging issues. We need to ensure that we fully understand the implications for the various core network functions rather than assume one model fits all. If we are to maintain reliability, availability, security and telco-grade operations, we need to continue to deliver key cloud-native architectural features while carefully evaluating the move to microservices and containers.

Want to know more?

Take a look at our Cloud Packet Core E-book to see how our cloud-native design will drive new economic value.

Visit our Cloud Packet Core solutions page to see how our cloud-native design helps you to profitably and cost-effectively evolve.

Share your thoughts on this topic by joining the Twitter discussion with @nokianetworks or @nokia using #5G #cloud #CloudNative #CSP

About Rob McManus

Rob leads the product marketing of Nokia’s Cloud Packet Core. If you need convincing about the exciting possibilities of a cloud-native and converged packet core – talk to him or keep an eye out at key industry forums where he’s a regular speaker.

Tweet us at @nokianetworks