Enabling automated EVPN connectivity for data center fabrics
When I started working on interoperability testing of Ethernet VPN (EVPN) in 2015 at the European Advanced Network Testing Center (EANTC), only three vendors were in attendance. At the time, we were mostly concerned with replacing Virtual Private LAN Service (VPLS). Over the last six years, I’m proud of how far we and EVPN have come. From three vendors, we have grown to almost a dozen that support the main EVPN features. In the meantime, EVPN has been extended much further than we imagined back then: it has become the Virtual Private Network (VPN) technology of choice both outside and inside the data center.
Our interest in EVPN originally arose because we were looking for an efficient way to provide intra- and inter-subnet forwarding, along with all-active multi-homing, in data centers. Later, we found that EVPN could resolve the same and many other issues in the WAN for service providers deploying Metro Ethernet Forum services (E-LAN, E-Line, E-Tree, etc.).
Looking at the data center network specifically, Virtual Extensible LAN (VXLAN), defined in RFC 7348, has long been used as an overlay to provide L2 and L3 connectivity for application workloads within leaf-spine architectures. VXLAN defines a tunnelling scheme that overlays L2 connectivity on top of L3 networks. As well as being the most widely used tunnelling protocol in data centers, it solves the scaling problem of Virtual LANs (VLANs) by expanding the segment address space from 4K to 16 million, making it suitable for more complex deployments as well.
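The 4K and 16 million figures come directly from the field widths in the respective headers: 802.1Q carries a 12-bit VLAN ID, while the VXLAN header carries a 24-bit VXLAN Network Identifier (VNI). A minimal sketch, following the header layout in RFC 7348:

```python
import struct

VLAN_ID_BITS = 12   # 802.1Q VLAN ID field width
VNI_BITS = 24       # VXLAN Network Identifier field width (RFC 7348)

# The "4K to 16 million" figures come straight from these field widths.
print(2 ** VLAN_ID_BITS)  # 4096
print(2 ** VNI_BITS)      # 16777216

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348:
    8 flag bits (I bit set), 24 reserved bits, a 24-bit VNI,
    and 8 more reserved bits."""
    if not 0 <= vni < 2 ** VNI_BITS:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: a valid VNI is present
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(10010)
print(len(hdr))  # 8
```

This 8-byte header, plus the outer UDP/IP encapsulation, is what lets an L2 frame traverse the routed leaf-spine underlay.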
VXLAN on its own has no control plane, however, and relies on flood-and-learn to discover remote endpoints. EVPN removes this inefficiency by using BGP in the control plane to discover remote VTEPs (VXLAN Tunnel End Points). Most importantly, EVPN supports all-active multi-homing and distributes workload MAC/IP information in an efficient and scalable way, using multi-protocol BGP and all its tools for scale, which have been proven on the Internet for a long time.
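The difference between the two approaches can be sketched with a toy forwarding-table model. With flood-and-learn, an unknown destination MAC forces replication of the frame to every remote VTEP; with an EVPN-style control plane, MAC reachability is advertised ahead of time (analogous to an EVPN route type 2 advertisement), so the lookup succeeds without flooding. All names here are illustrative, not a real protocol implementation:

```python
remote_vteps = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Flood-and-learn: the table starts empty and is populated only by
# learning source MACs from received traffic.
fl_table = {}

def fl_forward(dst_mac):
    """Return the VTEP(s) a frame for dst_mac must be sent to."""
    if dst_mac in fl_table:
        return [fl_table[dst_mac]]
    return list(remote_vteps)  # unknown unicast: flood to every VTEP

# EVPN-style: the BGP control plane pre-populates the table with
# advertised MAC -> VTEP bindings before any traffic flows.
evpn_table = {"aa:bb:cc:00:00:01": "10.0.0.2"}

def evpn_forward(dst_mac):
    return [evpn_table[dst_mac]] if dst_mac in evpn_table else []

print(fl_forward("aa:bb:cc:00:00:01"))    # floods to all three VTEPs
print(evpn_forward("aa:bb:cc:00:00:01"))  # ['10.0.0.2']
```

The flooded copies in the first case consume bandwidth on every tunnel; the control-plane-driven case sends exactly one copy to the right VTEP.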
Moreover, EVPN has been extended to make the life of data center operators easier, with unique features to optimize workload mobility, protect hosts, prevent accidental loops, suppress ARP and Neighbor Discovery flooding, and optimize the delivery of multicast traffic for key applications.
For inter-data center connectivity (DCI), EVPN provides a scalable way to connect data centers across the WAN, interworking with whatever underlay and overlay technology the WAN service provider uses. MPLS or SRv6 may be used in the WAN as transport for L2 and L3 services, whereas VXLAN is the transport of choice in the data center. EVPN provides the glue that links the L2 and L3 services in the WAN and in the data center, simply by following the procedures in RFC 9014 or draft-ietf-bess-evpn-ipvpn-interworking.
EVPN was designed with automation in mind, and many of the basic connectivity parameters, such as Ethernet Segment identifiers, Route Distinguishers and Route Targets, can be auto-derived, sparing the operator from having to configure or even understand them. However, in large leaf-spine data center designs that use advanced EVPN features, configurations can still get long and complicated.
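To give a feel for what auto-derivation means in practice, a common convention is to derive the route target for an EVPN instance (EVI) from the AS number and a service identifier, and the route distinguisher from the router ID and the EVI, so the operator only has to assign the EVI itself. The exact scheme varies by implementation and by RFC (RFC 7432 and RFC 8365 each define derivation rules); the sketch below is only illustrative of the idea:

```python
# Illustrative only: real derivation rules are defined per RFC and
# per implementation; these functions just show the pattern of
# computing RT/RD strings instead of configuring them by hand.

def auto_rt(asn, evi):
    """Derive a route target as <ASN>:<EVI>."""
    return f"{asn}:{evi}"

def auto_rd(router_id, evi):
    """Derive a route distinguisher as <router-id>:<EVI>."""
    return f"{router_id}:{evi}"

print(auto_rt(65000, 100))        # 65000:100
print(auto_rd("192.0.2.1", 100))  # 192.0.2.1:100
```

One assigned number (the EVI) thus replaces several per-service values that would otherwise have to be configured consistently on every leaf.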
EVPN is programmable through model-based languages such as YANG, which allow for intent-based programming. Much of the complexity can be abstracted away, enabling various intents, such as cloud resource access policies and permissions, to be applied by an SDN controller to specific security zones or subnets. Our customers have indicated a clear preference for tools that help simplify (and automate) EVPN connectivity.
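As a purely hypothetical illustration of that abstraction, an intent might name only the tenant, subnet and gateway, with the controller deriving the EVPN/VXLAN specifics. All field names and derivation choices below are invented for illustration; real controllers work from formal YANG models:

```python
# Hypothetical intent-to-config expansion: the operator supplies only
# application-level constructs, and the controller derives the rest.
# Field names and the VNI/RT schemes are invented for this sketch.

def expand_intent(intent):
    vni = 10000 + intent["tenant_id"]  # derived, never hand-configured
    return {
        "evi": intent["tenant_id"],
        "vni": vni,
        "route_target": f"65000:{intent['tenant_id']}",
        "irb_gateway": intent["gateway"],
        "subnet": intent["subnet"],
    }

cfg = expand_intent({"tenant_id": 42,
                     "subnet": "10.10.0.0/24",
                     "gateway": "10.10.0.1"})
print(cfg["vni"])           # 10042
print(cfg["route_target"])  # 65000:42
```

The point is that the intent contains nothing EVPN-specific; everything below the intent line is computed deterministically and can be regenerated for every leaf in the fabric.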
Since that initial interoperability testing in 2015, I have been part of the Nokia team advancing support for EVPN. Our approach at Nokia has been to invest in EVPN standardization and interoperability testing because we believe EVPN is the VPN technology of choice in the cloud age and has to be open.
Our SR Linux operating system has a solid EVPN and BGP stack, inherited from SR OS, which means it has been operationalized for over a decade in some of the world’s most demanding networks. We have designed it to be extremely scalable and feature-rich. We have also created our Nokia Fabric Services System, an operations and automation product that uses workload intent to automate the creation of the EVPN overlay connectivity required by the applications hosted on the data center servers.
With the Fabric Services System, the automation provided by EVPN is taken to the next level. The Fabric Services System abstracts the data center connectivity so that the operator works only with intent-based constructs that are relevant to the DC applications. There is no need to know anything about EVPN-specific configuration parameters: only subnets, gateways and IP addresses.
As more data center teams adopt leaf-and-spine topologies, EVPN has become the best way to make traditional applications work on a modern architecture. With the rise of edge clouds, the need for data center connectivity and interoperability across the WAN makes the case for EVPN even stronger. Once you add in automated EVPN connectivity to support application workloads, the shift to EVPN is complete. This motivation has driven the design of our network operating system, SR Linux, which when paired with the Fabric Services System, enables EVPN to lead us into the next age of software-defined, automated data center networking.
For additional information on our unique approach to data center fabric operations, please listen to the Packet Pushers podcast:
Operationalizing EVPN For Data Center Networks With Nokia