Stripping away the hype of virtualization and getting down to what really matters
A couple of years later, we are starting to see real implementations of virtualized network functions (VNFs), and we're all madly creating business cases to assess the true operational and capital savings. It turns out that the business case for deploying VNFs hinges on a few key factors, and most NFV offers won't pass the sniff test.
Implementing NFV? Check these first
Performance: Like all things in life, unless you can perform efficiently you won’t last long. Performance, in this case, is a critical factor in determining the actual hardware cost of running the VNF. The difference between a great VNF implementation and a good one is like the difference between LED and incandescent bulbs – great VNFs deliver the same performance using a fraction of the compute and power used by good VNFs.
Scalability: If you can’t scale, then don’t even bother coming to the party. A network’s size is determined by the number of customers, services, nodes, and interconnections to other networks. The easiest way to test for scale in a VNF is to try to bring it to its knees: install large configurations, load large tables, and simulate scenarios with many customers (or sessions). A scalable VNF will deliver high performance under these loaded conditions, ensuring you don’t hit logjams as you grow your network.
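The "bring it to its knees" test above can be sketched in a few lines. This is a toy illustration, not Nokia's or EANTC's test methodology: `process_packet` is a hypothetical stand-in for a VNF's per-packet work (reduced here to a dictionary lookup), and the "large configuration" is simply a large route table. The point is the shape of the test – install a big table, drive many simulated sessions through it, and compare throughput against the small-table baseline.

```python
import random
import time

def process_packet(table, dest):
    # Toy VNF workload: a route lookup, simplified to a dict lookup.
    # A real VNF would do longest-prefix match, classification, etc.
    return table.get(dest, "default-route")

def load_test(num_routes, num_sessions):
    # Install a large "configuration" (routing table), then time lookups
    # across many simulated sessions.
    table = {f"route-{i}": f"next-hop-{i}" for i in range(num_routes)}
    keys = list(table)
    dests = [random.choice(keys) for _ in range(num_sessions)]
    start = time.perf_counter()
    for dest in dests:
        process_packet(table, dest)
    elapsed = time.perf_counter() - start
    return num_sessions / elapsed  # lookups per second

small = load_test(num_routes=1_000, num_sessions=50_000)
large = load_test(num_routes=100_000, num_sessions=50_000)
print(f"small config: {small:,.0f} lookups/s, large config: {large:,.0f} lookups/s")
```

A scalable implementation keeps the two numbers close; a poor one degrades sharply as the configuration grows.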
Resiliency: Every so often, the sky falls. And in a cloud infrastructure, this could happen more often than usual because of the many new moving parts and the complexity of the NFV jigsaw puzzle. We know that a VNF running in the cloud is really just software running on a server. So if the server fails, the software stops running – and that could spell disaster. A great VNF implementation will support resiliency features to ensure that when something fails (and you can bet that it will), the service keeps running.
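One common resiliency pattern behind "the service keeps running" is active/standby failover driven by health checks. The sketch below is a deliberately simplified illustration under assumed names (`VnfInstance`, `serve` are hypothetical, not any vendor's API): traffic follows the active instance, and when a health check fails, requests are redirected to the standby without being dropped.

```python
class VnfInstance:
    """Toy stand-in for a VNF process; `alive` flips when its 'server' fails."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def health_check(self):
        return self.alive

def serve(primary, standby, requests):
    # Active/standby failover: route each request to a healthy instance,
    # switching over when the current active instance fails its health check.
    active = primary
    served = []
    for req in requests:
        if not active.health_check():
            active = standby if active is primary else primary  # fail over
        served.append((req, active.name))
    return served

primary, standby = VnfInstance("vnf-a"), VnfInstance("vnf-b")
log = serve(primary, standby, ["r1", "r2"])
primary.alive = False                      # simulate a server failure
log += serve(primary, standby, ["r3"])
print(log)  # -> [('r1', 'vnf-a'), ('r2', 'vnf-a'), ('r3', 'vnf-b')]
```

Real deployments add state synchronization between the instances so the standby picks up mid-session, which is where much of the engineering effort goes.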
Manageability: Go with the flow. Virtualization promises the ability to automatically install, expand, shrink, and move network functions from one cloud to another (one part of the network or a server to another) via software. The idea is scintillating, but keeping track of which virtual functions are running on what hardware and quickly isolating problems – all in real time – is not an easy task. A robust set of user- and business-friendly management tools (i.e., not an arcane set of CLI screens) provides a living, breathing blueprint of the architecture, one that also acts on issues before they impact services. It’s an absolutely critical requirement for managing a large virtualized environment.
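At its core, that "living, breathing blueprint" is an inventory that tracks which VNF runs on which server and stays current as functions move. This is a toy sketch of the idea only – the `Inventory` class and VNF/server names are invented for illustration, not drawn from any real management product:

```python
class Inventory:
    """Toy blueprint: which VNF runs on which server, updated as functions move."""
    def __init__(self):
        self.placement = {}                # VNF name -> server

    def place(self, vnf, server):
        self.placement[vnf] = server

    def move(self, vnf, new_server):
        # A VNF migrating between servers is just an inventory update here.
        self.placement[vnf] = new_server

    def impacted_by(self, failed_server):
        # Problem isolation: which services does a server failure touch?
        return [v for v, s in self.placement.items() if s == failed_server]

inv = Inventory()
inv.place("vRouter-1", "server-3")
inv.place("vFirewall-1", "server-3")
inv.move("vRouter-1", "server-7")          # vRouter-1 migrates away
print(inv.impacted_by("server-3"))         # -> ['vFirewall-1']
```

The hard part in practice is keeping this mapping accurate in real time across thousands of instances, which is why the management tooling matters as much as the VNFs themselves.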
These four factors sound simple – almost a given, perhaps. But cloud networking introduces a different paradigm with new, non-trivial challenges, which can make deploying VNFs even more complex than deploying physical network functions. There’s massive research and development involved in the proper design, optimization, and delivery of robust, well-architected virtualization solutions. At Nokia, we’ve invested a lot and have gone to considerable effort to make sure we’re getting it right. And the proof is in the pudding, as you can see in the results of the recent third-party tests and validation of Nokia’s IP VNFs by EANTC, the European Advanced Networking Test Center.
The network implementation model may indeed be changing with NFV, but the need for true carrier grade performance remains paramount. And with new standards in place – for VNF performance, scalability, resiliency and manageability – NFV’s trajectory could soon be on a new course.
To learn more about bringing the benefits of NFV to your business, visit our NFV portfolio page.
Share your thoughts on this topic by replying below – or join the Twitter discussion with @nokianetworks using #NFV #telcocloud #VNF #virtualization #cloud