MR-1S1 [Torres] & MR-1S3 [Quevedo], IMDEA Networks Institute, Avda. del Mar Mediterráneo 22, 28918 Leganés – Madrid
Network Function Virtualization (NFV), coupled with Software Defined Networking (SDN), promises to revolutionize networking. Operators can create, update, or scale out/in (virtualized) network functions (vNFs) on demand, construct a sequence of vNFs to form a service function chain (SFC), and steer traffic through it to meet various policy and service requirements. Virtualizing network functions (NFs) that are traditionally performed by specialized hardware boxes introduces many new challenges, such as slower packet processing and a higher likelihood of software failures. Scale-out is a major (software) technique afforded by NFV for circumventing the performance challenge: by distributing NFs across multiple servers, it is possible to significantly increase the overall system throughput and reduce SFC processing latency. Unfortunately, as most vNFs of interest are stateful, this poses many challenges in automatically and elastically scaling NFV across multiple servers while ensuring the correctness of SFC processing.
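To make the SFC and scale-out ideas above concrete, here is a minimal, illustrative sketch (not HydraNF code; the NF names and fields are hypothetical): packets are steered through an ordered chain of two toy vNFs, and the NAT's per-flow table is exactly the kind of state that complicates scaling a chain out across servers, since all packets of a flow must observe a consistent copy of that state.

```python
# Toy SFC sketch (hypothetical, for illustration only): a firewall followed by a NAT.
from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: str
    payload: bytes

class Firewall:
    def __init__(self, blocked):
        self.blocked = set(blocked)
    def process(self, pkt):
        # Stateless check: drop packets of blocked flows.
        return None if pkt.flow_id in self.blocked else pkt

class NAT:
    def __init__(self):
        self.table = {}        # per-flow state: flow_id -> translated id
        self.next_port = 10000
    def process(self, pkt):
        # Stateful rewrite: the same flow must always hit the same table entry.
        if pkt.flow_id not in self.table:
            self.table[pkt.flow_id] = f"gw:{self.next_port}"
            self.next_port += 1
        pkt.flow_id = self.table[pkt.flow_id]
        return pkt

def run_chain(chain, pkt):
    """Steer a packet through the SFC; a dropped packet short-circuits the chain."""
    for nf in chain:
        pkt = nf.process(pkt)
        if pkt is None:
            return None
    return pkt

sfc = [Firewall(blocked={"10.0.0.9->8.8.8.8"}), NAT()]
print(run_chain(sfc, Packet("10.0.0.5->1.1.1.1", b"hello")))
```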
In this talk, we discuss the challenges in scaling SFC processing and present a novel distributed parallelization framework, dubbed HydraNF, for accelerating NFV service function chain processing at scale. HydraNF is designed to simultaneously tackle the performance and auto-scaling challenges in real-world, large-scale deployments of NFV by taking full advantage of a cluster of multi-core servers for dynamic and elastic scale-out. Leveraging the software nature of vNFs, HydraNF carefully analyzes the configuration policies, operational rules, and state variables of vNFs to identify both opportunities and constraints for parallel and distributed SFC packet processing. It automatically scales out SFC processing by distributing it across multiple servers and parallelizes the NFV packet processing pipelines within each server by utilizing multiple cores: this is done by exploiting parallelism at both the network function level and the traffic level. We discuss the initial design, key components, a prototype implementation, and a preliminary evaluation of HydraNF. HydraNF is practical in that it requires no modifications to existing NFs, allowing for incremental deployment. Our experiments show that HydraNF reduces latency by up to 51% with only 7% CPU overhead and improves overall system throughput by 1.42–1.87×.
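As a rough illustration of the two forms of parallelism mentioned above (a hypothetical sketch under assumed names, not HydraNF's actual design or API): traffic-level parallelism can be thought of as hash-partitioning flows across workers or servers so that per-flow state stays local to one worker, while NF-level parallelism runs NFs of a chain that do not conflict on packet fields or shared state concurrently and then merges the results.

```python
# Illustrative sketch only (assumed names, not the HydraNF implementation).
from concurrent.futures import ThreadPoolExecutor
from zlib import crc32

NUM_WORKERS = 4

def worker_for(flow_id: str) -> int:
    """Traffic-level parallelism: consistently hash a flow to one worker/server,
    so its per-flow state is kept on a single node."""
    return crc32(flow_id.encode()) % NUM_WORKERS

def counter_nf(pkt):
    # Read-only on the packet; keeps a private counter.
    counter_nf.count = getattr(counter_nf, "count", 0) + 1
    return pkt

def logger_nf(pkt):
    # Read-only on the packet; writes to a log.
    print(f"seen flow {pkt['flow']}")
    return pkt

def parallel_segment(pkt):
    """NF-level parallelism: run order-independent NFs concurrently on copies
    of the packet, then merge (trivial here, as neither NF modifies the packet)."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        list(pool.map(lambda nf: nf(dict(pkt)), [counter_nf, logger_nf]))
    return pkt

pkt = {"flow": "10.0.0.5->1.1.1.1", "len": 64}
print("assigned to worker", worker_for(pkt["flow"]))
parallel_segment(pkt)
```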
About Zhi-Li Zhang
Zhi-Li Zhang received his Ph.D. degree in computer science from the University of Massachusetts. He joined the faculty of the Department of Computer Science and Engineering at the University of Minnesota in 1997, where he is currently the McKnight Distinguished University Professor and Qwest Chair Professor in Telecommunications. He also currently serves as the Associate Director for Research at the Digital Technology Center, University of Minnesota. Prof. Zhang's research interests lie broadly in computer and communication networks, Internet technology, multimedia systems and content distribution networks, cyber-physical systems and the Internet of Things, and (applied) machine learning and data mining. Prof. Zhang has published more than 100 journal and conference/workshop papers, many of them in top venues in networking and related fields. He is a co-recipient of several Best Paper awards, including at IEEE INFOCOM, IEEE ICNP, and ACM SIGMETRICS. He is a Fellow of the IEEE.
This event will be conducted in English.