Connectivity is something we take for granted in our daily lives: a reality we can hardly do without, yet few of us are aware of its true magnitude or of the enormous complexity behind it.
A telecommunications network is made up of many pieces of equipment and the links connecting users to a diversity of services (switches, routers, communication lines, antennas, customers' home equipment, etc.), involving a large amount of technology, mainly in the form of purpose-specific infrastructure. A communications network is therefore a complex, heterogeneous and fairly rigid structure, in which implementing any change requires a lengthy period of time, even decades, during which all the agents responsible for its construction must be involved.
To date, suppliers to this industry have focused on developing communications equipment that performs specific, non-interchangeable functions on purpose-designed hardware. This has favored a model in which each supplier's technology is proprietary, allowing little or no interoperability with that of other suppliers. Add to this the unceasing advance of technology, and the investment needed to maintain and modernize a network becomes very high, so any technological change has to be thoroughly justified and is implemented gradually. As a consequence, a large number of different technologies inevitably end up coexisting in the network. All this complexity hampers network operation enormously, making the network inflexible and unable to adapt to changes unless they were foreseen well in advance; in many cases the network itself becomes a hindrance to innovation in services for end customers.
This scenario has led telecommunications operators to strive to transform the network into a far more efficient infrastructure: smarter, more flexible and more malleable, allowing them to compete with agility and effectiveness in an increasingly digital world. To reach this objective they are envisaging changes to the network architecture, to the infrastructure supporting the equipment, to the degree of centralization or decentralization, and so on. The overriding aim of these transformations is to achieve malleable infrastructures that surpass the limitations imposed by today's physical infrastructure. The key lies in making the network more software-based, and virtualizing the network is one of the main ingredients.
What does virtualizing the network mean?
On the one hand, it means separating the software from the hardware: making network functions as software-based as possible, so that they can be moved onto general-purpose hardware (e.g. high-end servers). This allows the same infrastructure to be used for different purposes, reducing costs and making changes to the network easier to implement. This is known as Network Functions Virtualization (NFV). The principal challenge in this field is to obtain levels of performance from general-purpose equipment similar to those achieved on purpose-built hardware.
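To make the idea concrete, here is a minimal, purely illustrative sketch in Python of a network function (a trivial firewall filter) written as ordinary software. Everything in it (the Packet class, the port-blocking policy) is a hypothetical example, not any real product's implementation; the point is simply that once a function is just code, it can run on any general-purpose server rather than on a purpose-built appliance.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

class SimpleFirewall:
    """A network function reduced to its essence: plain software.
    Because it is ordinary code, it can be deployed on any
    general-purpose server (or virtual machine) instead of a
    dedicated hardware appliance."""

    def __init__(self, blocked_ports):
        self.blocked_ports = set(blocked_ports)

    def process(self, packet):
        """Return True if the packet may be forwarded."""
        return packet.dst_port not in self.blocked_ports

# Illustrative usage: block telnet (23) and SMB (445) traffic.
fw = SimpleFirewall(blocked_ports={23, 445})
print(fw.process(Packet("10.0.0.1", "10.0.0.2", 443)))  # True: allowed
print(fw.process(Packet("10.0.0.1", "10.0.0.2", 23)))   # False: blocked
```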
On the other hand, it means automating the coordination and management of the network, decoupling that control as far as possible from the underlying infrastructure through standard interfaces and software that allow the infrastructure to be fully controlled from the outside, thereby obtaining simpler, more coherent network behavior. This is known as Software Defined Networking (SDN).
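As a sketch of that idea, the snippet below shows a program steering the network through a controller's external interface. The controller URL and the rule schema are assumptions invented for illustration, not the API of any real SDN controller; what matters is that forwarding behavior is defined entirely in software, from outside the individual devices.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical northbound API of an SDN controller; this endpoint and
# the rule format below are illustrative assumptions only.
CONTROLLER_URL = "http://sdn-controller.example.net:8080/flows"

def install_flow_rule(switch_id, match, action):
    """Push a forwarding rule to the controller, which in turn
    programs the switch through a standard southbound interface."""
    rule = {"switch": switch_id, "match": match, "action": action}
    response = requests.post(CONTROLLER_URL, json=rule, timeout=5)
    response.raise_for_status()

# Steer all traffic destined for 10.0.0.0/24 out through port 2 of switch s1.
install_flow_rule("s1", {"ipv4_dst": "10.0.0.0/24"}, "output:2")
```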
And what are operators doing?
All telecommunications operators are aware of the turning point they currently face and of the need to innovate, both in their network models and architecture and in the way telecommunications infrastructures are operated, in order to guarantee their sustainability. Needless to say, they are working toward this goal through proofs of concept and pilots, and through standardization groups, of which perhaps the most significant is the ETSI NFV ISG (Network Functions Virtualisation Industry Specification Group), created in early 2013 with Telefónica as one of its founding members.
The objective now is to bring this technology to networks as soon as possible. Telefónica is working on this concept at its NFV Reference Laboratory, helping the ecosystem of network solution suppliers to test and develop virtualized network functions together with the upper layers of management and orchestration. Our goal is to promote interoperability and a more open ecosystem, making it easier for telecommunications service providers to adapt and enlarge their range of services.
Undoubtedly, moving toward an NFV model is a long-term process. Just as packet-switched networks began their rollout in the 1990s and that transition is still under way (until the legacy PSTN is fully switched off, our networks will not be exclusively IP), NFV technology, whose development began in 2012, will gradually reach the different segments that make up the network.
The race for network virtualization is a long-distance run, not a sprint, and it will be played out in many different arenas. The entire supplier ecosystem has already adopted this new paradigm and set to work. The various standardization groups are working on the implications of NFV in their fields of operation. Manufacturers are presenting their approaches and performing proofs of concept. Operators are demanding an open, multi-supplier offering that adapts to their needs. At the same time, new agents from the IT world are approaching networking with expertise gained in cloud environments, and new players and start-ups see an opportunity to become part of a new value proposition. Meanwhile, virtualization follows the natural course of any technology maturation cycle.
We are still in the early stages. It has been shown to be technically possible to virtualize many network functions, and a large number of operators and suppliers are presenting their results and lessons learned. Beyond proofs of concept and pilots, there are already examples of operators that have added general-purpose equipment to their networks and incorporated it into their business operations.
Clearly, much work is still needed for this technology to develop fully, and for us to learn how to reap the maximum benefits from it. The challenges ahead are many, but the achievements so far have also been outstanding. This is the usual path for any innovation that is to reach the market and be successful: a path of commitment, effort and confidence in achieving the targeted results. There is no other way; there are no shortcuts.
Antonio José Elizondo Armengol
A telecommunications engineer, economist, author and lecturer, he has worked at Telefónica I+D and in Telefónica's global technology management, where he has been responsible for network virtualization strategy and technology. He has written over 20 scientific papers.
- Think Big blog: articles by Antonio José Elizondo Armengol
- Madri+d Notiweb: "Virtualización de red: construyendo la red del futuro" (Network virtualization: building the network of the future)
- AlphaGalileo news service: