Trade-offs in Optimizing the Cache Deployments of CDNs
Source(s):
IMDEA Networks Institute

Sergey Gorinsky, Research Associate Professor at IMDEA Networks, and Syed Hasan, PhD student at Universidad Carlos III de Madrid (UC3M), have published the paper "Trade-offs in Optimizing the Cache Deployments of CDNs" at IEEE INFOCOM 2014 (33rd Annual IEEE International Conference on Computer Communications), a leading conference in computer networking research. The work was co-authored with Prof. Constantine Dovrolis, of the School of Computer Science at Georgia Tech (USA), and Prof. Ramesh Sitaraman, who is affiliated with the University of Massachusetts at Amherst and with Akamai Technologies (USA). INFOCOM 2014 will be held in Toronto, Canada, from April 27 to May 2, 2014.

Abstract:

Content delivery networks (CDNs) deploy globally distributed systems of caches in a large number of autonomous systems (ASes). It is important for a CDN operator to satisfy the performance requirements of end users, while minimizing the cache deployment cost. In this paper, we study the cache deployment optimization (CaDeOp) problem of determining how much server, energy, and bandwidth resources to provision in each cache AS, i.e., each AS chosen for cache deployment. The CaDeOp objective is to minimize the total cost incurred by the CDN, subject to meeting the end-user performance requirements. We formulate the CaDeOp problem as a mixed integer program (MIP) and solve it for realistic AS-level topologies, traffic demands, and non-linear energy and bandwidth costs. We also evaluate the sensitivity of the results to our parametric assumptions. When the end-user performance requirements become more stringent, the CDN footprint rapidly expands, requiring cache deployments in additional ASes and geographical regions. Also, the CDN cost increases several times, with the cost balance shifting toward bandwidth and energy costs. On the other hand, the traffic distribution among the cache ASes stays relatively even, with the top 20% of the cache ASes serving around 30% of the overall traffic.
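The abstract's central modeling step, casting cache deployment as a cost-minimizing mixed integer program, can be illustrated with a toy formulation. The sketch below uses the open-source PuLP modeler; the AS names, costs, capacities, and latency-eligibility sets are hypothetical, and the linear cost model is a simplification of the paper's non-linear energy and bandwidth costs, so it should be read as an illustration of the problem shape rather than the authors' actual formulation.

```python
# Toy sketch of a cache deployment optimization (CaDeOp-style) MIP using PuLP.
# All inputs below are made-up illustrative values, not data from the paper.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpContinuous

# Hypothetical candidate cache ASes, end-user demand regions, costs, capacities,
# and which AS-region pairs satisfy the end-user performance (latency) requirement.
ases = ["AS1", "AS2", "AS3"]
regions = ["R1", "R2"]
demand = {"R1": 100.0, "R2": 60.0}                    # traffic units per region
deploy_cost = {"AS1": 500, "AS2": 300, "AS3": 400}    # fixed cost of deploying a cache
unit_cost = {"AS1": 2.0, "AS2": 3.0, "AS3": 2.5}      # bandwidth+energy cost per traffic unit
capacity = {"AS1": 120, "AS2": 80, "AS3": 100}        # max traffic a deployment can serve
eligible = {("AS1", "R1"), ("AS2", "R1"), ("AS2", "R2"), ("AS3", "R2")}

prob = LpProblem("CaDeOp_sketch", LpMinimize)

# y[a] = 1 if a cache is deployed in AS a; x[a, r] = traffic of region r served from AS a.
y = {a: LpVariable(f"deploy_{a}", cat=LpBinary) for a in ases}
x = {(a, r): LpVariable(f"traffic_{a}_{r}", lowBound=0, cat=LpContinuous)
     for a in ases for r in regions}

# Objective: fixed deployment costs plus traffic-proportional (linearized) costs.
prob += (lpSum(deploy_cost[a] * y[a] for a in ases)
         + lpSum(unit_cost[a] * x[a, r] for a in ases for r in regions))

# Each region's demand must be fully served by ASes that meet the performance bound.
for r in regions:
    prob += lpSum(x[a, r] for a in ases if (a, r) in eligible) == demand[r]

# Traffic may only flow from deployed ASes, within capacity, along eligible pairs.
for a in ases:
    prob += lpSum(x[a, r] for r in regions) <= capacity[a] * y[a]
    for r in regions:
        if (a, r) not in eligible:
            prob += x[a, r] == 0

prob.solve()
print({a: y[a].value() for a in ases})  # which ASes receive cache deployments
```

Tightening the eligibility sets in this sketch (i.e., making the performance requirement more stringent) forces deployments in more ASes, mirroring the footprint expansion and cost growth reported in the abstract.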