Trade-offs in Optimizing the Cache Deployments of CDNs
Source(s): 
IMDEA Networks Institute

Sergey Gorinsky, a research associate professor at Institute IMDEA Networks, and Syed Hasan, a PhD student at Carlos III University of Madrid (UC3M), have published the article "Trade-offs in Optimizing the Cache Deployments of CDNs" at IEEE INFOCOM 2014 (33rd Annual IEEE International Conference on Computer Communications), a top conference in computer networking research. The work is co-authored by Prof. Constantine Dovrolis, from the School of Computer Science at Georgia Tech (USA), and Prof. Ramesh Sitaraman, who is affiliated with the University of Massachusetts at Amherst and Akamai Technologies (USA). INFOCOM 2014 will be held in Toronto, Canada, from April 27 to May 2, 2014.

Abstract:

Content delivery networks (CDNs) deploy globally distributed systems of caches in a large number of autonomous systems (ASes). It is important for a CDN operator to satisfy the performance requirements of end users, while minimizing the cache deployment cost. In this paper, we study the cache deployment optimization (CaDeOp) problem of determining how much server, energy, and bandwidth resources to provision in each cache AS, i.e., each AS chosen for cache deployment. The CaDeOp objective is to minimize the total cost incurred by the CDN, subject to meeting the end-user performance requirements. We formulate the CaDeOp problem as a mixed integer program (MIP) and solve it for realistic AS-level topologies, traffic demands, and non-linear energy and bandwidth costs. We also evaluate the sensitivity of the results to our parametric assumptions. When the end-user performance requirements become more stringent, the CDN footprint rapidly expands, requiring cache deployments in additional ASes and geographical regions. Also, the CDN cost increases several times, with the cost balance shifting toward bandwidth and energy costs. On the other hand, the traffic distribution among the cache ASes stays relatively even, with the top 20% of the cache ASes serving around 30% of the overall traffic.
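
For readers curious what a cache deployment optimization of this kind looks like in practice, the sketch below is a minimal toy mixed integer program written with the open-source PuLP modelling library. All ASes, demands, latencies, costs, and capacity figures are invented for illustration, and the model is far simpler than the paper's CaDeOp formulation (which, among other things, captures non-linear energy and bandwidth costs); it only shows the general shape of minimizing deployment cost subject to an end-user performance requirement.

```python
# Toy cache-deployment MIP in the spirit of CaDeOp; NOT the paper's formulation.
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpBinary, LpInteger, PULP_CBC_CMD)

# Hypothetical candidate ASes for caches and end-user demand groups (all made up).
cache_ases = ["AS1", "AS2", "AS3"]
user_groups = ["EU", "NA", "APAC"]
demand = {"EU": 400, "NA": 600, "APAC": 300}            # requests/s per user group
latency = {("AS1", "EU"): 20,  ("AS1", "NA"): 90,  ("AS1", "APAC"): 180,
           ("AS2", "EU"): 100, ("AS2", "NA"): 25,  ("AS2", "APAC"): 150,
           ("AS3", "EU"): 160, ("AS3", "NA"): 140, ("AS3", "APAC"): 30}   # ms
max_latency = 100            # end-user performance requirement (ms)
server_capacity = 500        # requests/s one server can handle
fixed_cost = {"AS1": 1000, "AS2": 1200, "AS3": 900}     # cost of opening a cache in an AS
server_cost = 50             # cost per provisioned server
bandwidth_cost = 0.2         # cost per unit of traffic served

prob = LpProblem("cadeop_toy", LpMinimize)
open_cache = {a: LpVariable(f"open_{a}", cat=LpBinary) for a in cache_ases}
servers = {a: LpVariable(f"servers_{a}", lowBound=0, cat=LpInteger) for a in cache_ases}
flow = {(a, u): LpVariable(f"flow_{a}_{u}", lowBound=0)
        for a in cache_ases for u in user_groups}

# Objective: fixed deployment + server + bandwidth costs.
prob += (lpSum(fixed_cost[a] * open_cache[a] + server_cost * servers[a] for a in cache_ases)
         + bandwidth_cost * lpSum(flow[a, u] for a in cache_ases for u in user_groups))

for u in user_groups:
    # All demand of each user group must be served.
    prob += lpSum(flow[a, u] for a in cache_ases) == demand[u]
for a in cache_ases:
    # Traffic served from an AS is limited by the servers provisioned there.
    prob += lpSum(flow[a, u] for u in user_groups) <= server_capacity * servers[a]
    # Servers can only be provisioned in ASes where a cache is opened (big-M link).
    prob += servers[a] <= 100 * open_cache[a]
for (a, u), lat in latency.items():
    # Performance requirement: no traffic over paths slower than the latency bound.
    if lat > max_latency:
        prob += flow[a, u] == 0

prob.solve(PULP_CBC_CMD(msg=False))
for a in cache_ases:
    print(a, "open:", int(open_cache[a].value()), "servers:", int(servers[a].value()))
```

In this toy instance, the APAC demand can only be met from AS3 within the latency bound, so the solver is forced to open a cache there; tightening max_latency similarly expands the deployment footprint, echoing the trend reported in the abstract.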