
Algorithms for Optimal Replica Placement Under Correlated Failure in Hierarchical Failure Domains

Bibliographic Details
Published in: arXiv.org 2017-04
Main Authors: Mills, K Alex, Chandrasekaran, R, Mittal, Neeraj
Format: Article
Language: English
Description
Summary: In data centers, data replication is the primary method used to ensure availability of customer data. To avoid correlated failure, cloud storage infrastructure providers model hierarchical failure domains using a tree, and avoid placing a large number of data replicas within the same failure domain (i.e., on the same branch of the tree). Typical best practices ensure that replicas are distributed across failure domains, but relatively little is known about optimization algorithms for distributing data replicas. Using a hierarchical model, we show how to distribute replicas across failure domains optimally. We formulate a novel optimization problem for replica placement in data centers. As part of our problem, we formalize and explain a new criterion for optimizing a replica placement. Our overall goal is to choose placements in which correlated failures disable as few replicas as possible. We provide two optimization algorithms for dependency models represented by trees. We first present an \(O(n + \rho \log \rho)\) time dynamic programming algorithm for placing \(\rho\) replicas of a single file on the leaves (representing servers) of a tree with \(n\) vertices. We next consider the problem of placing replicas of \(m\) blocks of data, where each block may have a different replication factor. For this problem, we give an exact algorithm that runs in polynomial time when the skew, the difference in the number of replicas between the largest and smallest blocks of data, is constant.
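The setting described in the summary can be sketched in a few lines: replicas of a file are placed on the leaves (servers) of a hierarchical failure-domain tree so that no single domain (room, rack, etc.) concentrates more replicas than necessary. The greedy "least-loaded child" rule below is only an illustrative heuristic for this setting, not the paper's \(O(n + \rho \log \rho)\) dynamic program; all names here (`Node`, `place`, the toy topology) are hypothetical.

```python
# Illustrative sketch (NOT the paper's algorithm): spread rho replicas across
# the leaves of a failure-domain tree by always descending into the child
# subtree that currently holds the fewest replicas.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.replicas = 0  # replicas placed anywhere in this subtree

def place(root, rho):
    """Place rho replicas one at a time; a correlated failure of any one
    domain then disables as few replicas as possible at every level."""
    placed = []
    for _ in range(rho):
        node = root
        node.replicas += 1
        while node.children:
            node = min(node.children, key=lambda c: c.replicas)
            node.replicas += 1
        placed.append(node.name)
    return placed

# Hypothetical toy data center: 2 rooms x 2 racks/room x 2 servers/rack.
servers = [Node(f"s{i}") for i in range(8)]
racks = [Node(f"rack{i}", servers[2 * i:2 * i + 2]) for i in range(4)]
rooms = [Node(f"room{i}", racks[2 * i:2 * i + 2]) for i in range(2)]
root = Node("dc", rooms)

placement = place(root, 4)
print(sorted(placement))  # four replicas land on distinct racks: ['s0', 's2', 's4', 's6']
```

With four replicas, the heuristic leaves at most one replica per rack and two per room, so losing any one rack disables a single replica. The paper's contribution is an exact, efficient optimization over such placements rather than a greedy rule.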
ISSN: 2331-8422