Master-node fault tolerance is a topic that is often glossed over in discussions of big data processing technologies. Although the failure of a master node can take down the whole data processing pipeline, it is considered either improbable or too difficult to handle. The aim of the studies reported here is to propose a rather simple technique for dealing with master-node failures. The technique is based on temporarily delegating the master role to one of the slave nodes and transferring the updated state back to the master when one step of the computation is complete. That way the state is duplicated, and the computation can proceed to the next step regardless of a failure of the delegate or the master (but not both). We ran benchmarks to show that a failure of the master is almost “invisible” to the other nodes, and a failure of the delegate results in the recomputation of only one step of the data processing pipeline. We believe that this technique can be used not only in Big Data processing but also in other types of applications.
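The abstract's delegation idea can be sketched as follows. This is an illustrative simulation, not the authors' Factory implementation: a delegate executes each pipeline step on a copy of the state, and only when the step completes is the updated state transferred back to the master, so the state always exists on two nodes and a single failure costs at most one step of recomputation. All names and the failure model here are assumptions for illustration.

```python
import copy
import random

def run_pipeline(steps, state, fail_prob=0.0, rng=random.Random(0)):
    """Simulate the master/delegate protocol from the abstract.

    `steps` is a list of functions (state -> state). A simulated
    "delegate failure" discards that step's partial result before it
    reaches the master; the master still holds the last completed
    state, so only that one step is re-run with a new delegate.
    """
    recomputed = 0
    for step in steps:
        while True:
            # The delegate works on a copy; the master keeps `state` intact,
            # so a master failure at this point is also survivable in the
            # real protocol (the delegate already has the current state).
            candidate = step(copy.deepcopy(state))
            if rng.random() < fail_prob:
                # Delegate failed before transferring the state back:
                # recompute this single step.
                recomputed += 1
                continue
            # Step complete: the delegate transfers the updated state to
            # the master, duplicating it on both nodes before the next step.
            state = candidate
            break
    return state, recomputed

if __name__ == "__main__":
    final, redone = run_pipeline([lambda s: s + 1, lambda s: s * 2], 0)
    print(final, redone)
```

Even with injected delegate failures (`fail_prob > 0`), the final result is unchanged; only the number of recomputed steps grows, matching the abstract's claim that a delegate failure costs one step of the pipeline.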
Original language: English
Title of host publication: Computational Science and Its Applications – ICCSA 2016
Subtitle of host publication: 16th International Conference, Beijing, China, July 4-7, 2016, Proceedings, Part II
Publisher: Springer Nature
ISBN (Electronic): 978-3-319-42108-7
ISBN (Print): 978-3-319-42107-0
Publication status: Published - 2016
Event: International Conference on Computational Science and Its Applications - Beijing
Duration: 4 Jul 2016 – 6 Jul 2016
Conference number: 16

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Nature
ISSN (Print): 0302-9743


Conference: International Conference on Computational Science and Its Applications
Abbreviated title: ICCSA 2016

Title: Factory: Master node high-availability for Big Data applications and beyond
