
Fault Tolerant External Memory Algorithms

Research output: Conference article in journal › Research › peer-review

  • Department of Computer Science
Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model of Aggarwal and Vitter. Another aspect of handling massive data is coping with memory faults, e.g. as captured by the adversary-based faulty-memory RAM of Finocchi and Italiano. However, current fault-tolerant algorithms do not scale beyond internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty-memory RAM, assuming that both memory and disk are unreliable. We show a lower bound on the number of I/Os required by any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance, as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where only cells in memory or only cells on disk are corruptible, and separate randomized and deterministic dictionaries in the latter scenario.
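The faulty-memory RAM underlying this work lets an adversary corrupt up to a known bound of δ memory cells; resilient algorithms in this model commonly protect a critical value by replication and majority voting. The sketch below is not taken from the paper — it is a minimal illustration of that standard technique, assuming a fault bound DELTA and a hypothetical ResilientValue wrapper:

```python
from collections import Counter

DELTA = 2  # assumed upper bound on the number of memory faults

class ResilientValue:
    """Illustrative resilient variable: keeps 2*DELTA + 1 copies of a value,
    so a majority vote on read tolerates up to DELTA corrupted copies."""

    def __init__(self, value):
        # Replicate the value across 2*DELTA + 1 cells.
        self.copies = [value] * (2 * DELTA + 1)

    def read(self):
        # With at most DELTA corruptions, the original value still holds
        # a strict majority among the 2*DELTA + 1 copies.
        return Counter(self.copies).most_common(1)[0][0]

v = ResilientValue(42)
v.copies[0] = 13  # adversary corrupts one copy
v.copies[3] = 99  # and a second (DELTA = 2 faults in total)
print(v.read())   # majority vote still recovers 42
```

Each read costs Θ(δ) work, which is why naively replicating every cell is too expensive; the resilient dictionaries and sorting algorithms in the paper instead aim for optimal I/O bounds despite corruptions.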
Original language: English
Book series: Lecture Notes in Computer Science
Pages (from-to): 411-422
Number of pages: 12
Publication status: Published - 2009
Event: Algorithms and Data Structures Symposium, WADS - Banff, Canada
Duration: 21 Aug 2009 - 23 Aug 2009
Conference number: 11


Conference: Algorithms and Data Structures Symposium, WADS

Bibliographical note

Title of the vol.: Algorithms and Data Structures. Proceedings. / Frank Dehne, Marina Gavrilova, Jörg-Rüdiger Sack and Csaba D. Tóth (eds.)
ISBN: 978-3-642-03366-7

