Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model of Aggarwal and Vitter. Another aspect of dealing with massive data is how to cope with memory faults, captured for example by the adversary-based faulty-memory RAM of Finocchi and Italiano. However, current fault-tolerant algorithms do not scale beyond internal memory. In this paper we investigate, for the first time, the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty-memory RAM, assuming that both memory and disk are unreliable. We show a lower bound on the number of I/Os required by any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance, as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where only cells in memory or only cells on disk are corruptible, and separate randomized from deterministic dictionaries in the latter case.
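To give a feel for the faulty-memory RAM setting the abstract refers to, the sketch below illustrates the standard "reliable value" idea from that model: a value is replicated in 2δ+1 cells so that up to δ adversarial corruptions cannot change the majority. This is a minimal illustrative example, not code or a construction from the paper; all names (`write_reliable`, `read_reliable`) are hypothetical.

```python
# Minimal sketch of the reliable-value technique in the faulty-memory RAM model
# (illustrative only, not from the paper). A value stored in 2*delta + 1 copies
# survives up to delta corruptions, since the majority of copies stays correct.

def write_reliable(memory, start, value, delta):
    """Store `value` in 2*delta + 1 consecutive cells starting at `start`."""
    for i in range(2 * delta + 1):
        memory[start + i] = value

def read_reliable(memory, start, delta):
    """Recover the value by majority vote over the 2*delta + 1 copies."""
    copies = memory[start:start + 2 * delta + 1]
    return max(set(copies), key=copies.count)

# Example: delta = 2, so 5 copies; two corruptions cannot flip the majority.
mem = [None] * 10
write_reliable(mem, 0, 42, delta=2)
mem[1] = 7    # adversarial corruption
mem[3] = 99   # adversarial corruption
assert read_reliable(mem, 0, delta=2) == 42
```

Resilient dictionaries, sorting algorithms, and priority queues in this model must achieve their query and update bounds despite such corruptions, without paying the full replication cost on every cell.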
Conference
Algorithms and Data Structures Symposium (WADS), conference number 11, Banff, Canada, 21 August 2009 – 23 August 2009.
Bibliographical note
Title of the volume: Algorithms and Data Structures: Proceedings / Frank Dehne, Marina Gavrilova, Jörg-Rüdiger Sack and Csaba D. Tóth (eds.). ISBN: 978-3-642-03366-7