Lossless Compression of Time Series Data with Generalized Deduplication

Research output: Conference contribution in journal › Journal article › Research › peer-review


To provide compressed storage for large amounts of time series data, we present a new strategy for data deduplication. Rather than attempting to deduplicate entire data chunks, we employ a generalized approach in which each chunk is split into a part worth deduplicating and a part that must be stored directly. This simple principle enables greater compression of the often similar but non-identical chunks of time series data than classic deduplication achieves, while retaining benefits such as scalability, robustness, and on-the-fly storage, retrieval, and search for chunks. We analyze the method's theoretical performance and argue that it can asymptotically approach the entropy limit for some data configurations. To validate its practical merits, we show that the method is competitive with popular universal compression algorithms on the MIT-BIH ECG Compression Test Database.
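The splitting idea from the abstract can be illustrated with a minimal Python sketch. It assumes a simple bit-level split of each integer chunk into high-order "base" bits (deduplicated) and low-order "deviation" bits (stored directly); the function names gd_compress/gd_decompress, the dev_bits parameter, and the bit-mask transform are illustrative assumptions, not the paper's actual scheme or terminology.

# Illustration only: a simple bit-level split, not the paper's exact transform.

def gd_compress(chunks, dev_bits=4):
    """Split each chunk into a high-order 'base' that is deduplicated
    and a low-order 'deviation' that is stored directly."""
    bases = []        # unique bases (the deduplication dictionary)
    base_index = {}   # base value -> position in `bases`
    stream = []       # per-chunk (base id, deviation) pairs
    for value in chunks:
        base = value >> dev_bits                    # part worth deduplicating
        deviation = value & ((1 << dev_bits) - 1)   # part stored directly
        if base not in base_index:
            base_index[base] = len(bases)
            bases.append(base)
        stream.append((base_index[base], deviation))
    return bases, stream

def gd_decompress(bases, stream, dev_bits=4):
    """Lossless inverse: reattach each deviation to its deduplicated base."""
    return [(bases[i] << dev_bits) | dev for i, dev in stream]

# Similar but non-identical samples share a base and differ only in deviation.
samples = [0x1A3, 0x1A7, 0x1A1, 0x2B4, 0x2B6]
bases, stream = gd_compress(samples)
assert gd_decompress(bases, stream) == samples
print(len(bases), "unique bases cover", len(samples), "chunks")  # 2 vs. 5

In this toy example, classic deduplication would store all five samples as distinct chunks, whereas splitting off the low-order bits lets the two recurring high-order patterns be stored once, and any single chunk can still be retrieved on the fly from its (base id, deviation) pair.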
Original language: English
Journal: Globecom. IEEE Conference and Exhibition
ISSN: 1930-529X
Publication status: Accepted/In press - 2019
