Paper accepted at DFRWS EU


The paper “On Efficiency of Artifact Lookup Strategies in Digital Forensics” was accepted as a full paper at the Digital Forensics Research Workshop Europe 2019. The paper will be presented in Oslo, Norway, in April (the conference runs from 24 to 26 April).

Thank you to Lorenz Liebler (a,b), Patrick Schmitt (c), and Harald Baier (a,b) from:

(a) da/sec Biometrics and Internet Security Research Group, Hochschule Darmstadt, Darmstadt, Germany
(b) CRISP, Center for Research in Security and Privacy, Darmstadt, Germany
(c) Secure Software Engineering Group, Technische Universität Darmstadt, Darmstadt, Germany


In recent years, different strategies have been proposed to handle the problem of ever-growing digital forensic databases. One concept to deal with this data overload is data reduction, which essentially means separating the wheat from the chaff, e.g., filtering out the forensically relevant data. A prominent technique in the context of data reduction is the use of hash-based solutions. Data reduction is achieved because hash values (of possibly large data inputs) are much smaller than the original input. Today's approaches to storing hash-based data fragments range from large-scale multithreaded databases to simple Bloom filter representations. Much attention has been paid to the field of approximate matching, where sorting is a problem due to the fuzzy nature of the approximate hashes. A crucial step during digital forensic analysis is to achieve fast query times during lookup (e.g., against a blacklist), especially on systems with limited or ordinary resources. However, comparing different database and lookup approaches is considerably difficult, as most techniques differ in their intended use cases and integrated features. In this work we discuss, reassess and extend three widespread lookup strategies suitable for storing hash-based fragments: (1) the hash database for hash-based carving (hashdb), (2) hierarchical Bloom filter trees (hbft), and (3) flat hash maps (fhmap). We outline the capabilities of the different approaches, integrate new extensions, discuss possible features, and perform a detailed evaluation with a special focus on runtime efficiency. Our results reveal major advantages for fhmap in terms of runtime performance and applicability. hbft shows comparable lookup efficiency, but suffers from pitfalls with respect to extensibility and maintenance. Finally, hashdb performs worst in a single-core environment in all evaluation scenarios. However, hashdb is the only candidate which offers full parallelization capabilities, transactional features, and single-level storage.
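To illustrate the kind of blacklist lookup the abstract refers to, here is a minimal, illustrative sketch (not the paper's implementation) of a Bloom filter, the probabilistic set representation underlying the hbft approach. A fragment hash is mapped to k bit positions; membership queries may yield false positives but never false negatives. All names and parameter choices below are assumptions for illustration only.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k derived hash functions over an m-bit array.

    Illustrative sketch only; real deployments tune m and k to the
    expected number of stored fragment hashes and target error rate.
    """

    def __init__(self, m_bits: int = 1 << 20, k: int = 4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k indices by hashing the item together with a counter byte.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        # Set the k bits corresponding to this item.
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        # All k bits set -> "probably present"; any bit unset -> definitely absent.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

By contrast, the fhmap strategy stores the fragment hashes directly in an open-addressing hash table (in Python terms, roughly a `set` of hash values), trading more memory for exact answers and simple insertion and deletion.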