1. Martín-Pérez, Miguel; Rodríguez, Ricardo J.; Breitinger, Frank: Bringing order to approximate matching: Classification and attacks on similarity digest algorithms. In: Forensic Science International: Digital Investigation, pp. 301120, 2021, ISSN: 2666-2817. doi:10.1016/j.fsidi.2021.301120. (Journal Article)

Abstract: Fuzzy hashing or similarity hashing (a.k.a. bytewise approximate matching) converts digital artifacts into an intermediate representation to allow an efficient (fast) identification of similar objects, e.g., for blacklisting. These algorithms have gained a lot of popularity over the past decade, with new algorithms being developed and released to the digital forensics community. When algorithms are released (e.g., as part of a scientific article), they are frequently compared with other algorithms to outline the benefits, and sometimes also the weaknesses, of the proposed approach. However, given the wide variety of algorithms and approaches, it is impossible to provide direct comparisons with all existing algorithms. In this paper, we present the first classification of approximate matching algorithms, which allows easier descriptions and comparisons. To that end, we first reviewed existing literature to understand the techniques various algorithms use and to familiarize ourselves with the common terminology. Our findings allowed us to develop a categorization relying heavily on the terminology proposed by NIST SP 800-168. In addition to the categorization, this article presents an abstract set of attacks against algorithms and explains why they are feasible. Lastly, we detail the characteristics needed to build robust algorithms that prevent such attacks. We believe this article helps newcomers, practitioners, and experts alike to better compare algorithms and understand their potential, as well as the characteristics and implications they may have on forensic investigations.
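To make the digest idea behind this entry concrete, here is a minimal, self-contained Python sketch of bytewise approximate matching: content-defined chunking plus a Jaccard comparison of chunk-hash sets. This is a toy illustration of the general technique, not an implementation of any algorithm surveyed in the paper; the 32-bit rolling-style hash and the boundary mask are arbitrary illustrative choices.

```python
import hashlib

def chunk_digest(data: bytes, mask: int = 0x3F) -> set:
    """Toy similarity digest: split data at content-defined boundaries
    (a rolling-style hash hitting a bit pattern) and hash each chunk."""
    digest, start, h = set(), 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF      # cheap rolling-style hash
        if (h & mask) == mask or i == len(data) - 1:
            digest.add(hashlib.sha1(data[start:i + 1]).hexdigest()[:8])
            start, h = i + 1, 0              # reset at each boundary
    return digest

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity of two chunk-hash sets, scaled to 0..100."""
    da, db = chunk_digest(a), chunk_digest(b)
    return 100.0 * len(da & db) / len(da | db) if da and db else 0.0

original = bytes(range(256)) * 40            # ~10 KiB of varied data
modified = original[:5000] + b"\x00" * 200 + original[5200:]
print(f"score: {similarity(original, modified):.1f}")  # high, but below 100
```

Because only the chunks overlapping the edited region change, most chunk hashes survive the modification, which is exactly why similarity digests detect partially changed files where cryptographic hashes fail.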
2. Pluskal, Jan; Breitinger, Frank; Ryšavý, Ondřej: Netfox detective: A novel open-source network forensics analysis tool. In: Forensic Science International: Digital Investigation, 35, pp. 301019, 2020, ISSN: 2666-2817. doi:10.1016/j.fsidi.2020.301019. (Journal Article)

Abstract: Network forensics is a major sub-discipline of digital forensics that becomes more and more important in an age where everything is connected. To cope with the amounts of data and other challenges within networks, practitioners require powerful tools that support them. In this paper, we highlight a novel open-source network forensic tool named Netfox Detective that outperforms existing tools such as Wireshark or NetworkMiner in certain areas. For instance, it provides a heuristics-based engine for traffic processing that can be easily extended. Using robust parsers (we do not rely solely on the RFC descriptions but use heuristics), our application tolerates malformed or missing conversation segments. Besides outlining the tool's architecture and basic processing concepts, we also explain how it can be extended. Lastly, a comparison with similar tools is presented and a real-world scenario is discussed.
3. Breitinger, Frank; Tully-Doyle, Ryan; Przyborski, Kristen; Beck, Lauren; Harichandran, Ronald S.: First year students' experience in a Cyber World course -- an evaluation. In: Education and Information Technologies, 2020, ISSN: 1573-7608. doi:10.1007/s10639-020-10274-5. (Journal Article)

Abstract: Although cybersecurity is a major concern today, it is not a required subject at universities. In response, we developed Cyber World, which introduces students to eight highly important cybersecurity topics (primarily taught by non-cybersecurity experts). We embedded it into our critical-thinking Common Course (core curriculum), a team-taught first-year experience required for all students. Cyber World was first taught in Fall 2018 to a cohort of over 150 students from various majors at the University of New Haven. This article presents the evaluation of that Fall course. In detail, we compare the performance of Cyber World students to other Common Course sections that ran in parallel and conclude that, despite the higher workload, students performed equally well. Furthermore, we assess the students' development throughout the course with respect to their cybersecurity knowledge, where our results indicate a significant gain of knowledge. Note that this article also presents the idea and topics of Cyber World; however, a detailed explanation has been published previously.
4. Palmbach, David; Breitinger, Frank: Artifacts for detecting timestamp manipulation in NTFS on Windows and their reliability. In: Forensic Science International: Digital Investigation, 32, pp. 300920, 2020, ISSN: 2666-2817. doi:10.1016/j.fsidi.2020.300920. (Journal Article)

Abstract: Timestamps have proven to be an expedient source of evidence for examiners in the reconstruction of computer crimes. Consequently, active adversaries and malware have implemented timestomping techniques (i.e., mechanisms to alter timestamps) to hide their traces. Previous research on detecting timestamp manipulation primarily focused on two artifacts: the $MFT as well as the records in the $LogFile. In this paper, we present a new use of four existing Windows artifacts -- the $USNjrnl, link files, prefetch files, and Windows event logs -- that can provide valuable information during investigations and diversify the artifacts available to examiners. These artifacts contain either information about executed programs or additional timestamps which, when inconsistencies occur, can be used to prove timestamp forgery. Furthermore, we examine the reliability of artifacts being used to detect timestamp manipulation, i.e., testing their ability to retain information against users actively trying to alter or delete them. Based on our findings we conclude that none of the artifacts analyzed can withstand active exploitation.
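As a concrete illustration of the kind of cross-artifact consistency check this entry builds on, the sketch below compares a file's $MFT-claimed creation time against $USNjrnl activity: journal records that predate the supposed creation are physically impossible and indicate manipulation. The records here are hypothetical stand-ins; in practice they would come from dedicated $MFT and $USNjrnl parsers.

```python
from datetime import datetime

# Hypothetical, pre-parsed records; real cases would extract these with
# dedicated parsers for the $MFT, $USNjrnl, prefetch files, etc.
mft_times = {  # path -> creation time claimed by the $MFT ($SI attribute)
    r"C:\tools\wiper.exe": datetime(2020, 6, 1, 9, 0, 0),
}
usnjrnl_times = {  # path -> earliest $USNjrnl record mentioning the file
    r"C:\tools\wiper.exe": datetime(2019, 2, 3, 14, 22, 5),
}

def flag_backdated(mft, usn):
    """Flag files with $USNjrnl activity recorded *before* the file
    supposedly existed -- an impossible state suggesting timestomping."""
    for path, created in mft.items():
        seen = usn.get(path)
        if seen and seen < created:
            yield path, created, seen

for path, created, seen in flag_backdated(mft_times, usnjrnl_times):
    print(f"{path}: $MFT says created {created}, "
          f"but $USNjrnl shows activity at {seen}")
```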
5. Schneider, Johannes; Breitinger, Frank: AI Forensics: Did the Artificial Intelligence System Do It? Why? In: arXiv preprint arXiv:2005.13635, 2020. (Journal Article)
6. Wu, Tina; Breitinger, Frank; O'Shaughnessy, Stephen: Digital forensic tools: Recent advances and enhancing the status quo. In: Forensic Science International: Digital Investigation, 34, pp. 300999, 2020, ISSN: 2666-2817. doi:10.1016/j.fsidi.2020.300999. (Journal Article)

Abstract: Publications in the digital forensics domain frequently come with tools -- small pieces of functional software. These tools are often released to the public for others to reproduce results or use them for their own purposes. However, there has been no study of these tools to better understand what is available and what is missing. For this paper we analyzed almost 800 articles from pertinent venues from 2014 to 2019 to answer the following three questions: (1) what tools have been released (i.e., in which domains of digital forensics); (2) are they still available, maintained, and documented; and (3) are there possibilities to enhance the status quo? We found 62 different tools, which we categorized according to digital forensics subfields. Only 33 of these tools were found to be publicly available, and the majority of these were not maintained after development. To enhance the status quo, one recommendation is a centralized repository specifically for tested tools. This will require tool researchers (developers) to spend more time on code documentation and preferably to develop plugins instead of stand-alone tools.
7. Moia, Vitor Hugo Galhardo; Breitinger, Frank; Henriques, Marco Aurélio Amaral: The impact of excluding common blocks for approximate matching. In: Computers & Security, 89, pp. 101676, 2019, ISSN: 0167-4048. doi:10.1016/j.cose.2019.101676. (Journal Article)

Abstract: Approximate matching functions allow the identification of similarity (at the bytewise level) in a very efficient way by creating and comparing compact representations of objects (a.k.a. digests). However, many similarity matches occur due to common data that repeats across many different files and consists of inner structure, header and footer information, color tables, font specifications, etc. -- data created by applications and not generated by users. Most of the time, this sort of information is less relevant from an investigator's perspective and should be avoided. In this work, we show how such common data can be identified and filtered out using approximate matching, as well as how it is spread over different file types and its frequency. We assess the impact on similarity when removing it (i.e., on the number of matches) and the effects on performance. Our results show that, for a small performance cost, a reduction of about 87% in the number of matches can be achieved when removing such data.
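The filtering idea can be sketched in a few lines: count how many files each chunk hash appears in, treat chunks above a frequency threshold as common blocks, and remove them before scoring. The per-file digests and the 70% threshold below are illustrative assumptions, not the paper's parameters.

```python
from collections import Counter

def common_blocks(digests, threshold=0.7):
    """Chunk hashes appearing in more than `threshold` of all files are
    treated as common (application-generated) data and excluded."""
    counts = Counter(h for d in digests.values() for h in d)
    cutoff = threshold * len(digests)
    return {h for h, n in counts.items() if n > cutoff}

def filtered_similarity(da, db, common):
    """Jaccard similarity after removing common blocks from both digests."""
    da, db = da - common, db - common
    if not da or not db:
        return 0.0
    return 100.0 * len(da & db) / len(da | db)

# Usage with hypothetical per-file digests (sets of chunk hashes):
digests = {
    "a.doc": {"hdr", "x1", "x2"},
    "b.doc": {"hdr", "x1", "y9"},
    "c.doc": {"hdr", "z3", "z4"},
}
common = common_blocks(digests)   # only "hdr" appears in all three files
print(filtered_similarity(digests["a.doc"], digests["b.doc"], common))
```

Dropping the shared "hdr" block leaves only user-generated content in the comparison, so the remaining score reflects genuinely similar data rather than file-format boilerplate.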
8. Breitinger, Frank; Tully-Doyle, Ryan; Hassenfeldt, Courtney: A survey on smartphone user's security choices, awareness and education. In: Computers & Security, 88, pp. 101647, 2019, ISSN: 0167-4048. doi:10.1016/j.cose.2019.101647. (Journal Article)

Abstract: Smartphones contain a significant amount of personal data. Additionally, they are always in the user's possession, which allows them to be abused for tracking (e.g., GPS, Bluetooth or WiFi tracking). In order to not reveal private information, smartphone users should secure their devices by setting lock screen protection, using third-party security applications, and choosing appropriate security settings (often, default settings are inadequate). In this paper, we conduct a survey to explore user choices, awareness and education with respect to cybersecurity. In comparison with prior work, we take the user's cybersecurity familiarity into consideration in the analysis of user practices and place a strong focus on the younger generations, Y and Z. Our survey findings suggest that most users have appropriate lock screen settings to protect their phones from physical access; however, they disregard other security best practices, e.g., not using a VPN when connecting to a public WiFi or turning off unused features (regardless of level of expertise). Compared to desktop computers, smartphones are less secured and fewer third-party security products are installed.
9. Moia, Vitor Hugo Galhardo; Breitinger, Frank; Henriques, Marco Aurélio Amaral: Understanding the effects of removing common blocks on Approximate Matching scores under different scenarios for digital forensic investigations. In: XIX Brazilian Symposium on Information and Computational Systems Security, Brazilian Computer Society (SBC), São Paulo-SP, Brazil, 2019. (Inproceedings, Best Paper Award)

Abstract: Finding similarity in digital forensics investigations can be assisted by the use of Approximate Matching (AM) functions. These algorithms create small and compact representations of objects (similar to hashes) which can be compared to identify similarity. However, results are often biased due to common blocks (data structures found in many different files regardless of content). In this paper, we evaluate the precision and recall metrics for AM functions when removing common blocks. In detail, we analyze how the similarity score changes and impacts different investigation scenarios. Results show that many irrelevant matches can be filtered out and that a new interpretation of the score allows better similarity detection.
10. Wu, Tina; Breitinger, Frank; Baggili, Ibrahim: IoT Ignorance is Digital Forensics Research Bliss: A Survey to Understand IoT Forensics Definitions, Challenges and Future Research Directions. In: Proceedings of the 14th International Conference on Availability, Reliability and Security (ARES '19), pp. 46:1-46:15, ACM, Canterbury, United Kingdom, 2019, ISBN: 978-1-4503-7164-3. doi:10.1145/3339252.3340504. (Inproceedings)
11. Przyborski, Kristen; Breitinger, Frank; Beck, Lauren; Harichandran, Ronald S.: "CyberWorld" as a Theme for a University-wide First-year Common Course. In: 2019 ASEE Annual Conference & Exposition (Presented at Cyber Technology), 2019. https://peer.asee.org/31923. (Journal Article)
12. Liebler, Lorenz; Schmitt, Patrick; Baier, Harald; Breitinger, Frank: On efficiency of artifact lookup strategies in digital forensics. In: Digital Investigation, 28, pp. S116-S125, 2019, ISSN: 1742-2876. doi:10.1016/j.diin.2019.01.020. (Journal Article)

Abstract: In recent years, different strategies have been proposed to handle the problem of ever-growing digital forensic databases. One concept to deal with this data overload is data reduction, which essentially means separating the wheat from the chaff, e.g., filtering in forensically relevant data. A prominent technique in the context of data reduction is hash-based solutions. Data reduction is achieved because hash values (of possibly large data input) are much smaller than the original input. Today's approaches for storing hash-based data fragments range from large-scale multithreaded databases to simple Bloom filter representations. One main focus has been the field of approximate matching, where sorting is a problem due to the fuzzy nature of the approximate hashes. A crucial step during digital forensic analysis is to achieve fast query times during lookup (e.g., against a blacklist), especially with small or ordinary resource availability. However, comparing different database and lookup approaches is considerably hard, as most techniques differ in the considered use case and integrated features. In this work we discuss, reassess and extend three widespread lookup strategies suitable for storing hash-based fragments: (1) the hash database for hash-based carving (hashdb), (2) hierarchical Bloom filter trees (hbft) and (3) flat hash maps (fhmap). We outline the capabilities of the different approaches, integrate new extensions, discuss possible features and perform a detailed evaluation with a special focus on runtime efficiency. Our results reveal major advantages for fhmap in terms of runtime performance and applicability. hbft showed comparable runtime efficiency for lookups, but suffers from pitfalls with respect to extensibility and maintenance. Finally, hashdb performs worst in a single-core environment in all evaluation scenarios. However, hashdb is the only candidate that offers full parallelization capabilities, transactional features, and single-level storage.
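For intuition, here is a minimal sketch contrasting the two extremes evaluated above: an exact flat hash set versus a compact Bloom filter (the building block of hbft). The filter size, the number of hash positions, and the SHA-1 slicing scheme are arbitrary illustrative choices, not the parameters used in the paper.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k positions derived from one SHA-1 digest."""
    def __init__(self, m_bits=2**20, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        d = hashlib.sha1(item).digest()
        for i in range(self.k):
            yield int.from_bytes(d[4*i:4*i+4], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# A flat hash map (here: a Python set) trades memory for exact answers;
# the Bloom filter is far more compact but may return false positives.
known = [f"fragment-{i}".encode() for i in range(100_000)]
flat = set(known)                      # exact lookups
bf = BloomFilter()
for frag in known:
    bf.add(frag)

print(b"fragment-42" in flat, b"fragment-42" in bf)  # True True
print(b"unseen" in flat, b"unseen" in bf)  # False, (almost surely) False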
13. Ricci, Joseph; Baggili, Ibrahim; Breitinger, Frank: Blockchain-Based Distributed Cloud Storage Digital Forensics: Where's the Beef? In: IEEE Security & Privacy, 17(1), pp. 34-42, 2019, ISSN: 1540-7993. doi:10.1109/MSEC.2018.2875877. (Journal Article)

Abstract: The current state of the art in digital forensics has primarily focused on the acquisition of data from cloud storage. Here, we present a new challenge in digital forensics: blockchain-based distributed cloud storage, using STORJ as a technology example.
14. Debinski, Mark; Breitinger, Frank; Mohan, Parvathy: Timeline2GUI: A Log2Timeline CSV parser and training scenarios. In: Digital Investigation, 28, pp. 34-43, 2018, ISSN: 1742-2876. doi:10.1016/j.diin.2018.12.004. (Journal Article)

Abstract: Crimes involving digital evidence are getting more complex due to the increasing storage capacities and utilization of devices. Event reconstruction (i.e., understanding the timeline) is an essential step for investigators to understand a case; a prominent tool here is Log2Timeline (a tool that creates super timelines, which are a combination of several log files and events throughout a system). While these timelines provide great evidence and help to understand a case, they are complex and require tools as well as training scenarios. In this paper we present Timeline2GUI, an easy-to-use Python implementation to analyze CSV log files created by Log2Timeline. Additionally, we present three training scenarios -- beginner, intermediate and advanced -- to practice timeline analysis skills as well as familiarity with visualization tools. Lastly, we provide a comprehensive overview of tools.
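A minimal sketch of the underlying parsing task, assuming the standard l2t_csv header and its MM/DD/YYYY date format (verify both against your Log2Timeline version): filter a super timeline for rows that mention a keyword after a given date.

```python
import csv
from datetime import datetime

def filter_timeline(path, keyword, after):
    """Yield Log2Timeline (l2t_csv) rows whose description mentions
    `keyword` and whose timestamp falls after `after`."""
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            try:
                ts = datetime.strptime(
                    f"{row['date']} {row['time']}", "%m/%d/%Y %H:%M:%S")
            except (KeyError, ValueError):
                continue                      # tolerate malformed rows
            if ts >= after and keyword.lower() in row.get("desc", "").lower():
                yield ts, row.get("source", ""), row["desc"]

# Hypothetical input file and filter terms:
for ts, source, desc in filter_timeline(
        "supertimeline.csv", "prefetch", datetime(2018, 1, 1)):
    print(ts, source, desc[:80])
```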
15. Haigh, Trevor; Breitinger, Frank; Baggili, Ibrahim: If I Had a Million Cryptos: Cryptowallet Application Analysis and a Trojan Proof-of-Concept. In: Breitinger, Frank; Baggili, Ibrahim (Eds.): Digital Forensics and Cyber Crime, pp. 45-65, Springer International Publishing, Cham, 2018, ISBN: 978-3-030-05487-8. doi:10.1007/978-3-030-05487-8_3. (Inproceedings, Best Paper Award)

Abstract: Cryptocurrencies have gained wide adoption by enthusiasts and investors. In this work, we examine seven different Android cryptowallet applications for forensic artifacts, and we also assess their security against tampering and reverse engineering. Some of the biggest benefits of cryptocurrency are its security and relative anonymity. For this reason it is vital that wallet applications share the same properties. Our work, however, indicates that this is not the case. Five of the seven applications we tested do not implement basic security measures against reverse engineering. Three of the applications stored sensitive information, like wallet private keys, insecurely, and one could be decrypted with some effort. One of the applications did not require root access to retrieve the data. We also implemented a proof-of-concept trojan which exemplifies how a malicious actor may exploit the lack of security in these applications and exfiltrate user data and cryptocurrency.
16. Schmicker, Robert; Breitinger, Frank; Baggili, Ibrahim: AndroParse - An Android Feature Extraction Framework and Dataset. In: Breitinger, Frank; Baggili, Ibrahim (Eds.): Digital Forensics and Cyber Crime, pp. 66-88, Springer International Publishing, Cham, 2018, ISBN: 978-3-030-05487-8. doi:10.1007/978-3-030-05487-8_4. (Inproceedings)

Abstract: Android malware has become a major challenge. As a consequence, practitioners and researchers spend significant time analyzing Android applications (APKs). A common procedure (especially for data scientists) is to extract features such as permissions, APIs or strings which can then be analyzed. Current state-of-the-art tools have three major issues: (1) a single tool cannot extract all the significant features used by scientists and practitioners; (2) current tools are not designed to be extensible; and (3) existing parsers can be time-consuming, as they are not runtime-efficient or scalable. Therefore, this work presents AndroParse, an open-source Android parser written in Golang that currently extracts the four most common features: permissions, APIs, strings and intents. AndroParse outputs JSON files, as they can easily be consumed by most major programming languages. Constructing the parser allowed us to create an extensive feature dataset, which can be accessed through our independent REST API. Our dataset currently has 67,703 benign and 46,683 malicious APK samples.
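As a rough illustration of one of the four features, the sketch below pulls a 'strings'-style feature from an APK's classes.dex with a simple printable-ASCII regex and emits a JSON record. AndroParse itself parses the DEX format properly and extracts all four feature types; this toy approximation also ignores APKs with multiple dex files.

```python
import json
import re
import zipfile

def extract_strings(apk_path: str, min_len: int = 6) -> list:
    """Crude 'strings' feature: printable-ASCII runs from classes.dex."""
    with zipfile.ZipFile(apk_path) as apk:
        dex = apk.read("classes.dex")   # ignores classes2.dex etc.
    return re.findall(rb"[ -~]{%d,}" % min_len, dex)

def to_feature_record(apk_path: str) -> str:
    """Emit a JSON feature record, mirroring a JSON-per-APK output style."""
    strings = [s.decode("ascii") for s in extract_strings(apk_path)]
    return json.dumps({"apk": apk_path, "strings": strings[:25]}, indent=2)

# print(to_feature_record("sample.apk"))  # requires an APK file on disk
```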
17. Breitinger, Frank; Baggili, Ibrahim (Eds.): Digital Forensics and Cyber Crime: 10th International EAI Conference, ICDF2C 2018, New Orleans, LA, USA, September 10-12, 2018, Proceedings. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 259, Springer International Publishing, 2018, ISBN: 978-3-030-05486-1. doi:10.1007/978-3-030-05487-8. (Book)
18. Luciano, Laoise; Baggili, Ibrahim; Topor, Mateusz; Casey, Peter; Breitinger, Frank: Digital Forensics in the Next Five Years. In: Proceedings of the 13th International Conference on Availability, Reliability and Security (ARES 2018), pp. 46:1-46:14, ACM, Hamburg, Germany, 2018, ISBN: 978-1-4503-6448-5. doi:10.1145/3230833.3232813. (Inproceedings)

Abstract: Cyber forensics has encountered major obstacles over the last decade and is at a crossroads. This paper presents data that was obtained during the National Workshop on Redefining Cyber Forensics (NWRCF), held on May 23-24, 2017, supported by the National Science Foundation and organized by the University of New Haven. Qualitative and quantitative data were analyzed from twenty-four cyber forensics expert panel members. This work identified important themes that need to be addressed by the community, focusing on (1) where the domain currently is; (2) where it needs to go; and (3) steps needed to improve it. Furthermore, based on the results, we articulate (1) the biggest anticipated challenges the domain will face in the next five years; (2) the most important cyber forensics research opportunities in the next five years; and (3) the most important job-ready skills that need to be addressed by higher education curricula over the next five years. Lastly, we present the key issues and recommendations deliberated by the expert panel. Overall results indicated that a more active and coherent group needs to be formed in the cyber forensics community, with opportunities for continuous reassessment and improvement processes in place.
19. Grajeda, Cinthya; Sanchez, Laura; Baggili, Ibrahim; Clark, Devon; Breitinger, Frank: Experience constructing the Artifact Genome Project (AGP): Managing the domain's knowledge one artifact at a time. In: Digital Investigation, 26, pp. S47-S58, 2018, ISSN: 1742-2876. doi:10.1016/j.diin.2018.04.021. (Journal Article)

Abstract: While various tools have been created to assist the digital forensics community with acquiring, processing, and organizing evidence and indicating the existence of artifacts, very few attempts have been made to establish a centralized system for archiving artifacts. The Artifact Genome Project (AGP) has aimed to create the largest vetted and freely available digital forensics repository for Curated Forensic Artifacts (CuFAs). This paper details the experience of building, implementing, and maintaining such a system by sharing design decisions, lessons learned, and future work. We also discuss the impact of AGP in both the professional and academic realms of digital forensics. Our work shows promise in the digital forensics academic community to champion the effort in curating digital forensic artifacts by integrating AGP into courses, research endeavors, and collaborative projects.
20. Ricci, Joseph; Breitinger, Frank; Baggili, Ibrahim: Survey results on adults and cybersecurity education. In: Education and Information Technologies, pp. 1-19, 2018, ISSN: 1360-2357. doi:10.1007/s10639-018-9765-8. (Journal Article)

Abstract: Cyberattacks and identity theft are common problems nowadays, and researchers often say that humans are the weakest link in the security chain. This survey therefore focused on analyzing adults' interest in 'cyber threat education seminars', e.g., on how to protect themselves and their loved ones. Specifically, we asked questions to understand the possible audience, willingness to pay and time commitment, and fields of interest, as well as background and previous training experience. The survey was conducted in late 2016 and taken by 233 participants. The results show that many are worried about cyber threats and about their children exploring the online domain. However, seminars do not seem to be a priority, as many individuals were only willing to spend 1-1.5 hours on them.
21. Liebler, Lorenz; Breitinger, Frank: mrsh-mem: Approximate Matching on Raw Memory Dumps. In: 2018 11th International Conference on IT Security Incident Management & IT Forensics (IMF), pp. 47-64, 2018. doi:10.1109/IMF.2018.00011. (Inproceedings)

Abstract: This paper presents the fusion of two subdomains of digital forensics: (1) raw memory analysis and (2) approximate matching. Specifically, this paper describes a prototype implementation named MRSH-MEM that allows comparing hard drive images as well as memory dumps, and therefore can answer the question whether a particular program (installed on a hard drive) is currently running / loaded in memory. To answer this question, we only require both dumps or access to a public repository which provides the binaries to be tested. For our prototype, we modified an existing approximate matching algorithm named MRSH-NET and combined it with approxis, an approximate disassembler. Recent literature claims that approximate matching techniques are slow and hardly applicable to the field of memory forensics. In particular, legitimate changes to executables in memory caused by the loader itself prevent the application of current bytewise approximate matching techniques. Our approach lowers the impact of modified code in memory and shows good computational performance. During our experiments, we show how an investigator can gain meaningful insights by combining data from a hard disk image and raw memory dumps, with practicable runtime performance. Lastly, our current implementation will be integrable into the Volatility memory forensics framework, and we introduce new possibilities for providing data-driven cross-validation functions. Our current proof-of-concept implementation supports Linux-based raw memory dumps.
22. Lillis, David; Breitinger, Frank; Scanlon, Mark: Expediting MRSH-v2 Approximate Matching with Hierarchical Bloom Filter Trees. In: Matoušek, Petr; Schmiedecker, Martin (Eds.): Digital Forensics and Cyber Crime, pp. 144-157, Springer International Publishing, Cham, 2018, ISBN: 978-3-319-73697-6. doi:10.1007/978-3-319-73697-6_11. (Inproceedings, Best Paper Award)

Abstract: Perhaps the most common task encountered by digital forensic investigators consists of searching through a seized device for pertinent data. Frequently, an investigator will be in possession of a collection of "known-illegal" files (e.g. a collection of child pornographic images) and will seek to find whether copies of these are stored on the seized drive. Traditional hash matching techniques can efficiently find files that precisely match. However, these will fail in the case of merged files, embedded files, partial files, or if a file has been changed in any way.
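A compact sketch of the hierarchical Bloom filter tree idea, under simplified assumptions (per-file fragment sets instead of MRSH-v2 digests, and a tiny ad-hoc Bloom filter): every node's filter covers all files below it, so a negative membership test prunes the entire subtree, and only leaves reached by positive tests name candidate files.

```python
import hashlib

class Bloom:
    """Tiny Bloom filter backed by a Python int bitset."""
    def __init__(self, m=2**16, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _pos(self, item):
        d = hashlib.sha1(item).digest()
        return (int.from_bytes(d[4*i:4*i+4], "big") % self.m
                for i in range(self.k))
    def add(self, item):
        for p in self._pos(item):
            self.bits |= 1 << p
    def __contains__(self, item):
        return all((self.bits >> p) & 1 for p in self._pos(item))

class HBFTNode:
    """Hierarchical Bloom filter tree node over (filename, fragments)."""
    def __init__(self, files):
        self.bloom, self.files, self.children = Bloom(), files, []
        if len(files) > 1:
            mid = len(files) // 2
            self.children = [HBFTNode(files[:mid]), HBFTNode(files[mid:])]
        for _, frags in files:
            for frag in frags:
                self.bloom.add(frag)

    def lookup(self, frag):
        if frag not in self.bloom:
            return []            # prune: no file below can match
        if not self.children:
            return [name for name, _ in self.files]
        return [n for c in self.children for n in c.lookup(frag)]

files = [("a.bin", [b"f1", b"f2"]), ("b.bin", [b"f3"]), ("c.bin", [b"f4"])]
tree = HBFTNode(files)
print(tree.lookup(b"f3"))   # ['b.bin'] (plus possible false positives)
```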
23. Knieriem, Brandon; Zhang, Xiaolu; Levine, Philip; Breitinger, Frank; Baggili, Ibrahim: An Overview of the Usage of Default Passwords. In: Matoušek, Petr; Schmiedecker, Martin (Eds.): Digital Forensics and Cyber Crime, pp. 195-203, Springer International Publishing, Cham, 2018, ISBN: 978-3-319-73697-6. doi:10.1007/978-3-319-73697-6_15. (Inproceedings)

Abstract: The recent Mirai botnet attack demonstrated the danger of using default passwords and showed it is still a major problem. In this study we investigated several common applications and their password policies. Specifically, we analyzed whether these applications: (1) have default passwords or (2) allow the user to set a weak password (i.e., they do not properly enforce a password policy). Our study shows that default passwords are still a significant problem: 61% of the applications inspected initially used a default or blank password. When changing the password, 58% allowed a blank password and 35% allowed a weak password of a single character.
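A policy check of the kind applied in this study can be sketched as follows; the default-password list and the rules below are illustrative assumptions, not the paper's actual test set.

```python
import re

DEFAULTS = {"admin", "password", "1234", ""}  # illustrative defaults only

def policy_violations(password: str) -> list:
    """Return the (illustrative) policy rules a candidate password violates."""
    issues = []
    if password.lower() in DEFAULTS:
        issues.append("matches a well-known default/blank password")
    if len(password) < 8:
        issues.append("shorter than 8 characters")
    if not re.search(r"[A-Za-z]", password) or not re.search(r"\d", password):
        issues.append("lacks a letter+digit mix")
    return issues

for pw in ["", "admin", "a", "Tr0ub4dor&3"]:
    print(repr(pw), "->", policy_violations(pw) or "ok")
```

An application that accepts the first three candidates above would exhibit exactly the failures the study reports: default, blank, and single-character passwords.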
24. | Meffert, Christopher; Clark, Devon; Baggili, Ibrahim; Breitinger, Frank: Forensic State Acquisition from Internet of Things (FSAIoT): A General Framework and Practical Approach for IoT Forensics Through IoT Device State Acquisition. In: Proceedings of the 12th International Conference on Availability, Reliability and Security, pp. 56:1–56:11, ACM, Reggio Calabria, Italy, 2017, ISBN: 978-1-4503-5257-4. (Type: Inproceedings | Abstract | Links | BibTeX) @inproceedings{MCBB17, title = {Forensic State Acquisition from Internet of Things (FSAIoT): A General Framework and Practical Approach for IoT Forensics Through IoT Device State Acquisition}, author = {Christopher Meffert AND Devon Clark AND Ibrahim Baggili AND Frank Breitinger}, url = {http://doi.acm.org/10.1145/3098954.3104053}, doi = {10.1145/3098954.3104053}, isbn = {978-1-4503-5257-4}, year = {2017}, date = {2017-09-01}, booktitle = {Proceedings of the 12th International Conference on Availability, Reliability and Security}, pages = {56:1--56:11}, publisher = {ACM}, address = {Reggio Calabria, Italy}, series = {ARES '17}, abstract = {IoT device forensics is a difficult problem given that manufactured IoT devices are not standardized, many store little to no historical data, and are always connected, making them extremely volatile. The goal of this paper was to address these challenges by presenting a primary account for a general framework and practical approach we term Forensic State Acquisition from Internet of Things (FSAIoT). We argue that by leveraging the acquisition of the state of IoT devices (e.g. if an IoT lock is open or locked), it becomes possible to paint a clear picture of events that have occurred. To this end, FSAIoT consists of a centralized Forensic State Acquisition Controller (FSAC) employed in three state collection modes: controller to IoT device, controller to cloud, and controller to controller. We present a proof of concept implementation using openHAB -- a device agnostic open source IoT device controller -- and self-created scripts, to resemble a FSAC implementation. Our proof of concept employed an Insteon IP Camera as a controller to device test, an Insteon Hub as a controller to controller test, and a Nest thermostat for a controller to cloud test. Our findings show that it is possible to practically pull forensically relevant state data from IoT devices. Future work and open research problems are shared.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } IoT device forensics is a difficult problem given that manufactured IoT devices are not standardized, many store little to no historical data, and are always connected, making them extremely volatile. The goal of this paper was to address these challenges by presenting a primary account for a general framework and practical approach we term Forensic State Acquisition from Internet of Things (FSAIoT). We argue that by leveraging the acquisition of the state of IoT devices (e.g. if an IoT lock is open or locked), it becomes possible to paint a clear picture of events that have occurred. To this end, FSAIoT consists of a centralized Forensic State Acquisition Controller (FSAC) employed in three state collection modes: controller to IoT device, controller to cloud, and controller to controller. We present a proof of concept implementation using openHAB -- a device agnostic open source IoT device controller -- and self-created scripts, to resemble a FSAC implementation.
Our proof of concept employed an Insteon IP Camera as a controller to device test, an Insteon Hub as a controller to controller test, and a Nest thermostat for a controller to cloud test. Our findings show that it is possible to practically pull forensically relevant state data from IoT devices. Future work and open research problems are shared. |
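Entry 24's controller-based state collection is easy to approximate with openHAB's documented REST API (GET /rest/items returns every item with its current state). A minimal polling sketch, assuming a reachable openHAB instance at a hypothetical address; the log format and interval are arbitrary choices of this sketch, not FSAC's:

    import json, time, urllib.request

    OPENHAB = "http://192.168.1.10:8080"      # hypothetical controller address

    def snapshot_states(log_path="fsac_log.jsonl"):
        """Record a timestamped snapshot of every openHAB item state."""
        with urllib.request.urlopen(OPENHAB + "/rest/items") as resp:
            items = json.load(resp)
        record = {"acquired_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                  "states": {item["name"]: item.get("state") for item in items}}
        with open(log_path, "a") as log:      # append-only, one JSON line per poll
            log.write(json.dumps(record) + "\n")
        return record

    while True:                               # periodic acquisition loop
        snapshot_states()
        time.sleep(60)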
25. | Grajeda, Cinthya; Breitinger, Frank; Baggili, Ibrahim: Availability of datasets for digital forensics -- And what is missing. In: Digital Investigation, 22, Supplement , pp. S94 - S105, 2017, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{MBB17a, title = {Availability of datasets for digital forensics -- And what is missing}, author = {Cinthya Grajeda and Frank Breitinger and Ibrahim Baggili}, url = {http://www.sciencedirect.com/science/article/pii/S1742287617301913}, doi = {10.1016/j.diin.2017.06.004}, issn = {1742-2876}, year = {2017}, date = {2017-08-05}, journal = {Digital Investigation}, volume = {22, Supplement}, pages = {S94 - S105}, abstract = {This paper targets two main goals. First, we want to provide an overview of available datasets that can be used by researchers and where to find them. Second, we want to stress the importance of sharing datasets to allow researchers to replicate results and improve the state of the art. To answer the first goal, we analyzed 715 peer-reviewed research articles from 2010 to 2015 with focus and relevance to digital forensics to see what datasets are available and focused on three major aspects: (1) the origin of the dataset (e.g., real world vs. synthetic), (2) if datasets were released by researchers and (3) the types of datasets that exist. Additionally, we broadened our results to include the outcome of online search results. We also discuss what we think is missing. Overall, our results show that the majority of datasets are experiment generated (56.4%) followed by real world data (36.7%). On the other hand, 54.4% of the articles use existing datasets while the rest created their own. In the latter case, only 3.8% actually released their datasets. Finally, we conclude that there are many datasets for use out there but finding them can be challenging.}, keywords = {}, pubstate = {published}, tppubtype = {article} } This paper targets two main goals. First, we want to provide an overview of available datasets that can be used by researchers and where to find them. Second, we want to stress the importance of sharing datasets to allow researchers to replicate results and improve the state of the art. To answer the first goal, we analyzed 715 peer-reviewed research articles from 2010 to 2015 with focus and relevance to digital forensics to see what datasets are available and focused on three major aspects: (1) the origin of the dataset (e.g., real world vs. synthetic), (2) if datasets were released by researchers and (3) the types of datasets that exist. Additionally, we broadened our results to include the outcome of online search results. We also discuss what we think is missing. Overall, our results show that the majority of datasets are experiment generated (56.4%) followed by real world data (36.7%). On the other hand, 54.4% of the articles use existing datasets while the rest created their own. In the latter case, only 3.8% actually released their datasets. Finally, we conclude that there are many datasets for use out there but finding them can be challenging. |
26. | Denton, George; Karpisek, Filip; Breitinger, Frank; Baggili, Ibrahim: Leveraging the SRTP protocol for over-the-network memory acquisition of a GE Fanuc Series 90-30. In: Digital Investigation, 22, Supplement , pp. S26 - S38, 2017, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{DKBB17, title = {Leveraging the SRTP protocol for over-the-network memory acquisition of a GE Fanuc Series 90-30}, author = {George Denton and Filip Karpisek and Frank Breitinger and Ibrahim Baggili}, url = {http://www.sciencedirect.com/science/article/pii/S1742287617301925}, doi = {10.1016/j.diin.2017.06.005}, issn = {1742-2876}, year = {2017}, date = {2017-08-05}, journal = {Digital Investigation}, volume = {22, Supplement}, pages = {S26 - S38}, abstract = {Programmable Logic Controllers (PLCs) are common components implemented across many industries such as manufacturing, water management, travel, aerospace and hospitals to name a few. Given their broad deployment in critical systems, they became and still are a common target for cyber attacks, the most prominent one being Stuxnet. Often PLCs (especially older ones) are only protected by an outer line of defense (e.g., a firewall) but once an attacker gains access to the system or the network, there might not be any other defense layers. In this scenario, a forensic investigator should not rely on the existing software as it might have been compromised. Therefore, we reverse engineered the GE-SRTP network protocol using a GE Fanuc Series 90-30 PLC and provide two major contributions: We first describe the Service Request Transport protocol (GE-SRTP) which was invented by General Electric (GE) and is used by many of their Ethernet connected controllers. Note, to the best of our knowledge, prior to this work, no publicly available documentation on the protocol was available, affording users' security by obscurity. Second, based on our understanding of the protocol, we implemented a software application that allows direct network-based communication with the PLC (no intermediate server is needed). While the tool's forensic mode is harmless and only allows for reading registers, we discovered that one can manipulate/write to the registers in its default configuration, e.g., turn off the PLC, or manipulate the items/processes it controls.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Programmable Logic Controllers (PLCs) are common components implemented across many industries such as manufacturing, water management, travel, aerospace and hospitals to name a few. Given their broad deployment in critical systems, they became and still are a common target for cyber attacks, the most prominent one being Stuxnet. Often PLCs (especially older ones) are only protected by an outer line of defense (e.g., a firewall) but once an attacker gains access to the system or the network, there might not be any other defense layers. In this scenario, a forensic investigator should not rely on the existing software as it might have been compromised. Therefore, we reverse engineered the GE-SRTP network protocol using a GE Fanuc Series 90-30 PLC and provide two major contributions: We first describe the Service Request Transport protocol (GE-SRTP) which was invented by General Electric (GE) and is used by many of their Ethernet connected controllers. Note, to the best of our knowledge, prior to this work, no publicly available documentation on the protocol was available, affording users' security by obscurity.
Second, based on our understanding of the protocol, we implemented a software application that allows direct network-based communication with the PLC (no intermediate server is needed). While the tool's forensic mode is harmless and only allows for reading registers, we discovered that one can manipulate/write to the registers in its default configuration, e.g., turn off the PLC, or manipulate the items/processes it controls. |
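For entry 26, the over-the-network acquisition boils down to speaking GE-SRTP over a plain TCP socket (the service commonly listens on port 18245). The frame layout is documented in the paper; the request bytes below are a zeroed placeholder, not a valid register read, so this sketch shows only the transport skeleton:

    import socket

    PLC_ADDR = ("192.168.1.20", 18245)   # hypothetical PLC; 18245 is the usual SRTP port

    # Placeholder frame: the real GE-SRTP register-read layout is described in
    # the paper. These 56 zero bytes are illustrative, not a working request.
    REQUEST = bytes(56)

    def hexdump(data):
        return " ".join("{:02x}".format(b) for b in data)

    with socket.create_connection(PLC_ADDR, timeout=5) as plc:
        plc.sendall(REQUEST)
        reply = plc.recv(1024)           # capture the PLC's answer for offline analysis
        print("reply:", hexdump(reply))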
27. | Clark, Devon R; Meffert, Christopher; Baggili, Ibrahim; Breitinger, Frank: DROP (DRone Open source Parser) your drone: Forensic analysis of the DJI Phantom III. In: Digital Investigation, 22, Supplement , pp. S3 - S14, 2017, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{CMBB17, title = {DROP (DRone Open source Parser) your drone: Forensic analysis of the DJI Phantom III}, author = {Devon R. Clark and Christopher Meffert and Ibrahim Baggili and Frank Breitinger}, url = {http://www.sciencedirect.com/science/article/pii/S1742287617302001}, doi = {10.1016/j.diin.2017.06.013}, issn = {1742-2876}, year = {2017}, date = {2017-08-05}, journal = {Digital Investigation}, volume = {22, Supplement}, pages = {S3 - S14}, abstract = {The DJI Phantom III drone has already been used for malicious activities (to drop bombs, remote surveillance and plane watching) in 2016 and 2017. At the time of writing, DJI was the drone manufacturer with the largest market share. Our work presents the primary thorough forensic analysis of the DJI Phantom III drone, and the primary account for proprietary file structures stored by the examined drone. It also presents the forensically sound open source tool DRone Open source Parser (DROP) that parses proprietary DAT files extracted from the drone's nonvolatile internal storage. These DAT files are encrypted and encoded. The work also shares preliminary findings on TXT files, which are also proprietary, encrypted, encoded, files found on the mobile device controlling the drone. These files provided a slew of data such as GPS locations, battery, flight time, etc. By extracting data from the controlling mobile device, and the drone, we were able to correlate data and link the user to a specific device based on extracted metadata. Furthermore, results showed that the best mechanism to forensically acquire data from the tested drone is to manually extract the SD card by disassembling the drone. Our findings illustrated that the drone should not be turned on as turning it on changes data on the drone by creating a new DAT file, but may also delete stored data if the drone's internal storage is full.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The DJI Phantom III drone has already been used for malicious activities (to drop bombs, remote surveillance and plane watching) in 2016 and 2017. At the time of writing, DJI was the drone manufacturer with the largest market share. Our work presents the primary thorough forensic analysis of the DJI Phantom III drone, and the primary account for proprietary file structures stored by the examined drone. It also presents the forensically sound open source tool DRone Open source Parser (DROP) that parses proprietary DAT files extracted from the drone's nonvolatile internal storage. These DAT files are encrypted and encoded. The work also shares preliminary findings on TXT files, which are also proprietary, encrypted, encoded, files found on the mobile device controlling the drone. These files provided a slew of data such as GPS locations, battery, flight time, etc. By extracting data from the controlling mobile device, and the drone, we were able to correlate data and link the user to a specific device based on extracted metadata. Furthermore, results showed that the best mechanism to forensically acquire data from the tested drone is to manually extract the SD card by disassembling the drone.
Our findings illustrated that the drone should not be turned on as turning it on changes data on the drone by creating a new DAT file, but may also delete stored data if the drone's internal storage is full. |
28. | Zhang, Xiaolu; Baggili, Ibrahim; Breitinger, Frank: Breaking into the vault: privacy, security and forensic analysis of android vault applications. In: Computers & Security, 70 , pp. 516 - 531, 2017, ISSN: 0167-4048. (Type: Journal Article | Abstract | Links | BibTeX) @article{ZBB17, title = {Breaking into the vault: privacy, security and forensic analysis of android vault applications}, author = {Xiaolu Zhang and Ibrahim Baggili and Frank Breitinger}, url = {http://www.sciencedirect.com/science/article/pii/S0167404817301529}, doi = {10.1016/j.cose.2017.07.011}, issn = {0167-4048}, year = {2017}, date = {2017-08-02}, journal = {Computers & Security}, volume = {70}, pages = {516 - 531}, abstract = {In this work we share the first account for the forensic analysis, security and privacy of Android vault applications. Vaults are designed to be privacy enhancing as they allow users to hide personal data but may also be misused to hide incriminating files. Our work has already helped law enforcement in the state of Connecticut to reconstruct 66 incriminating images and 18 videos in a single criminal case. We present case studies and results from analyzing 18 Android vault applications (accounting for nearly 220 million downloads from the Google Play store) by reverse engineering them and examining the forensic artifacts they produce. Our results showed that Image 1 obfuscated their code and Image 2 applications used native libraries hindering the reverse engineering process of these applications. However, we still recovered data from the applications without root access to the Android device as we were able to ascertain hidden data on the device without rooting for Image 3 of the applications.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In this work we share the first account for the forensic analysis, security and privacy of Android vault applications. Vaults are designed to be privacy enhancing as they allow users to hide personal data but may also be misused to hide incriminating files. Our work has already helped law enforcement in the state of Connecticut to reconstruct 66 incriminating images and 18 videos in a single criminal case. We present case studies and results from analyzing 18 Android vault applications (accounting for nearly 220 million downloads from the Google Play store) by reverse engineering them and examining the forensic artifacts they produce. Our results showed that Image 1 obfuscated their code and Image 2 applications used native libraries hindering the reverse engineering process of these applications. However, we still recovered data from the applications without root access to the Android device as we were able to ascertain hidden data on the device without rooting for Image 3 of the applications.
Image 4 of the vault applications were found to not encrypt photos they stored, and Image 5 were found to not encrypt videos. Image 6 of the applications were found to store passwords in cleartext. We were able to also implement a swap attack on Image 7 applications where we achieved unauthorized access to the data by swapping the files that contained the password with a self-created one. In some cases, our findings illustrate unfavorable security implementations of privacy enhancing applications, but also showcase practical mechanisms for investigators to gain access to data of evidentiary value. In essence, we broke into the vaults. |
29. | Moore, Jason; Baggili, Ibrahim; Breitinger, Frank: Find Me If You Can: Mobile GPS Mapping Applications Forensics Analysis & SNAVP The Open Source, Modular, Extensible Parser. In: Journal of Digital Forensics, Security and Law (JDFSL), 12 (1), pp. 7, 2017. (Type: Journal Article | Links | BibTeX) @article{MBB17c, title = {Find Me If You Can: Mobile GPS Mapping Applications Forensics Analysis & SNAVP The Open Source, Modular, Extensible Parser}, author = {Jason Moore AND Ibrahim Baggili AND Frank Breitinger}, url = {https://doi.org/10.15394/jdfsl.2017.1414}, doi = {10.15394/jdfsl.2017.1414}, year = {2017}, date = {2017-06-13}, journal = {Journal of Digital Forensics, Security and Law (JDFSL)}, volume = {12}, number = {1}, pages = {7}, keywords = {}, pubstate = {published}, tppubtype = {article} } |
30. | Jeong, Doowon; Breitinger, Frank; Kang, Hari; Lee, Sangjin: Towards Syntactic Approximate Matching-A Pre-Processing Experiment. In: The Journal of Digital Forensics, Security and Law: JDFSL, 11 (2), pp. 97–110, 2016. (Type: Journal Article | Abstract | Links | BibTeX) @article{jeong2016towards, title = {Towards Syntactic Approximate Matching-A Pre-Processing Experiment}, author = {Doowon Jeong AND Frank Breitinger AND Hari Kang AND Sangjin Lee}, url = {https://doi.org/10.15394/jdfsl.2016.1381}, doi = {10.15394/jdfsl.2016.1381}, year = {2016}, date = {2016-12-26}, journal = {The Journal of Digital Forensics, Security and Law: JDFSL}, volume = {11}, number = {2}, pages = {97--110}, publisher = {Association of Digital Forensics, Security and Law}, abstract = {Over the past few years, the popularity of approximate matching algorithms (a.k.a. fuzzy hashing) has increased. Especially within the area of bytewise approximate matching, several algorithms were published, tested, and improved. It has been shown that these algorithms are powerful; however, they are sometimes too precise for real world investigations. That is, even very small commonalities (e.g., in the header of a file) can cause a match. While this is a desired property, it may also lead to unwanted results. In this paper, we show that by using simple pre-processing, we can significantly influence the outcome. Although our test set is based on text-based file types (because they are easy to process), this technique can be used for other, well-documented types as well. Our results show that it can be beneficial to focus on the content of files only (depending on the use-case). While for this experiment we utilized text files, we additionally present a small, self-created dataset that can be used in the future for approximate matching algorithms since it is labeled (we know which files are similar and how).}, keywords = {}, pubstate = {published}, tppubtype = {article} } Over the past few years, the popularity of approximate matching algorithms (a.k.a. fuzzy hashing) has increased. Especially within the area of bytewise approximate matching, several algorithms were published, tested, and improved. It has been shown that these algorithms are powerful; however, they are sometimes too precise for real world investigations. That is, even very small commonalities (e.g., in the header of a file) can cause a match. While this is a desired property, it may also lead to unwanted results. In this paper, we show that by using simple pre-processing, we can significantly influence the outcome. Although our test set is based on text-based file types (because they are easy to process), this technique can be used for other, well-documented types as well. Our results show that it can be beneficial to focus on the content of files only (depending on the use-case). While for this experiment we utilized text files, we additionally present a small, self-created dataset that can be used in the future for approximate matching algorithms since it is labeled (we know which files are similar and how). |
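The pre-processing idea of entry 30 is simple to reproduce: normalize a file down to its content before computing a similarity digest. A sketch using the ssdeep Python bindings as a stand-in fuzzy hash (assumed installed; the paper's pipeline is not tied to ssdeep, and the tag-stripping regex is a deliberately crude placeholder):

    import re
    import ssdeep                        # pip package 'ssdeep' (bindings assumed)

    TAG = re.compile(rb"<[^>]+>")        # crude markup stripper for text-based types

    def preprocess(raw):
        """Keep content only: drop tags, collapse whitespace."""
        return b" ".join(TAG.sub(b" ", raw).split())

    def content_similarity(path_a, path_b):
        digests = []
        for path in (path_a, path_b):
            with open(path, "rb") as fh:
                digests.append(ssdeep.hash(preprocess(fh.read())))
        return ssdeep.compare(*digests)  # 0 (no match) .. 100 (identical)

Without the preprocess() step, two files sharing only boilerplate markup can score a match; with it, the score reflects the visible content, which is the effect the experiment measures.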
31. | Al-khateeb, Samer; Conlan, Kevin J; Agarwal, Nitin; Baggili, Ibrahim; Breitinger, Frank: Exploring Deviant Hacker Networks (DHN) On Social Media Platforms. In: Journal of Digital Forensics, Security and Law, 11 (2), pp. 7–20, 2016. (Type: Journal Article | Abstract | Links | BibTeX) @article{SCA16, title = {Exploring Deviant Hacker Networks (DHN) On Social Media Platforms}, author = {Samer Al-khateeb AND Kevin J. Conlan AND Nitin Agarwal And Ibrahim Baggili AND Frank Breitinger}, doi = {10.15394/jdfsl.2016.1375}, year = {2016}, date = {2016-12-26}, journal = {Journal of Digital Forensics, Security and Law}, volume = {11}, number = {2}, pages = {7--20}, abstract = {Online Social Networks (OSNs) have grown exponentially over the past decade. The initial use of social media for benign purposes (e.g., to socialize with friends, browse pictures and photographs, and communicate with family members overseas) has now transitioned to include malicious activities (e.g., cybercrime, cyberterrorism, and cyberwarfare). These nefarious uses of OSNs pose a significant threat to society, and thus require research attention. In this exploratory work, we study the activities of one deviant group: hacker groups on social media, which we term Deviant Hacker Networks (DHN). We investigated the connections between different DHNs on Twitter: how they are connected, which nodes are the most powerful, which nodes sourced information, and which nodes act as "bridges" between different network components. From this, we were able to identify and articulate specific examples of DHNs communicating with each other, with the goal of committing some form of deviant act online. In our work, we also attempted to bridge the gap between the empirical study of OSNs and cyber forensics, as the growth of OSNs is now bringing these two domains together, due to OSNs continuously generating vast amounts of evidentiary data.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Online Social Networks (OSNs) have grown exponentially over the past decade. The initial use of social media for benign purposes (e.g., to socialize with friends, browse pictures and photographs, and communicate with family members overseas) has now transitioned to include malicious activities (e.g., cybercrime, cyberterrorism, and cyberwarfare). These nefarious uses of OSNs pose a significant threat to society, and thus require research attention. In this exploratory work, we study the activities of one deviant group: hacker groups on social media, which we term Deviant Hacker Networks (DHN). We investigated the connections between different DHNs on Twitter: how they are connected, which nodes are the most powerful, which nodes sourced information, and which nodes act as "bridges" between different network components. From this, we were able to identify and articulate specific examples of DHNs communicating with each other, with the goal of committing some form of deviant act online. In our work, we also attempted to bridge the gap between the empirical study of OSNs and cyber forensics, as the growth of OSNs is now bringing these two domains together, due to OSNs continuously generating vast amounts of evidentiary data. |
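The network measures entry 31 relies on (powerful nodes, information sources, bridges) map directly onto standard graph metrics. A sketch with networkx over hypothetical mention/retweet edges; the account names are made up:

    import networkx as nx

    # Hypothetical directed mention/retweet edges between accounts
    g = nx.DiGraph([("acct_a", "acct_b"), ("acct_b", "acct_c"),
                    ("acct_c", "acct_d"), ("acct_b", "acct_e")])

    # "Powerful" nodes: high betweenness centrality (on many shortest paths)
    central = nx.betweenness_centrality(g)
    print(sorted(central, key=central.get, reverse=True)[:3])

    # Sources: accounts that emit information but receive none
    sources = [n for n in g if g.out_degree(n) > 0 and g.in_degree(n) == 0]

    # "Bridges": articulation points whose removal disconnects components
    bridges = list(nx.articulation_points(g.to_undirected()))
    print(sources, bridges)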
32. | Harichandran, Vikram S; Breitinger, Frank; Baggili, Ibrahim: Bytewise Approximate Matching: The Good, The Bad, and The Unknown. In: Journal of Digital Forensics, Security and Law, 11 (2), pp. 59–78, 2016. (Type: Journal Article | Abstract | Links | BibTeX) @article{HBB16, title = {Bytewise Approximate Matching: The Good, The Bad, and The Unknown}, author = {Vikram S. Harichandran AND Frank Breitinger AND Ibrahim Baggili}, doi = {10.15394/jdfsl.2016.1379}, year = {2016}, date = {2016-12-26}, journal = {Journal of Digital Forensics, Security and Law}, volume = {11}, number = {2}, pages = {59--78}, abstract = {Hash functions are established and well-known in digital forensics, where they are commonly used for proving integrity and file identification (i.e., hash all files on a seized device and compare the fingerprints against a reference database). However, with respect to the latter operation, an active adversary can easily overcome this approach because traditional hashes are designed to be sensitive to altering an input; output will significantly change if a single bit is flipped. Therefore, researchers developed approximate matching, which is a rather new, less prominent area but was conceived as a more robust counterpart to traditional hashing. Since the conception of approximate matching, the community has constructed numerous algorithms, extensions, and additional applications for this technology, and are still working on novel concepts to improve the status quo. In this survey article, we conduct a high-level review of the existing literature from a non-technical perspective and summarize the existing body of knowledge in approximate matching, with special focus on bytewise algorithms. Our contribution allows researchers and practitioners to receive an overview of the state of the art of approximate matching so that they may understand the capabilities and challenges of the field. Simply, we present the terminology, use cases, classification, requirements, testing methods, algorithms, applications, and a list of primary and secondary literature.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Hash functions are established and well-known in digital forensics, where they are commonly used for proving integrity and file identification (i.e., hash all files on a seized device and compare the fingerprints against a reference database). However, with respect to the latter operation, an active adversary can easily overcome this approach because traditional hashes are designed to be sensitive to altering an input; output will significantly change if a single bit is flipped. Therefore, researchers developed approximate matching, which is a rather new, less prominent area but was conceived as a more robust counterpart to traditional hashing. Since the conception of approximate matching, the community has constructed numerous algorithms, extensions, and additional applications for this technology, and are still working on novel concepts to improve the status quo. In this survey article, we conduct a high-level review of the existing literature from a non-technical perspective and summarize the existing body of knowledge in approximate matching, with special focus on bytewise algorithms. Our contribution allows researchers and practitioners to receive an overview of the state of the art of approximate matching so that they may understand the capabilities and challenges of the field. 
Simply, we present the terminology, use cases, classification, requirements, testing methods, algorithms, applications, and a list of primary and secondary literature. |
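The single-bit sensitivity that motivates entry 32 (and approximate matching generally) takes a few lines to demonstrate:

    import hashlib

    data = bytearray(b"known-illegal file contents")
    before = hashlib.sha256(data).hexdigest()
    data[0] ^= 0x01                      # flip one bit of the input
    after = hashlib.sha256(data).hexdigest()
    print(before)
    print(after)                         # bears no resemblance to 'before'

Approximate matching trades this all-or-nothing behavior for digests that degrade gracefully under small edits, which is what the survey's algorithms have in common.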
33. | Ricci, Joseph; Baggili, Ibrahim; Breitinger, Frank: Watch What You Wear: Smartwatches and Sluggish Security. In: Marrington, Andrew; Kerr, Don; Gammack, John (Ed.): Managing Security Issues and the Hidden Dangers of Wearable Technologies, pp. 47, IGI Global, 2016. (Type: Incollection | Abstract | Links | BibTeX) @incollection{RBB16, title = {Watch What You Wear: Smartwatches and Sluggish Security}, author = {Joseph Ricci AND Ibrahim Baggili AND Frank Breitinger}, editor = {Andrew Marrington AND Don Kerr AND John Gammack}, doi = {10.4018/978-1-5225-1016-1.ch003}, year = {2016}, date = {2016-09-01}, booktitle = {Managing Security Issues and the Hidden Dangers of Wearable Technologies}, journal = {Managing Security Issues and the Hidden Dangers of Wearable Technologies}, pages = {47}, publisher = {IGI Global}, abstract = {There is no doubt that the form factor of devices continues to shrink as evidenced by smartphones and most recently smartwatches. The adoption rate of small computing devices is staggering and needs stronger attention from the cybersecurity and digital forensics communities. In this chapter, we dissect smartwatches. We first present a historical roadmap of smartwatches. We then explore the smartwatch marketplace and outline existing smartwatch hardware, operating systems and software. Next we elaborate on the uses of smartwatches and then discuss the security and forensic implications of smartwatches by reviewing the relevant literature. Lastly, we outline future research directions in smartwatch security and forensics.}, keywords = {}, pubstate = {published}, tppubtype = {incollection} } There is no doubt that the form factor of devices continues to shrink as evidenced by smartphones and most recently smartwatches. The adoption rate of small computing devices is staggering and needs stronger attention from the cybersecurity and digital forensics communities. In this chapter, we dissect smartwatches. We first present a historical roadmap of smartwatches. We then explore the smartwatch marketplace and outline existing smartwatch hardware, operating systems and software. Next we elaborate on the uses of smartwatches and then discuss the security and forensic implications of smartwatches by reviewing the relevant literature. Lastly, we outline future research directions in smartwatch security and forensics. |
34. | Meffert, Christopher S; Baggili, Ibrahim; Breitinger, Frank: Deleting collected digital evidence by exploiting a widely adopted hardware write blocker. In: Digital Investigation, 18 , pp. 87–96, 2016, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{MBB16, title = {Deleting collected digital evidence by exploiting a widely adopted hardware write blocker}, author = {Christopher S. Meffert and Ibrahim Baggili and Frank Breitinger}, url = {http://www.sciencedirect.com/science/article/pii/S1742287616300354}, doi = {10.1016/j.diin.2016.04.004}, issn = {1742-2876}, year = {2016}, date = {2016-08-07}, journal = {Digital Investigation}, volume = {18}, pages = {87--96}, abstract = {In this primary work we call attention to the importance of integrating security testing into the process of testing digital forensic tools. We postulate that digital forensic tools are increasing in features (such as network imaging), becoming networkable, and are being proposed as forensic cloud services. This raises the need for testing the security of these tools, especially since digital evidence integrity is of paramount importance. At the time of conducting this work, little to no published anti-forensic research had focused on attacks against the forensic tools/process. We used the TD3, a popular, validated, touch screen disk duplicator and hardware write blocker with networking capabilities and designed an attack that corrupted the integrity of the destination drive (drive with the duplicated evidence) without the user's knowledge. By also modifying and repackaging the firmware update, we illustrated that a potential adversary is capable of leveraging a phishing attack scenario in order to trick digital forensic practitioners into updating the device with a malicious operating system. The same attack scenario may also be practiced by a disgruntled insider. The results also raise the question of whether security standards should be drafted and adopted by digital forensic tool makers.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In this primary work we call attention to the importance of integrating security testing into the process of testing digital forensic tools. We postulate that digital forensic tools are increasing in features (such as network imaging), becoming networkable, and are being proposed as forensic cloud services. This raises the need for testing the security of these tools, especially since digital evidence integrity is of paramount importance. At the time of conducting this work, little to no published anti-forensic research had focused on attacks against the forensic tools/process. We used the TD3, a popular, validated, touch screen disk duplicator and hardware write blocker with networking capabilities and designed an attack that corrupted the integrity of the destination drive (drive with the duplicated evidence) without the user's knowledge. By also modifying and repackaging the firmware update, we illustrated that a potential adversary is capable of leveraging a phishing attack scenario in order to trick digital forensic practitioners into updating the device with a malicious operating system. The same attack scenario may also be practiced by a disgruntled insider. The results also raise the question of whether security standards should be drafted and adopted by digital forensic tool makers. |
35. | Harichandran, Vikram S; Walnycky, Daniel; Baggili, Ibrahim; Breitinger, Frank: CuFA: A more formal definition for digital forensic artifacts. In: Digital Investigation, 18 , pp. 125–137, 2016, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{HWB16, title = {CuFA: A more formal definition for digital forensic artifacts}, author = {Vikram S. Harichandran and Daniel Walnycky and Ibrahim Baggili and Frank Breitinger}, url = {http://www.sciencedirect.com/science/article/pii/S1742287616300366}, doi = {10.1016/j.diin.2016.04.005}, issn = {1742-2876}, year = {2016}, date = {2016-08-07}, journal = {Digital Investigation}, volume = {18}, pages = {125--137}, abstract = {The term ``artifact'' currently does not have a formal definition within the domain of cyber/digital forensics, resulting in a lack of standardized reporting, linguistic understanding between professionals, and efficiency. In this paper we propose a new definition based on a survey we conducted, literature usage, prior definitions of the word itself, and similarities with archival science. This definition includes required fields that all artifacts must have and encompasses the notion of curation. Thus, we propose using a new term -- curated forensic artifact (CuFA) -- to address items which have been cleared for entry into a CuFA database (one implementation, the Artifact Genome Project, abbreviated as AGP, is under development and briefly outlined). An ontological model encapsulates these required fields while utilizing a lower-level taxonomic schema. We use the Cyber Observable eXpression (CybOX) project due to its rising popularity and rigorous classifications of forensic objects. Additionally, we suggest some improvements on its integration into our model and identify higher-level location categories to illustrate tracing an object from creation through investigative leads. Finally, a step-wise procedure for researching and logging CuFAs is devised to accompany the model.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The term ``artifact'' currently does not have a formal definition within the domain of cyber/digital forensics, resulting in a lack of standardized reporting, linguistic understanding between professionals, and efficiency. In this paper we propose a new definition based on a survey we conducted, literature usage, prior definitions of the word itself, and similarities with archival science. This definition includes required fields that all artifacts must have and encompasses the notion of curation. Thus, we propose using a new term -- curated forensic artifact (CuFA) -- to address items which have been cleared for entry into a CuFA database (one implementation, the Artifact Genome Project, abbreviated as AGP, is under development and briefly outlined). An ontological model encapsulates these required fields while utilizing a lower-level taxonomic schema. We use the Cyber Observable eXpression (CybOX) project due to its rising popularity and rigorous classifications of forensic objects. Additionally, we suggest some improvements on its integration into our model and identify higher-level location categories to illustrate tracing an object from creation through investigative leads. Finally, a step-wise procedure for researching and logging CuFAs is devised to accompany the model. |
36. | Conlan, Kevin; Baggili, Ibrahim; Breitinger, Frank: Anti-forensics: Furthering digital forensic science through a new extended, granular taxonomy. In: Digital Investigation, 18 , pp. 66–75, 2016, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{CBB16, title = {Anti-forensics: Furthering digital forensic science through a new extended, granular taxonomy}, author = {Kevin Conlan and Ibrahim Baggili and Frank Breitinger}, url = {http://www.sciencedirect.com/science/article/pii/S1742287616300378}, doi = {10.1016/j.diin.2016.04.006}, issn = {1742-2876}, year = {2016}, date = {2016-08-07}, journal = {Digital Investigation}, volume = {18}, pages = {66--75}, abstract = {Anti-forensic tools, techniques and methods are becoming a formidable obstacle for the digital forensic community. Thus, new research initiatives and strategies must be formulated to address this growing problem. In this work we first collect and categorize 308 anti-digital forensic tools to survey the field. We then devise an extended anti-forensic taxonomy to the one proposed by Rogers (2006) in order to create a more comprehensive taxonomy and facilitate linguistic standardization. Our work also takes into consideration anti-forensic activity which utilizes tools that were not originally designed for anti-forensic purposes, but can still be used with malicious intent. This category was labeled as Possible indications of anti-forensic activity, as certain software, scenarios, and digital artifacts could indicate anti-forensic activity on a system. We also publicly share our data sets, which include categorical data on 308 collected anti-forensic tools, as well as 2780 unique hash values related to the installation files of 191 publicly available anti-forensic tools. As part of our analysis, the collected hash set was run against the National Institute of Standards and Technology's 2016 National Software Reference Library, and only 423 matches were found out of the 2780 hashes. Our findings indicate a need for future endeavors in creating and maintaining exhaustive anti-forensic hash data sets.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Anti-forensic tools, techniques and methods are becoming a formidable obstacle for the digital forensic community. Thus, new research initiatives and strategies must be formulated to address this growing problem. In this work we first collect and categorize 308 anti-digital forensic tools to survey the field. We then devise an extended anti-forensic taxonomy to the one proposed by Rogers (2006) in order to create a more comprehensive taxonomy and facilitate linguistic standardization. Our work also takes into consideration anti-forensic activity which utilizes tools that were not originally designed for anti-forensic purposes, but can still be used with malicious intent. This category was labeled as Possible indications of anti-forensic activity, as certain software, scenarios, and digital artifacts could indicate anti-forensic activity on a system. We also publicly share our data sets, which include categorical data on 308 collected anti-forensic tools, as well as 2780 unique hash values related to the installation files of 191 publicly available anti-forensic tools. As part of our analysis, the collected hash set was run against the National Institute of Standards and Technology's 2016 National Software Reference Library, and only 423 matches were found out of the 2780 hashes.
Our findings indicate a need for future endeavors in creating and maintaining exhaustive anti-forensic hash data sets. |
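The NSRL comparison in entry 36 is plain set membership. A sketch assuming the legacy RDS NSRLFile.txt layout, where each row starts with a quoted SHA-1 (verify the column order against the RDS release you actually use):

    import csv

    def load_nsrl_sha1(path="NSRLFile.txt"):
        """Collect the SHA-1 column of a legacy NSRL RDS file (assumed layout)."""
        with open(path, newline="", encoding="latin-1") as fh:
            reader = csv.reader(fh)
            next(reader, None)                 # skip the header row
            return {row[0].lower() for row in reader}

    def not_in_reference(collected, nsrl):
        """Tool hashes absent from the reference set -- candidates for the
        anti-forensic hash data set the paper calls for."""
        return {h for h in collected if h.lower() not in nsrl}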
37. | Zhang, Xiaolu; Breitinger, Frank; Baggili, Ibrahim: Rapid Android Parser for Investigating DEX files (RAPID). In: Digital Investigation, 17 , pp. 28–39, 2016, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{ZBB16, title = {Rapid Android Parser for Investigating DEX files (RAPID)}, author = {Xiaolu Zhang and Frank Breitinger and Ibrahim Baggili}, url = {http://www.sciencedirect.com/science/article/pii/S1742287616300305}, doi = {10.1016/j.diin.2016.03.002}, issn = {1742-2876}, year = {2016}, date = {2016-03-25}, journal = {Digital Investigation}, volume = {17}, pages = {28--39}, abstract = {Android malware is a well-known challenging problem and many researchers/vendors/practitioners have tried to address this issue through application analysis techniques. In order to analyze Android applications, tools decompress APK files and extract relevant data from the Dalvik EXecutable (DEX) files. To acquire the data, investigators either use decompiled intermediate code generated by existing tools, e.g., Baksmali or Dex2jar, or write their own parsers/disassemblers. Thus, they either need additional time because of decompiling the application into an intermediate representation and then parsing text files, or they reinvent the wheel by implementing their own parsers. In this article, we present Rapid Android Parser for Investigating DEX files (RAPID) which is an open source and easy-to-use Java library for parsing DEX files. RAPID comes with well-documented APIs which allow users to query data directly from the DEX binary files. Our experiments reveal that RAPID outperforms existing approaches in terms of runtime efficiency, provides better reliability (does not crash) and can support dynamic analysis by finding critical offsets. Notably, the processing time for our sample set of 22.35 GB was only 1.5 h with RAPID while the traditional approaches needed about 23 h (parsing and querying).}, keywords = {}, pubstate = {published}, tppubtype = {article} } Android malware is a well-known challenging problem and many researchers/vendors/practitioners have tried to address this issue through application analysis techniques. In order to analyze Android applications, tools decompress APK files and extract relevant data from the Dalvik EXecutable (DEX) files. To acquire the data, investigators either use decompiled intermediate code generated by existing tools, e.g., Baksmali or Dex2jar, or write their own parsers/disassemblers. Thus, they either need additional time because of decompiling the application into an intermediate representation and then parsing text files, or they reinvent the wheel by implementing their own parsers. In this article, we present Rapid Android Parser for Investigating DEX files (RAPID) which is an open source and easy-to-use Java library for parsing DEX files. RAPID comes with well-documented APIs which allow users to query data directly from the DEX binary files. Our experiments reveal that RAPID outperforms existing approaches in terms of runtime efficiency, provides better reliability (does not crash) and can support dynamic analysis by finding critical offsets. Notably, the processing time for our sample set of 22.35 GB was only 1.5 h with RAPID while the traditional approaches needed about 23 h (parsing and querying). |
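Entry 37's core point — querying DEX binaries directly instead of decompiling — starts with the fixed 112-byte header defined by the public dex format specification. A minimal header reader (RAPID itself is a Java library; this Python sketch only mirrors the parsing idea):

    import struct

    FIELDS = ("file_size", "header_size", "endian_tag", "link_size", "link_off",
              "map_off", "string_ids_size", "string_ids_off", "type_ids_size",
              "type_ids_off", "proto_ids_size", "proto_ids_off", "field_ids_size",
              "field_ids_off", "method_ids_size", "method_ids_off",
              "class_defs_size", "class_defs_off", "data_size", "data_off")

    def dex_header(path):
        """Decode the fixed-size DEX header: magic, checksum, SHA-1 signature,
        then twenty little-endian uint32 section sizes/offsets."""
        with open(path, "rb") as fh:
            header = fh.read(112)
        if header[:4] != b"dex\n":
            raise ValueError("not a DEX file")
        return dict(zip(FIELDS, struct.unpack_from("<20I", header, 32)))

    info = dex_header("classes.dex")
    print(info["method_ids_size"], "methods,", info["string_ids_size"], "strings")

From these offsets an investigator can seek straight to the string, type, or method tables, which is why binary parsing avoids the decompile-then-reparse detour the abstract describes.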
38. | Gupta, Vikas; Breitinger, Frank: How Cuckoo Filter Can Improve Existing Approximate Matching Techniques. In: James, Joshua I; Breitinger, Frank (Ed.): Digital Forensics and Cyber Crime, pp. 39-52, Springer International Publishing, 2015, ISBN: 978-3-319-25511-8, (Best Paper Award). (Type: Inproceedings | Abstract | Links | BibTeX) @inproceedings{GB15, title = {How Cuckoo Filter Can Improve Existing Approximate Matching Techniques}, author = {Vikas Gupta and Frank Breitinger}, editor = {James, Joshua I. and Breitinger, Frank}, url = {http://dx.doi.org/10.1007/978-3-319-25512-5_4}, doi = {10.1007/978-3-319-25512-5_4}, isbn = {978-3-319-25511-8}, year = {2015}, date = {2015-12-25}, booktitle = {Digital Forensics and Cyber Crime}, volume = {157}, pages = {39-52}, publisher = {Springer International Publishing}, series = {Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering}, abstract = {In recent years, approximate matching algorithms have become an important component in digital forensic research and have been adopted in some other working areas as well. Currently there are several approaches but especially sdhash and mrsh-v2 attract the attention of the community because of their good overall performance (runtime, compression and detection rates). Although the two approaches proceed quite differently, their final output (the similarity digest) is very similar as both utilize Bloom filters. This data structure was presented in 1970 and thus has been around for a while. Recently, a new data structure was proposed and claimed to be faster and have a smaller memory footprint than Bloom filter -- Cuckoo filter. In this paper we analyze the feasibility of Cuckoo filter for approximate matching algorithms and present a prototype implementation called mrsh-cf which is based on a special version of mrsh-v2 called mrsh-net. We demonstrate that by using Cuckoo filter there is a runtime improvement of approximately 37% and also a significantly better false positive rate. The memory footprint of mrsh-cf is 8 times smaller than mrsh-net, while the compression rate is twice that of the Bloom filter based fingerprint.}, note = {Best Paper Award}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } In recent years, approximate matching algorithms have become an important component in digital forensic research and have been adopted in some other working areas as well. Currently there are several approaches but especially sdhash and mrsh-v2 attract the attention of the community because of their good overall performance (runtime, compression and detection rates). Although the two approaches proceed quite differently, their final output (the similarity digest) is very similar as both utilize Bloom filters. This data structure was presented in 1970 and thus has been around for a while. Recently, a new data structure was proposed and claimed to be faster and have a smaller memory footprint than Bloom filter -- Cuckoo filter. In this paper we analyze the feasibility of Cuckoo filter for approximate matching algorithms and present a prototype implementation called mrsh-cf which is based on a special version of mrsh-v2 called mrsh-net. We demonstrate that by using Cuckoo filter there is a runtime improvement of approximately 37% and also a significantly better false positive rate. The memory footprint of mrsh-cf is 8 times smaller than mrsh-net, while the compression rate is twice that of the Bloom filter based fingerprint. |
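Entry 38's data structure is easy to prototype. A toy cuckoo filter after Fan et al. (2014): one-byte fingerprints, two candidate buckets per item, eviction on overflow. It is not mrsh-cf, and Python's per-process salted hash() makes it valid only within a single run:

    import random

    class CuckooFilter:
        def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
            assert num_buckets & (num_buckets - 1) == 0   # power of two for the XOR trick
            self.n, self.size, self.kicks = num_buckets, bucket_size, max_kicks
            self.buckets = [[] for _ in range(num_buckets)]

        def _fp(self, item):
            return (hash(item) & 0xFF) or 1               # non-zero 8-bit fingerprint

        def _i1(self, item):
            return hash((0, item)) % self.n

        def _i2(self, i, fp):
            return i ^ (hash((1, fp)) % self.n)           # involution: _i2(_i2(i)) == i

        def insert(self, item):
            fp, i = self._fp(item), self._i1(item)
            for idx in (i, self._i2(i, fp)):
                if len(self.buckets[idx]) < self.size:
                    self.buckets[idx].append(fp)
                    return True
            idx = random.choice((i, self._i2(i, fp)))     # evict until something fits
            for _ in range(self.kicks):
                slot = random.randrange(len(self.buckets[idx]))
                fp, self.buckets[idx][slot] = self.buckets[idx][slot], fp
                idx = self._i2(idx, fp)
                if len(self.buckets[idx]) < self.size:
                    self.buckets[idx].append(fp)
                    return True
            return False                                  # filter is effectively full

        def __contains__(self, item):
            fp, i = self._fp(item), self._i1(item)
            return fp in self.buckets[i] or fp in self.buckets[self._i2(i, fp)]

Unlike a Bloom filter, a lookup touches at most two buckets and stored fingerprints can be deleted, which is where the runtime and footprint gains reported in the paper come from.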
39. | Harichandran, Vikram S; Breitinger, Frank; Baggili, Ibrahim; Marrington, Andrew: A cyber forensics needs analysis survey: Revisiting the domain's needs a decade later. In: Computers & Security, 57 , pp. 1–13, 2015, ISSN: 0167-4048. (Type: Journal Article | Abstract | Links | BibTeX) @article{HBB15, title = {A cyber forensics needs analysis survey: Revisiting the domain's needs a decade later}, author = {Vikram S. Harichandran and Frank Breitinger and Ibrahim Baggili and Andrew Marrington}, url = {http://www.sciencedirect.com/science/article/pii/S0167404815001595}, doi = {10.1016/j.cose.2015.10.007}, issn = {0167-4048}, year = {2015}, date = {2015-11-10}, journal = {Computers & Security}, volume = {57}, pages = {1--13}, abstract = {The number of successful cyber attacks continues to increase, threatening financial and personal security worldwide. Cyber/digital forensics is undergoing a paradigm shift in which evidence is frequently massive in size, demands live acquisition, and may be insufficient to convict a criminal residing in another legal jurisdiction. This paper presents the findings of the first broad needs analysis survey in cyber forensics in nearly a decade, aimed at obtaining an updated consensus of professional attitudes in order to optimize resource allocation and to prioritize problems and possible solutions more efficiently. Results from the 99 respondents gave compelling testimony that the following will be necessary in the future: (1) better education/training/certification (opportunities, standardization, and skill-sets); (2) support for cloud and mobile forensics; (3) backing for and improvement of open-source tools; (4) research on encryption, malware, and trail obfuscation; (5) revised laws (specific, up-to-date, and which protect user privacy); (6) better communication, especially between/with law enforcement (including establishing new frameworks to mitigate problematic communication); (7) more personnel and funding.}, keywords = {}, pubstate = {published}, tppubtype = {article} } The number of successful cyber attacks continues to increase, threatening financial and personal security worldwide. Cyber/digital forensics is undergoing a paradigm shift in which evidence is frequently massive in size, demands live acquisition, and may be insufficient to convict a criminal residing in another legal jurisdiction. This paper presents the findings of the first broad needs analysis survey in cyber forensics in nearly a decade, aimed at obtaining an updated consensus of professional attitudes in order to optimize resource allocation and to prioritize problems and possible solutions more efficiently. Results from the 99 respondents gave compelling testimony that the following will be necessary in the future: (1) better education/training/certification (opportunities, standardization, and skill-sets); (2) support for cloud and mobile forensics; (3) backing for and improvement of open-source tools; (4) research on encryption, malware, and trail obfuscation; (5) revised laws (specific, up-to-date, and which protect user privacy); (6) better communication, especially between/with law enforcement (including establishing new frameworks to mitigate problematic communication); (7) more personnel and funding. |
40. | Karpisek, Filip; Baggili, Ibrahim; Breitinger, Frank: WhatsApp network forensics: Decrypting and understanding the WhatsApp call signaling messages. In: Digital Investigation, 15 , pp. 110–118, 2015, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{KBB15, title = {WhatsApp network forensics: Decrypting and understanding the WhatsApp call signaling messages}, author = {Filip Karpisek and Ibrahim Baggili and Frank Breitinger}, url = {http://www.sciencedirect.com/science/article/pii/S1742287615000985}, doi = {10.1016/j.diin.2015.09.002}, issn = {1742-2876}, year = {2015}, date = {2015-10-10}, journal = {Digital Investigation}, volume = {15}, pages = {110--118}, abstract = {WhatsApp is a widely adopted mobile messaging application with over 800 million users. Recently, a calling feature was added to the application and no comprehensive digital forensic analysis has been performed with regard to this feature at the time of writing this paper. In this work, we describe how we were able to decrypt the network traffic and obtain forensic artifacts that relate to this new calling feature, which included: a) WhatsApp phone numbers, b) WhatsApp server IPs, c) WhatsApp audio codec (Opus), d) WhatsApp call duration, and e) WhatsApp's call termination. We explain the methods and tools used to decrypt the traffic as well as thoroughly elaborate on our findings with respect to the WhatsApp signaling messages. Furthermore, we also provide the community with a tool that helps in the visualization of the WhatsApp protocol messages.}, keywords = {}, pubstate = {published}, tppubtype = {article} } WhatsApp is a widely adopted mobile messaging application with over 800 million users. Recently, a calling feature was added to the application and no comprehensive digital forensic analysis has been performed with regard to this feature at the time of writing this paper. In this work, we describe how we were able to decrypt the network traffic and obtain forensic artifacts that relate to this new calling feature, which included: a) WhatsApp phone numbers, b) WhatsApp server IPs, c) WhatsApp audio codec (Opus), d) WhatsApp call duration, and e) WhatsApp's call termination. We explain the methods and tools used to decrypt the traffic as well as thoroughly elaborate on our findings with respect to the WhatsApp signaling messages. Furthermore, we also provide the community with a tool that helps in the visualization of the WhatsApp protocol messages. |
41. | James, Joshua I; Breitinger, Frank (Ed.): Digital Forensics and Cyber Crime: 7th International Conference, ICDF2C 2015, Seoul, South Korea, October 6-8, 2015, Revised Selected Papers. Springer, 2015, ISBN: 978-3-319-25511-8. (Type: Book | Links | BibTeX) @book{JB15, title = {Digital Forensics and Cyber Crime: 7th International Conference, ICDF2C 2015, Seoul, South Korea, October 6-8, 2015, Revised Selected Papers}, editor = {Joshua I. James and Frank Breitinger}, url = {http://dx.doi.org/10.1007/978-3-319-25512-5}, doi = {10.1007/978-3-319-25512-5}, isbn = {978-3-319-25511-8}, year = {2015}, date = {2015-10-08}, volume = {157}, publisher = {Springer}, series = {Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering}, keywords = {}, pubstate = {published}, tppubtype = {book} } |
42. | Baggili, Ibrahim; Oduru, Jeff; Anthony, Kyle; Breitinger, Frank; McGee, Glenn: Watch What You Wear: Preliminary Forensic Analysis of Smart Watches. In: Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 303-311, 2015. (Type: Inproceedings | Abstract | Links | BibTeX) @inproceedings{BOA15, title = {Watch What You Wear: Preliminary Forensic Analysis of Smart Watches}, author = {Ibrahim Baggili and Jeff Oduru and Kyle Anthony and Frank Breitinger and Glenn McGee}, doi = {10.1109/ARES.2015.39}, year = {2015}, date = {2015-08-27}, booktitle = {Availability, Reliability and Security (ARES), 2015 10th International Conference on}, pages = {303-311}, abstract = {This work presents preliminary forensic analysis of two popular smart watches, the Samsung Gear 2 Neo and LG G. These wearable computing devices have the form factor of watches and sync with smart phones to display notifications, track footsteps and record voice messages. We posit that as smart watches are adopted by more users, the potential for them becoming a haven for digital evidence will increase thus providing utility for this preliminary work. In our work, we examined the forensic artifacts that are left on a Samsung Galaxy S4 Active phone that was used to sync with the Samsung Gear 2 Neo watch and the LG G watch. We further outline a methodology for physically acquiring data from the watches after gaining root access to them. Our results show that we can recover a swath of digital evidence directly from the watches when compared to the data on the phone that is synced with the watches. Furthermore, to root the LG G watch, the watch has to be reset to its factory settings, which is alarming because the process may delete data of forensic relevance. Although this method is forensically intrusive, it may be used for acquiring data from already rooted LG watches. It is our observation that the data at the core of the functionality of at least the two tested smart watches, messages, health and fitness data, e-mails, contacts, events and notifications are accessible directly from the acquired images of the watches, which affirms our claim that the forensic value of evidence from smart watches is worthy of further study and should be investigated both at a high level and with greater specificity and granularity.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } This work presents preliminary forensic analysis of two popular smart watches, the Samsung Gear 2 Neo and LG G. These wearable computing devices have the form factor of watches and sync with smart phones to display notifications, track footsteps and record voice messages. We posit that as smart watches are adopted by more users, the potential for them becoming a haven for digital evidence will increase thus providing utility for this preliminary work. In our work, we examined the forensic artifacts that are left on a Samsung Galaxy S4 Active phone that was used to sync with the Samsung Gear 2 Neo watch and the LG G watch. We further outline a methodology for physically acquiring data from the watches after gaining root access to them. Our results show that we can recover a swath of digital evidence directly from the watches when compared to the data on the phone that is synced with the watches. Furthermore, to root the LG G watch, the watch has to be reset to its factory settings, which is alarming because the process may delete data of forensic relevance.
Although this method is forensically intrusive, it may be used for acquiring data from already rooted LG watches. It is our observation that the data at the core of the functionality of at least the two tested smart watches (messages, health and fitness data, e-mails, contacts, events and notifications) are accessible directly from the acquired images of the watches, which affirms our claim that the forensic value of evidence from smart watches is worthy of further study and should be investigated both at a high level and with greater specificity and granularity. |
43. | Walnycky, Daniel; Baggili, Ibrahim; Marrington, Andrew; Moore, Jason; Breitinger, Frank: Network and device forensic analysis of Android social-messaging applications. In: Digital Investigation, 14, Supplement 1, pp. 77–84, 2015, ISSN: 1742-2876, (The Proceedings of the Fifteenth Annual DFRWS Conference). (Type: Journal Article | Abstract | Links | BibTeX) @article{WBM15, title = {Network and device forensic analysis of Android social-messaging applications}, author = {Daniel Walnycky and Ibrahim Baggili and Andrew Marrington and Jason Moore and Frank Breitinger}, url = {http://www.sciencedirect.com/science/article/pii/S1742287615000547}, doi = {10.1016/j.diin.2015.05.009}, issn = {1742-2876}, year = {2015}, date = {2015-08-09}, journal = {Digital Investigation}, volume = {14, Supplement 1}, pages = {77--84}, abstract = {In this research we forensically acquire and analyze the device-stored data and network traffic of 20 popular instant messaging applications for Android. We were able to reconstruct some or all of the message content from 16 of the 20 applications tested, which reflects poorly on the security and privacy measures employed by these applications but may be construed positively for evidence collection purposes by digital forensic practitioners. This work shows which features of these instant messaging applications leave evidentiary traces allowing for suspect data to be reconstructed or partially reconstructed, and whether network forensics or device forensics permits the reconstruction of that activity. We show that in most cases we were able to reconstruct or intercept data such as passwords, screenshots taken by applications, pictures, videos, audio sent, messages sent, sketches, profile pictures and more.}, note = {The Proceedings of the Fifteenth Annual DFRWS Conference}, keywords = {}, pubstate = {published}, tppubtype = {article} } In this research we forensically acquire and analyze the device-stored data and network traffic of 20 popular instant messaging applications for Android. We were able to reconstruct some or all of the message content from 16 of the 20 applications tested, which reflects poorly on the security and privacy measures employed by these applications but may be construed positively for evidence collection purposes by digital forensic practitioners. This work shows which features of these instant messaging applications leave evidentiary traces allowing for suspect data to be reconstructed or partially reconstructed, and whether network forensics or device forensics permits the reconstruction of that activity. We show that in most cases we were able to reconstruct or intercept data such as passwords, screenshots taken by applications, pictures, videos, audio sent, messages sent, sketches, profile pictures and more. |
44. | Rathgeb, Christian; Breitinger, Frank; Baier, Harald; Busch, Christoph: Towards Bloom filter-based indexing of iris biometric data. In: Biometrics (ICB), 2015 International Conference on, pp. 422–429, IEEE 2015, (Siew-Sngiem Best Poster Award). (Type: Inproceedings | Abstract | Links | BibTeX) @inproceedings{7139105, title = {Towards Bloom filter-based indexing of iris biometric data}, author = {Christian Rathgeb and Frank Breitinger and Harald Baier and Christoph Busch}, doi = {10.1109/ICB.2015.7139105}, year = {2015}, date = {2015-05-22}, booktitle = {Biometrics (ICB), 2015 International Conference on}, pages = {422--429}, organization = {IEEE}, abstract = {Conventional biometric identification systems require exhaustive 1:N comparisons in order to identify a biometric probe, i.e., comparison time frequently dominates the overall computational workload. Biometric database indexing represents a challenging task since biometric data does not exhibit any natural sorting order. In this paper we present a preliminary study on the feasibility of applying Bloom filters for the purpose of iris biometric database indexing. It is shown that, by constructing a binary tree data structure of Bloom filters extracted from binary iris biometric templates (iris-codes), the search space can be reduced to O(log N). In experiments, which are carried out on a medium-sized database of N = 256 subjects, biometric performance (accuracy) is maintained for different conventional identification systems. Further, perspectives on how to employ the proposed scheme on large-scale databases are given.}, note = {Siew-Sngiem Best Poster Award}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Conventional biometric identification systems require exhaustive 1:N comparisons in order to identify a biometric probe, i.e., comparison time frequently dominates the overall computational workload. Biometric database indexing represents a challenging task since biometric data does not exhibit any natural sorting order. In this paper we present a preliminary study on the feasibility of applying Bloom filters for the purpose of iris biometric database indexing. It is shown that, by constructing a binary tree data structure of Bloom filters extracted from binary iris biometric templates (iris-codes), the search space can be reduced to O(log N). In experiments, which are carried out on a medium-sized database of N = 256 subjects, biometric performance (accuracy) is maintained for different conventional identification systems. Further, perspectives on how to employ the proposed scheme on large-scale databases are given. |
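The binary-tree-of-Bloom-filters search that this abstract describes can be sketched compactly. The following is a minimal, illustrative Python sketch, not the authors' implementation: it assumes each template is already reduced to a set of integer feature words, uses a single hash function, and all names (Node, build, identify) and the filter size are made up for the example. Inner nodes hold the bitwise OR of their children, and identification descends toward the child whose filter shares more set bits with the probe, visiting O(log N) nodes instead of N.

from dataclasses import dataclass
from typing import List, Optional

FILTER_BITS = 1024  # illustrative Bloom filter size

def bloom(words) -> int:
    """Set one bit per feature word (single hash function for brevity)."""
    bf = 0
    for w in words:
        bf |= 1 << (hash(w) % FILTER_BITS)
    return bf

@dataclass
class Node:
    bf: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    template_id: Optional[int] = None  # set on leaves only

def build(templates: List[set]) -> Node:
    """Build the tree bottom-up; inner nodes hold the OR of their children."""
    nodes = [Node(bf=bloom(t), template_id=i) for i, t in enumerate(templates)]
    while len(nodes) > 1:
        paired = []
        for i in range(0, len(nodes) - 1, 2):
            l, r = nodes[i], nodes[i + 1]
            paired.append(Node(bf=l.bf | r.bf, left=l, right=r))
        if len(nodes) % 2:          # odd count: carry the last node up
            paired.append(nodes[-1])
        nodes = paired
    return nodes[0]

def overlap(a: int, b: int) -> int:
    """Number of bits set in both filters."""
    return bin(a & b).count("1")

def identify(root: Node, probe: set) -> int:
    """Descend toward the child sharing more bits with the probe: O(log N)."""
    p = bloom(probe)
    node = root
    while node.template_id is None:
        node = node.left if overlap(node.left.bf, p) >= overlap(node.right.bf, p) else node.right
    return node.template_id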
45. | Gurjar, Satyendra; Baggili, Ibrahim; Breitinger, Frank; Fischer, Alice: An empirical comparison of widely adopted hash functions in digital forensics: does the programming language and operating system make a difference? In: Proceedings of the Conference on Digital Forensics, Security and Law, pp. 57–68, 2015. (Type: Inproceedings | Abstract | Links | BibTeX) @inproceedings{SBBF15, title = {An empirical comparison of widely adopted hash functions in digital forensics: does the programming language and operating system make a difference?}, author = {Satyendra Gurjar and Ibrahim Baggili and Frank Breitinger and Alice Fischer}, url = {https://commons.erau.edu/adfsl/2015/tuesday/6/}, year = {2015}, date = {2015-05-19}, booktitle = {Proceedings of the Conference on Digital Forensics, Security and Law}, pages = {57--68}, abstract = {Hash functions are widespread in computer sciences and have a wide range of applications such as ensuring integrity in cryptographic protocols, structuring database entries (hash tables) or identifying known files in forensic investigations. Besides their cryptographic requirements, a fundamental property of hash functions is efficient and easy computation, which is especially important in digital forensics due to the large amount of data that needs to be processed in cases. In this paper, we correlate the runtime efficiency of common hashing algorithms (MD5, SHA-family) and their implementation. Our empirical comparison focuses on C-OpenSSL, Python, Ruby, Java on Windows and Linux and C and WinCrypto API on Windows. The purpose of this paper is to recommend appropriate programming languages and libraries for coding tools that include intensive hashing functionality. In each programming language, we compute the MD5, SHA-1, SHA-256 and SHA-512 digest on datasets from 2 MB to 1 GB. For each language, algorithm and data, we perform multiple runs and compute the average elapsed time. In our experiment, we observed that OpenSSL and languages utilizing OpenSSL (Python and Ruby) perform better across all the hashing algorithms and data sizes on Windows and Linux. However, on Windows, performance of Java (Oracle JDK) and C WinCrypto is comparable to OpenSSL and better for SHA-512.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Hash functions are widespread in computer sciences and have a wide range of applications such as ensuring integrity in cryptographic protocols, structuring database entries (hash tables) or identifying known files in forensic investigations. Besides their cryptographic requirements, a fundamental property of hash functions is efficient and easy computation, which is especially important in digital forensics due to the large amount of data that needs to be processed in cases. In this paper, we correlate the runtime efficiency of common hashing algorithms (MD5, SHA-family) and their implementation. Our empirical comparison focuses on C-OpenSSL, Python, Ruby, Java on Windows and Linux and C and WinCrypto API on Windows. The purpose of this paper is to recommend appropriate programming languages and libraries for coding tools that include intensive hashing functionality. In each programming language, we compute the MD5, SHA-1, SHA-256 and SHA-512 digest on datasets from 2 MB to 1 GB. For each language, algorithm and data, we perform multiple runs and compute the average elapsed time.
In our experiment, we observed that OpenSSL and languages utilizing OpenSSL (Python and Ruby) perform better across all the hashing algorithms and data sizes on Windows and Linux. However, on Windows, performance of Java (Oracle JDK) and C WinCrypto is comparable to OpenSSL and better for SHA-512. |
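Readers who want to reproduce this methodology on their own platform can do so in a few lines. The sketch below is illustrative only (buffer sizes and run counts are not the paper's setup): it times Python's built-in hashlib, which wraps OpenSSL, over the four algorithms compared above and reports the average elapsed time per run.

import hashlib
import os
import time

ALGORITHMS = ["md5", "sha1", "sha256", "sha512"]
SIZES = [2 * 1024**2, 64 * 1024**2]  # 2 MB and 64 MB buffers (illustrative)
RUNS = 5

for size in SIZES:
    data = os.urandom(size)                 # random input buffer
    for name in ALGORITHMS:
        start = time.perf_counter()
        for _ in range(RUNS):
            hashlib.new(name, data).hexdigest()
        avg = (time.perf_counter() - start) / RUNS
        print(f"{name:7s} {size // 1024**2:4d} MB  avg {avg:.4f} s")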
46. | Baggili, Ibrahim; Breitinger, Frank: Data Sources for Advancing Cyber Forensics: What the Social World Has to Offer. In: AAAI Spring Symposium Series, 2015. (Type: Inproceedings | Abstract | Links | BibTeX) @inproceedings{BB15c, title = {Data Sources for Advancing Cyber Forensics: What the Social World Has to Offer}, author = {Ibrahim Baggili and Frank Breitinger}, url = {http://aaai.org/ocs/index.php/SSS/SSS15/paper/view/10227}, year = {2015}, date = {2015-03-12}, booktitle = {AAAI Spring Symposium Series}, abstract = {Cyber forensics is fairly new as a scientific discipline and deals with the acquisition, authentication and analysis of digital evidence. One of the biggest challenges in this domain has thus far been the availability of real data sources for experimentation. Only a few data sources exist at the time of writing this paper. The authors in this paper deliberate on how social media data sources may impact future directions in cyber forensics, and describe how these data sources may be used as new digital forensic artifacts in future investigations. The authors also deliberate on how the scientific community may leverage publicly accessible social media data to advance the state of the art in Cyber Forensics.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Cyber forensics is fairly new as a scientific discipline and deals with the acquisition, authentication and analysis of digital evidence. One of the biggest challenges in this domain has thus far been the availability of real data sources for experimentation. Only a few data sources exist at the time of writing this paper. The authors in this paper deliberate on how social media data sources may impact future directions in cyber forensics, and describe how these data sources may be used as new digital forensic artifacts in future investigations. The authors also deliberate on how the scientific community may leverage publicly accessible social media data to advance the state of the art in Cyber Forensics. |
47. | Breitinger, Frank; Liu, Huajian; Winter, Christian; Baier, Harald; Rybalchenko, Alexey; Steinebach, Martin: Towards a Process Model for Hash Functions in Digital Forensics. In: Gladyshev, Pavel; Marrington, Andrew; Baggili, Ibrahim (Ed.): Digital Forensics and Cyber Crime, pp. 170-186, Springer International Publishing, 2014, ISBN: 978-3-319-14288-3. (Type: Inproceedings | Abstract | Links | BibTeX) @inproceedings{BLW14, title = {Towards a Process Model for Hash Functions in Digital Forensics}, author = {Frank Breitinger and Huajian Liu and Christian Winter and Harald Baier and Alexey Rybalchenko and Martin Steinebach}, editor = {Pavel Gladyshev and Andrew Marrington and Ibrahim Baggili}, url = {http://dx.doi.org/10.1007/978-3-319-14289-0_12}, doi = {10.1007/978-3-319-14289-0_12}, isbn = {978-3-319-14288-3}, year = {2014}, date = {2014-12-23}, booktitle = {Digital Forensics and Cyber Crime}, volume = {132}, pages = {170-186}, publisher = {Springer International Publishing}, series = {Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering}, abstract = {Handling forensic investigations becomes more and more difficult as the amount of data one has to analyze increases continuously. A common approach for automated file identification is the use of hash functions. The procedure is quite simple: a tool hashes all files of a seized device and compares them against a database. Depending on the database, this allows one to discard non-relevant files (whitelisting) or to detect suspicious files (blacklisting). One can distinguish three kinds of algorithms: (cryptographic) hash functions, bytewise approximate matching and semantic approximate matching (a.k.a. perceptual hashing), where the main difference is the level of operation. The latter operates on the semantic level, while the other two approaches consider the byte level. Hence, investigators have three different approaches at hand to analyze a device. First, this paper gives a comprehensive overview of existing approaches for bytewise and semantic approximate matching (for semantic approximate matching we focus on image functions). Second, we compare implementations and summarize the strengths and weaknesses of all approaches. Third, we show how to integrate these functions based on a sample use case into one existing process model, the computer forensics field triage process model.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Handling forensic investigations becomes more and more difficult as the amount of data one has to analyze increases continuously. A common approach for automated file identification is the use of hash functions. The procedure is quite simple: a tool hashes all files of a seized device and compares them against a database. Depending on the database, this allows one to discard non-relevant files (whitelisting) or to detect suspicious files (blacklisting). One can distinguish three kinds of algorithms: (cryptographic) hash functions, bytewise approximate matching and semantic approximate matching (a.k.a. perceptual hashing), where the main difference is the level of operation. The latter operates on the semantic level, while the other two approaches consider the byte level. Hence, investigators have three different approaches at hand to analyze a device. First, this paper gives a comprehensive overview of existing approaches for bytewise and semantic approximate matching (for semantic approximate matching we focus on image functions). Second, we compare implementations and summarize the strengths and weaknesses of all approaches.
Third, we show how to integrate these functions based on a sample use case into one existing process model, the computer forensics field triage process model. |
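The hash-database step that this abstract describes ("a tool hashes all files of a seized device and compares them against a database") is simple enough to sketch directly. The Python sketch below is illustrative, not the paper's tool: it walks a mounted image, hashes each file with SHA-256, and splits the files into known-good, known-bad, and unknown; the whitelist and blacklist sets are placeholders for real reference databases such as the NSRL RDS.

import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash a file in 1 MB chunks to keep memory use constant."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def triage(mount: Path, whitelist: set, blacklist: set):
    """Split files into known-good, known-bad, and unknown by digest."""
    known_good, known_bad, unknown = [], [], []
    for path in mount.rglob("*"):
        if not path.is_file():
            continue
        digest = sha256_file(path)
        if digest in blacklist:
            known_bad.append(path)      # flag for the investigator
        elif digest in whitelist:
            known_good.append(path)     # discard from manual review
        else:
            unknown.append(path)
    return known_good, known_bad, unknown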
48. | Rathgeb, Christian; Breitinger, Frank; Busch, Christoph; Baier, Harald: On application of bloom filters to iris biometrics. In: IET Biometrics, 3 (4), pp. 207-218, 2014, ISSN: 2047-4938. (Type: Journal Article | Abstract | Links | BibTeX) @article{RBBB14, title = {On application of bloom filters to iris biometrics}, author = {Christian Rathgeb and Frank Breitinger and Christoph Busch and Harald Baier}, doi = {10.1049/iet-bmt.2013.0049}, issn = {2047-4938}, year = {2014}, date = {2014-12-18}, journal = {IET Biometrics}, volume = {3}, number = {4}, pages = {207-218}, abstract = {In this study, the application of adaptive Bloom filters to binary iris biometric feature vectors, that is, iris-codes, is proposed. Bloom filters, which have been established as a powerful tool in various fields of computer science, are applied in order to transform iris-codes to a rotation-invariant feature representation. Properties of the proposed Bloom filter-based transform concurrently enable (i) biometric template protection, (ii) compression of biometric data and (iii) acceleration of biometric identification, whereas at the same time no significant degradation of biometric performance is observed. According to these fields of application, detailed investigations are presented. Experiments are conducted on the CASIA-v3 iris database for different feature extraction algorithms. Confirming the soundness of the proposed approach, the application of adaptive Bloom filters achieves rotation-invariant cancelable templates maintaining biometric performance, a compression of templates down to 20-40% of original size and a reduction of bit-comparisons to less than 5% leading to a substantial speed-up of the biometric system in identification mode.}, keywords = {}, pubstate = {published}, tppubtype = {article} } In this study, the application of adaptive Bloom filters to binary iris biometric feature vectors, that is, iris-codes, is proposed. Bloom filters, which have been established as a powerful tool in various fields of computer science, are applied in order to transform iris-codes to a rotation-invariant feature representation. Properties of the proposed Bloom filter-based transform concurrently enable (i) biometric template protection, (ii) compression of biometric data and (iii) acceleration of biometric identification, whereas at the same time no significant degradation of biometric performance is observed. According to these fields of application, detailed investigations are presented. Experiments are conducted on the CASIA-v3 iris database for different feature extraction algorithms. Confirming the soundness of the proposed approach, the application of adaptive Bloom filters achieves rotation-invariant cancelable templates maintaining biometric performance, a compression of templates down to 20-40% of original size and a reduction of bit-comparisons to less than 5% leading to a substantial speed-up of the biometric system in identification mode. |
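The column-wise Bloom filter transform behind this paper's rotation invariance can be illustrated as follows. This is a simplified sketch under the assumption that the iris-code is a binary matrix whose circular column shifts correspond to head rotation; the block width, filter size, and the dissimilarity measure are illustrative choices, not the paper's parameters. Each column of height h is read as an h-bit integer that sets one bit in the block's 2^h-bit filter, so shifting columns within a block leaves the filter largely unchanged.

import numpy as np

def bloom_transform(iris_code: np.ndarray, block_cols: int = 32) -> list:
    """Map each block of columns of a binary iris-code into one Bloom filter.

    Each column is interpreted as an h-bit integer addressing one bit of
    the block's 2**h-bit filter; h must stay small (e.g., 10) for the
    filter to be of practical size.
    """
    h, w = iris_code.shape
    weights = 1 << np.arange(h)                    # column bits -> integer
    filters = []
    for start in range(0, w, block_cols):
        block = iris_code[:, start:start + block_cols].astype(np.int64)
        bf = np.zeros(2 ** h, dtype=bool)
        bf[weights @ block] = True                 # one set bit per column
        filters.append(bf)
    return filters

def dissimilarity(fa: list, fb: list) -> float:
    """Average normalized set difference between corresponding filters."""
    scores = []
    for a, b in zip(fa, fb):
        union = np.logical_or(a, b).sum()
        scores.append(np.logical_xor(a, b).sum() / union if union else 0.0)
    return float(np.mean(scores))

Because set membership in a filter ignores where in the block a column occurred, comparing two transformed templates needs no shift-and-realign loop, which is also what enables the compression and speed-up figures reported above.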
49. | Breitinger, Frank; Rathgeb, Christian; Baier, Harald: An Efficient Similarity Digests Database Lookup - A Logarithmic Divide & Conquer Approach. In: Journal of Digital Forensics, Security and Law (JDFSL), 9 (2), pp. 155–166, 2014. (Type: Journal Article | Abstract | Links | BibTeX) @article{BRB14, title = {An Efficient Similarity Digests Database Lookup - A Logarithmic Divide & Conquer Approach}, author = {Frank Breitinger and Christian Rathgeb and Harald Baier}, url = {https://doi.org/10.15394/jdfsl.2014.1178}, doi = {10.15394/jdfsl.2014.1178}, year = {2014}, date = {2014-09-01}, journal = {Journal of Digital Forensics, Security and Law (JDFSL)}, volume = {9}, number = {2}, pages = {155--166}, abstract = {Investigating seized devices within digital forensics represents a challenging task due to the increasing amount of data. Common procedures utilize automated file identification, which reduces the amount of data an investigator has to examine manually. In the past years, the research field of approximate matching has arisen to detect similar data. However, if n denotes the number of similarity digests in a database, then the lookup for a single similarity digest is of complexity O(n). This paper presents a concept to extend existing approximate matching algorithms, which reduces the lookup complexity from O(n) to O(log(n)). Our proposed approach is based on the well-known divide and conquer paradigm and builds a Bloom filter-based tree data structure in order to enable an efficient lookup of similarity digests. Further, it is demonstrated that the presented technique is highly scalable, offering a trade-off between storage requirements and computational efficiency. We perform a theoretical assessment based on recently published results and reasonable magnitudes of input data, and show that the complexity reduction achieved by the proposed technique yields a 2^20-fold acceleration of look-up costs.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Investigating seized devices within digital forensics represents a challenging task due to the increasing amount of data. Common procedures utilize automated file identification, which reduces the amount of data an investigator has to examine manually. In the past years, the research field of approximate matching has arisen to detect similar data. However, if n denotes the number of similarity digests in a database, then the lookup for a single similarity digest is of complexity O(n). This paper presents a concept to extend existing approximate matching algorithms, which reduces the lookup complexity from O(n) to O(log(n)). Our proposed approach is based on the well-known divide and conquer paradigm and builds a Bloom filter-based tree data structure in order to enable an efficient lookup of similarity digests. Further, it is demonstrated that the presented technique is highly scalable, offering a trade-off between storage requirements and computational efficiency. We perform a theoretical assessment based on recently published results and reasonable magnitudes of input data, and show that the complexity reduction achieved by the proposed technique yields a 2^20-fold acceleration of look-up costs. |
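The tree structure here mirrors the Bloom filter tree sketched after entry 44 above, applied to similarity digests rather than iris templates. As a back-of-envelope check on the order of magnitude of the acceleration (the database size n below is chosen purely for illustration, not taken from the paper), replacing a linear scan over n digests with a descent through a balanced tree reduces the comparison count from O(n) to O(log n):

% Illustrative speed-up estimate; n = 2^{25} is an assumed database size.
\[
  \frac{\text{linear scan}}{\text{tree lookup}}
  = \frac{O(n)}{O(\log_2 n)}
  \;\xrightarrow{\;n = 2^{25}\;}\;
  \frac{2^{25}}{\log_2 2^{25}} = \frac{33\,554\,432}{25}
  \approx 1.3 \times 10^{6} \approx 2^{20}.
\]

So for databases of a few tens of millions of digests, a speed-up on the order of 2^20 is plausible, at the cost of storing the inner nodes of the tree.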
50. | Breitinger, Frank; Stivaktakis, Georgios; Baier, Harald: FRASH: A Framework to Test Algorithms of Similarity Hashing. In: Digit. Investig., 10, pp. S50–S58, 2014, ISSN: 1742-2876. (Type: Journal Article | Abstract | Links | BibTeX) @article{BSB13, title = {FRASH: A Framework to Test Algorithms of Similarity Hashing}, author = {Frank Breitinger and Georgios Stivaktakis and Harald Baier}, url = {http://dx.doi.org/10.1016/j.diin.2013.06.006}, doi = {10.1016/j.diin.2013.06.006}, issn = {1742-2876}, year = {2014}, date = {2014-08-03}, journal = {Digit. Investig.}, volume = {10}, pages = {S50--S58}, publisher = {Elsevier Science Publishers B. V.}, address = {Amsterdam, The Netherlands}, abstract = {Automated input identification is a very challenging but important task. Within computer forensics this reduces the amount of data an investigator has to look at by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is necessary to cope with similar inputs (e.g., different versions of a file), embedded objects (e.g., a JPG within a Word document), and fragments (e.g., network packets), too. Over the recent years, a number of different similarity hashing algorithms have been published. However, due to the absence of a definition and a test framework, it is hardly possible to evaluate and compare these approaches to establish them in the community. The paper at hand aims at providing an assessment methodology and a sample implementation called FRASH: a framework to test algorithms of similarity hashing. First, we describe common use cases of a similarity hashing algorithm to motivate our two test classes, efficiency and sensitivity & robustness. Next, our open and freely available framework is briefly described. Finally, we apply FRASH to the well-known similarity hashing approaches ssdeep and sdhash to show their strengths and weaknesses.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Automated input identification is a very challenging but important task. Within computer forensics this reduces the amount of data an investigator has to look at by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is necessary to cope with similar inputs (e.g., different versions of a file), embedded objects (e.g., a JPG within a Word document), and fragments (e.g., network packets), too. Over the recent years, a number of different similarity hashing algorithms have been published. However, due to the absence of a definition and a test framework, it is hardly possible to evaluate and compare these approaches to establish them in the community. The paper at hand aims at providing an assessment methodology and a sample implementation called FRASH: a framework to test algorithms of similarity hashing. First, we describe common use cases of a similarity hashing algorithm to motivate our two test classes, efficiency and sensitivity & robustness. Next, our open and freely available framework is briefly described. Finally, we apply FRASH to the well-known similarity hashing approaches ssdeep and sdhash to show their strengths and weaknesses. |
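A sensitivity & robustness test in the spirit of FRASH can be sketched as below. This is an illustrative Python sketch, not the framework itself: it mutates a growing fraction of an input's bytes and checks whether a similarity score still exceeds a match threshold. The similarity function here is a trivial stand-in (byte-level agreement on a 0-100 scale); a real run would plug in ssdeep or sdhash, and the mutation rates and threshold are assumptions.

import os
import random

def mutate(data: bytes, fraction: float) -> bytes:
    """Flip a random byte at `fraction` of all positions."""
    buf = bytearray(data)
    for i in random.sample(range(len(buf)), int(len(buf) * fraction)):
        buf[i] ^= 0xFF
    return bytes(buf)

def robustness_curve(data: bytes, similarity, threshold: int = 20):
    """Report the similarity score as the mutation rate grows."""
    for percent in (1, 2, 5, 10, 20):
        score = similarity(data, mutate(data, percent / 100))
        verdict = "match" if score >= threshold else "miss"
        print(f"{percent:2d}% mutated -> score {score:3d} ({verdict})")

if __name__ == "__main__":
    # Trivial stand-in metric: fraction of equal bytes, scaled to 0-100.
    naive = lambda a, b: int(100 * sum(x == y for x, y in zip(a, b)) / len(a))
    robustness_curve(os.urandom(65536), naive)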