Authors: Jens-Petter Sandvik, Katrin Franke, Habtamu Abie, André Årnes



Forensic investigations are often conducted under limited resources such as time, equipment, and personnel. Because data acquisition is already resource-demanding, greater emphasis must be placed on prioritizing the investigative steps to maximize the probability of collecting the relevant evidence. Data volatility measures how quickly data disappears from a system and is an essential part of assessing the likelihood of collecting the most valuable evidence. An investigator can use a volatility model to estimate the probability that evidence still exists. This work motivates and details a model for data volatility and exemplifies it for the Coffee File System used in Contiki OS, an operating system for IoT devices. We conducted experiments to test how well the model corresponds to simulated data and cross-validated the model against observations from file system operations. The results revealed that an approximated model based on the known workings of the file system underestimated the volatility. While many sources describe volatility qualitatively, there is little research on quantitative volatility, and this paper is a stepping stone toward a quantitative approach to evidence volatility.
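To make the idea of a quantitative volatility model concrete, the sketch below illustrates one simple (hypothetical) formulation: data survival modeled as exponential decay, with evidence sources ranked so the most volatile is acquired first. The function names, decay rates, and the exponential form are illustrative assumptions for this sketch, not the model developed in the paper.

```python
import math


def survival_probability(t: float, decay_rate: float) -> float:
    """Probability that a piece of data still exists after time t under a
    simple exponential-decay volatility model (a hypothetical illustration,
    not the paper's model). decay_rate is the expected number of destructive
    events (e.g. overwrites) per unit time."""
    return math.exp(-decay_rate * t)


def prioritize(sources: dict[str, float], t: float) -> list[tuple[str, float]]:
    """Rank evidence sources (name -> decay rate) so the most volatile
    source, i.e. the one with the lowest survival probability at time t,
    comes first in the acquisition order."""
    ranked = sorted(sources.items(),
                    key=lambda kv: survival_probability(t, kv[1]))
    return [(name, survival_probability(t, rate)) for name, rate in ranked]


# Example: RAM contents decay far faster than flash-resident files,
# so RAM is acquired first (decay rates here are made up for illustration).
order = prioritize({"ram": 10.0, "flash": 0.01}, t=1.0)
```

Under such a model, the acquisition order follows directly from the estimated decay rates; the paper's contribution is estimating and validating such rates for a real file system rather than assuming them.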