                             ==Phrack Inc.==

               Volume 0x0b, Issue 0x3b, Phile #0x06 of 0x12

|=--------------=[ Defeating Forensic Analysis on Unix ]=----------------=|
|=-----------------------------------------------------------------------=|
|=-------------=[ the grugq <grugq@anti-forensics.com> ]=----------------=|
|=--------------------[ www.anti-forensics.com ]=------------------------=|


--[ Contents

  1 - Introduction
    1.1 - Generic Unix File Systems
    1.2 - Forensics
  2 - Anti-Forensics
  3 - Runefs
    3.1 - Creating hidden space
    3.2 - Using hidden space
    3.3 - TCT unclear on ext2fs specifications
  4 - The Defiler's Toolkit
    4.1 - Necrofile
      4.1.1 - TCT locates deleted inodes
      4.1.2 - Necrofile locates and eradicates deleted inodes
      4.1.3 - TCT unable to locate non-existent data
    4.2 - Klismafile
      4.2.1 - fls listing deleted directory entries
      4.2.2 - Klismafile cleaning deleted directory entries
      4.2.3 - fls unable to find non-existent data
  5 - Conclusion
  6 - Greets
  7 - References
  8 - Appendix
    8.1 - The Ext2fs
    8.2 - runefs.tar.gz (uuencoded)
    8.3 - tdt.tar.gz (uuencoded)


--[ 1 - Introduction

Anti-forensics: the removal, or hiding, of evidence in an attempt to mitigate the effectiveness of a forensics investigation.

Digital forensic analysis is rapidly becoming an integral part of incident response, capitalising on a steady increase in the number of trained forensic investigators and forensic toolkits available. Strangely, despite the increased interest in, and focus on, forensics within the information security industry, there has been very little discussion of anti-forensics. In an attempt to remedy the lack of coverage in the literature, this article presents anti-forensic strategies to defeat digital forensic analysis on Unix file systems. Included are example implementations of these strategies targeting the most common Linux file system -- ext2fs.

To facilitate a useful discussion of anti-forensic strategies it is important that the reader possess certain background information.
In particular, the understanding of anti-forensic file system sanitization requires the comprehension of basic Unix file system organisation. And, of course, the understanding of any anti-forensic theory demands at least a rudimentary grasp of digital forensic methodology and practice. This article provides a limited introduction to both Unix file systems and digital forensics. Space constraints, however, limit the coverage available to these topics, and the interested reader is directed to the references, which discuss them in greater depth.

----[ 1.1 - Generic Unix File Systems

This section describes basic Unix file system theory (not focusing on any specific implementation), discussing the meta-data structures used to organise the file system internally.

Files within the Unix OS are continuous streams of bytes of arbitrary length and are the main abstraction used for I/O. This article will focus on files in the more general sense of data stored on disk and organised by a file system. The data on a disk comprising a Unix file system is commonly divided into two groups: information about the files, and the data within the files. The organizational and accounting information (normally visible only to the kernel) is called "meta-data", and includes the super-block, inodes and directory files. The content stored in the files is simply called "data".

To create the abstraction of a file, the kernel has to transparently translate data stored across one or more sectors on a hard disk into a seamless stream of bytes. The file system is used to keep track of which sectors, and in what order, should be grouped together into a file. Additionally, these sector groups need to be kept separate and individually distinguishable to the operating system. For this reason there are several types of meta-data, each responsible for accomplishing one of these tasks.
The content of a file is stored in data blocks, which are logical clusters of hard disk sectors. The higher the number of sectors per data block, the faster the disk I/O, improving the file system's performance. At the same time, the larger the data blocks, the more disk space is wasted on files which don't end on block boundaries. Modern file systems typically compromise with a block size of 4096 or 8192 bytes, and combat the wastage with "fragments" (something not dealt with here). The portion of the disk dedicated to the data blocks is organised as an array, and blocks are referred to by their offsets within this array. The state of a given block, i.e. free vs. allocated, is stored in a bitmap called the "block bitmap".

Data blocks are clustered and organised into files by inodes. Inodes are the meta-data structures which represent the user visible files; one for each unique file. Each inode contains an array of block pointers (that is, indexes into the data block array) and various other information about the file, including: the UID; GID; size; permissions; modification/access/creation (MAC) times, and some other data. The limited amount of space available to inodes means that the block pointer array can only contain a small number of pointers. To allow for larger files, inodes employ "indirect blocks". An indirect block acts as an extension to the block array, storing additional pointers. Doubly and trebly indirect blocks contain block pointers to further indirect blocks and doubly indirect blocks, respectively. Inodes are stored in an array called the inode table, and are referred to by their 0-based indexes within this table. The state of an inode, i.e. free vs. allocated, is stored in a bitmap called, imaginatively, the "inode bitmap".

Files, that is, inodes, are associated with file names by special structures called directory entries, stored within directory files.
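As an aside, the indirect-block scheme described above is what bounds maximum file size. A sketch of the arithmetic, using the real ext2 layout of twelve direct pointers plus one single, one double and one triple indirect block (block pointers are 4 bytes; the function names here are ours, not from any toolkit):

```c
#include <stdint.h>

/* Total data blocks addressable by one ext2-style inode:
 * 12 direct pointers, then one single, double and triple
 * indirect block, each holding (block_size / 4) pointers. */
uint64_t max_file_blocks(uint64_t block_size)
{
    uint64_t ppb = block_size / 4;      /* pointers per block */
    return 12                           /* direct             */
         + ppb                          /* single indirect    */
         + ppb * ppb                    /* double indirect    */
         + ppb * ppb * ppb;             /* triple indirect    */
}

uint64_t max_file_size(uint64_t block_size)
{
    return max_file_blocks(block_size) * block_size;
}
```

With 1024-byte blocks this works out to 12 + 256 + 256^2 + 256^3 blocks, i.e. roughly 16 GB.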
These structures are stored contiguously inside the directory file. Directory entries have this basic structure:

    struct dirent {
        int     inode;
        short   rec_size;
        short   name_len;
        char    file_name[NAME_LEN];
    };

The 'inode' element of the dirent contains the inode number which is linked with the file name stored in 'file_name'. To save space, the actual length of the file name is recorded in 'name_len', and the remaining space in the file_name array is used by the next directory entry structure. The size of a dirent is usually rounded up to the closest power of two, and this size is stored in 'rec_size'. When a file name/inode link is removed, the inode value is set to 0 and the rec_size of the preceding dirent is extended to encompass the deleted dirent. This has the effect of storing the names of deleted files inside directory files.

Every time a file name is linked with an inode, an internal counter within the inode is incremented. Likewise, every time a link is removed, this counter is decremented. When this counter reaches 0 there are no references to the inode from within the directory structure; the file is deleted. Files which have been deleted can safely have their resources, the data blocks and the inode itself, freed. This is accomplished by marking the appropriate bitmaps.

Directory files themselves are logically organised as a tree starting from a root directory. This root directory file is associated with a known inode (inode 2) so that the kernel can locate it, and mount the file system. To mount a file system the kernel needs to know the size and locations of the meta-data. The first piece of meta-data, the super-block, is stored at a known location. The super-block contains information such as the number of inodes and blocks, the size of a block, and a great deal more. Based on the data within the super-block, the kernel is able to calculate the locations and sizes of the inode table and the data portion of the disk.
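The unlink scheme described above (zero the inode field, fold the record length into the previous entry) can be sketched as follows. The struct mirrors the dirent in the text, with a fixed-size name array as a simplification -- real on-disk entries are variable length, and the kernel calls rec_size 'rec_len':

```c
/* Simplified in-memory model of two adjacent directory entries. */
struct dirent_sk {
    unsigned int   inode;      /* 0 marks a deleted entry                */
    unsigned short rec_size;   /* bytes from this entry to the next one  */
    unsigned short name_len;
    char           file_name[255];
};

/* Remove a name/inode link the way the kernel does: zero the inode
 * number and extend the preceding entry over the dead record.  Note
 * that file_name is never touched -- the deleted name stays inside
 * the directory file, which is exactly the residue forensic tools
 * (and klismafile) go looking for. */
void dirent_delete(struct dirent_sk *prev, struct dirent_sk *victim)
{
    if (prev)
        prev->rec_size += victim->rec_size;
    victim->inode = 0;
}
```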
For performance reasons, no modern file system actually has just one inode table and one block array. Rather, inodes and blocks are clustered together in groups spread out across the disk. These groups usually contain private bitmaps for their inodes and blocks, as well as copies of the super-block to aid recovery in case of catastrophic data loss.

Thus concludes the whirlwind tour of a generic Unix file system. A specific implementation is described in Appendix A: The Second Extended File System. The next section provides an introduction to digital file system forensics.

----[ 1.2 - Forensics

Digital forensic analysis of a file system is conducted to gather evidence for some purpose. This purpose is largely irrelevant to our discussion, because anti-forensic theory shouldn't rely on the intended use of the evidence; it should focus on preventing the evidence from being gathered. That said, ignorance of the reasons behind an analysis provides no benefit, so we will examine the two primary motivators behind an investigation.

The purpose of an incident response analysis of a file system is either casual or legal. These terms are not the standard means of describing motives, and because there are significant differences between the two, some explanation is in order. Legal investigations are conducted to aid a criminal prosecution. The strict requirements on evidence submitted to a court of law make subversion of a legal forensic investigation fairly easy. For instance, merely overwriting the file system with random data is sufficient to demonstrate that none of the data gathered is reliable enough for submission as evidence. Casual investigations do not have as their goal the criminal prosecution of an individual. The investigation is executed out of interest on the part of the forensic analyst, and so the techniques, tools and methodology used are more liberally inclined.
Subverting a casual forensic analysis requires more effort and skill, because there are no strict third party requirements regarding the quality or quantity of evidence.

Regardless of the intent of the forensic investigation, the steps followed are essentially the same:

    * the file system is captured;
    * the information contained on it is gathered;
    * this data is parsed into evidence;
    * this evidence is examined.

This evidence is both file content (data) and information about the file(s) (meta-data). Based on the evidence retrieved from the file system, the investigator will attempt to:

    * gather information about the individual(s) involved   [who]
    * determine the exact nature of events that transpired  [what]
    * construct a timeline of events                        [when]
    * discover what tools or exploits were used             [how]

As an example of how the forensic process works, consider the recovery of a deleted file. A file is deleted on a Unix file system by decrementing the inode's internal link count to 0. This is accomplished by removing all directory entry file name/inode pairs. When the inode is deleted, the kernel will mark its resources as available for use by other files -- and that is all. The inode will still contain all of the data about the file which it referenced, and the data blocks it points to will still contain file content. This remains the case until they have been reallocated and reused, overwriting this residual data.

Given this dismal state of affairs, recovering a deleted file is trivial for the forensic analyst. Simply searching for inodes which have some data (i.e. are not virgin inodes) but have a link count of 0 reveals all deleted inodes. The block pointers can then be followed and the file contents (hopefully) recovered. Even without the file content, a forensic analyst can learn much about what happened on a file system from only the meta-data present in the directory entries and inodes.
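The search just described -- an inode with a link count of 0 that is not virgin -- is mechanical. A sketch, with the inode reduced to the three fields the test needs (field names are ours; the real struct appears in Appendix A):

```c
/* Minimal slice of an inode for the deleted-inode test. */
struct inode_lite {
    unsigned short links_count;  /* 0 once all names are unlinked   */
    unsigned int   dtime;        /* deletion time, 0 on virgin inodes */
    unsigned int   block0;       /* first block pointer              */
};

/* Deleted: no links left, but the inode has clearly been used
 * (a deletion time or a block pointer survives). */
int inode_is_deleted(const struct inode_lite *ino)
{
    return ino->links_count == 0 && (ino->dtime != 0 || ino->block0 != 0);
}

/* Walk an inode table and count recoverable (deleted) inodes --
 * essentially what TCT's ils does, and what necrofile targets. */
int count_deleted(const struct inode_lite *table, int n)
{
    int i, hits = 0;
    for (i = 0; i < n; i++)
        if (inode_is_deleted(&table[i]))
            hits++;
    return hits;
}
```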
This meta-data is not accessible through the kernel system call interface, and thus is not alterable by normal system tools (this is not strictly true, but is accurate enough from a forensics POV). Unfortunately for the analyst, relying on this meta-data is extremely difficult, if not impossible, when faced with a hostile anti-forensics agent. The digital forensics industry has had an easy time of late due to the near absence of anti-forensics information and tools, but that is (obviously) about to change.

--[ 2 - Anti-Forensics

The previous section outlined forensic analysis and hinted at means of subverting the forensic process; this section expands on anti-forensic theory. Anti-forensics is the attempt to mitigate the quantity and quality of information that an investigator can examine. At each step of the analysis, the forensic process is vulnerable to attack and subversion. This article focuses primarily on subverting the data gathering phase of a digital forensic investigation, with two mechanisms being detailed here: the first is data destruction, and the second data hiding. Some mention will also be given to exploiting vulnerabilities throughout the analytic process.

The digital forensic process is extremely vulnerable to subversion when raw data (e.g. a bit copy of a file system) is converted into evidence (e.g. emails). This conversion process is vulnerable at almost every step, usually because of an abstraction that is performed on the data. When an abstraction layer is encountered, details are lost, and details *are* data. Abstractions remove data, and this creates gaps in the evidence which can be exploited. But abstractions are not the only source of error during a forensic analysis; the tools used are themselves frequently flawed and imperfect. Bugs in the implementations of forensic tools provide even greater opportunities for exploitation by anti-forensic agents.
There is little that a remote anti-forensics agent can do to prevent the file system from being captured, so the focus here is on exploiting the next phase of a forensic investigation -- preventing the evidence from being gathered off the file system. Halting data acquisition can be accomplished by either of two primary mechanisms: data destruction and data hiding. Of the two, data destruction is the more reliable, leaving nothing behind for the investigator to analyse. Data destruction provides a means of securely removing all trace of the existence of evidence, effectively covering tracks. Data hiding, on the other hand, is useful only so long as the analyst doesn't know where to look; long term integrity of the data storage area cannot be guaranteed. For this reason, data hiding should be used in combination with attacks against the parsing phase (e.g. proprietary file formats) and against the examination phase (e.g. encryption). Data hiding is most useful in the case of essential data which must be stored for some length of time (e.g. photographs of young women in artistic poses).

The two toolkits which accompany this article provide demonstration implementations of both data destruction and data hiding methodologies. They will be used to provide examples when examining data destruction and hiding in greater detail below. The first anti-forensic methodology examined in depth is data hiding.

--[ 3 - Runefs

The most common toolkit for Unix forensic file system analysis is "The Coroner's Toolkit" [1] (TCT) developed by Dan Farmer and Wietse Venema. Despite being relied on for years as the mainstay of the Unix digital forensic analyst, and providing the basis for several enhancements [2][3], it remains as flawed today as when it was first released. A major file system implementation bug allows an attacker to store arbitrary amounts of data in a location which the TCT tools cannot examine.
The TCT implementations of the Berkeley Fast File System (FFS, sometimes called UFS) and the Second Extended File System (ext2fs) fail to correctly reproduce the file system specifications. TCT makes the incorrect assumption that no data blocks can be allocated to an inode before the root inode, failing to take into account the bad blocks inode. Historically, the bad blocks inode was used to reference data blocks occupying bad sectors of the hard disk, preventing these blocks from being used by live files. The FFS has deprecated the bad blocks inode, preventing the successful exploitation of this bug, but it is still in use on ext2fs.

Successfully exploiting a file system data hiding attack means, for an anti-forensics agent, manipulating the file system without altering it outside of the specifications implemented in the file system checker: fsck. (It is interesting to note, though, that no forensic analysis methodology uses fsck to ensure that the file system has not been radically altered.) The ext2fs fsck still uses the bad blocks inode for bad block referencing, and so it allows any number of blocks to be allocated to that inode. Unfortunately, the TCT file system code does not recognise the bad blocks inode as within the scope of an investigation.

The bad blocks inode bug is easy to spot, and should be trivial to correct. Scattered throughout the file system code of the TCT package (and the related toolkit TASK) is the following erroneous check:

    /*
     * Sanity check.
     */
    if (inum < EXT2_ROOT_INO || inum > ext2fs->fs.s_inodes_count)
        error("invalid inode number: %lu", (ULONG) inum);

The first inode that can allocate block resources on an ext2 file system is in fact the bad blocks inode (inode 1) -- *not* the root inode (inode 2). Because of this mis-implementation of the ext2fs, it is possible to store data on blocks allocated to the bad blocks inode and have it hidden from an analyst using TCT or TASK.
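The trivial correction is to accept the bad blocks inode as the lower bound. A sketch of the fixed check, pulled out into a predicate for clarity (EXT2_BAD_INO is 1 and EXT2_ROOT_INO is 2 in the ext2 headers; the function name is ours):

```c
#define EXT2_BAD_INO  1   /* bad blocks inode  */
#define EXT2_ROOT_INO 2   /* root directory inode */

/* Corrected sanity check: the first inode that can hold block
 * resources is the bad blocks inode, not the root inode, so inode 1
 * must fall within the scope of an investigation. */
int inum_valid(unsigned long inum, unsigned long s_inodes_count)
{
    return inum >= EXT2_BAD_INO && inum <= s_inodes_count;
}
```

With TCT's original test, `icat /dev/hda6 1` is rejected outright; with this bound, the bad blocks inode is examined like any other.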
To illustrate the severity of this attack, the following examples demonstrate using the accompanying runefs toolkit to: create hidden storage space; copy data to and from this area, and show how this area remains secure from a forensic analyst.

----[ 3.1 - Example: Creating hidden space

# df -k /dev/hda6
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda6              1011928        20    960504   1% /mnt
# ./bin/mkrune -v /dev/hda6
+++ bb_blk +++
bb_blk->start = 33275
bb_blk->end = 65535
bb_blk->group = 1
bb_blk->size = 32261
+++ rune size: 126M
# df -k /dev/hda6
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda6              1011928    129196    831328  14% /mnt
# e2fsck -f /dev/hda6
e2fsck 1.26 (3-Feb-2002)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/hda6: 11/128768 files (0.0% non-contiguous), 36349/257032 blocks
#

This first example demonstrates the allocation of 126 megabytes of disk space for the hidden storage area, showing how this loss of available disk space is registered by the kernel. It is also evident that the hidden storage area does not break the specifications of the ext2 file system -- fsck has no complaints.

----[ 3.2 - Example: Using the hidden space

# cat readme.tools | ./bin/runewr /dev/hda6
# ./bin/runerd /dev/hda6 > f
# diff f readme.tools
#

This second example shows how data can be inserted into and extracted from the hidden storage space without any data loss. While this example does not comprehensively explore the uses of a hidden data storage area, it is sufficient to demonstrate how data can be introduced to and extracted from the runefs.

----[ 3.3 - Example: TCT incorrect ext2fs implementation

# ./icat /dev/hda6 1
/icat: invalid inode number: 1
#

This last example illustrates how the forensic analyst is incapable of finding this storage area with the TCT tools.
Clearly, there are many problems raised when the file system being examined has not been correctly implemented in the tools used.

Interesting as these examples are, there are problems with this runefs. This implementation is crude and old (it was written in November 2000), and it does not natively support encryption. The current version of runefs is a dynamically resizable file system which supports a full directory structure, is fully encrypted, and can grow up to four gigabytes in size (it is private, and will not be made available to the public). The final problem with this runefs in particular, and the private implementation as well, is that the bad blocks data hiding technique is now public knowledge (quite obviously). This highlights the problem with data hiding techniques: they become outdated. For this reason data hiding should always be used in conjunction with at least one other anti-forensic technology, such as encryption.

There are more ways of securely storing data on the file system far from the prying eyes of the forensic analyst, and a research paper is due shortly that will detail many of them. However, this is the last this article will say on data hiding; now the focus shifts to data destruction.

--[ 4 - The Defiler's Toolkit

The file system (supposedly) contains a record of file I/O activity on a computer, and forensic analysts attempt to extract this record for examination. Aside from forensic tools incorrectly reporting on the data, these tools are useless if the data is not there to be reported on. This section presents methodologies for thoroughly eradicating evidence on a file system. These methodologies have been implemented in The Defiler's Toolkit (TDT), which accompanies this article.

The major vulnerability in data acquisition is that the evidence being gathered must still exist when the forensic analyst begins his investigation.
Non-existent data, obviously, cannot be gathered, and without this crucial information the forensic analyst is incapable of progressing the investigation. File system sanitization is the anti-forensic strategy of removing this data (evidence), and doing so in such a way as to leave no trace that evidence ever existed (i.e. leave no "evidence of erasure"). The Defiler's Toolkit provides tools to remove data from the file system with surgical precision. By selectively eradicating the data which might become evidence, the anti-forensics agent is able to subvert the entire forensic process before it has even begun.

Within a Unix file system, all of the following places will contain traces of the existence of a file -- they contain evidence:

    * inodes
    * directory entries
    * data blocks

Unfortunately, most secure deletion tools will only remove evidence from data blocks, leaving inodes and directory entries untouched. The Defiler's Toolkit performs complete file system sanitization: it consists of two complementary tools, necrofile and klismafile, which combined securely eliminate all trace of a file's existence. Their design goals and implementation are described here.

----[ 4.1 - Necrofile

Necrofile is a sophisticated dirty inode selection and eradication tool. It can be used to list all dirty inodes meeting certain deletion time criteria, and then scrub those inodes clean. These clean inodes provide no evidence for the forensic analyst investigating the file system on that disk. Necrofile has some built in capabilities to securely delete all content on the data blocks referenced by the dirty inode. However, this is not the ideal use of the tool, because of the race conditions which afflict all tools handling file system resources without the blessing of the kernel.
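The selection-and-scrub step described above boils down to two small operations: test a dirty inode's deletion time against a window, and write the inode back as all zeroes. A sketch (the struct and function names are ours; necrofile's actual source may differ):

```c
#include <string.h>

/* Stand-in for the raw on-disk ext2 inode (128 bytes in classic ext2). */
struct ext2_inode_raw {
    unsigned char bytes[128];
};

/* Does the deletion time fall inside the window the invoker asked for? */
int in_time_window(unsigned int dtime, unsigned int lo, unsigned int hi)
{
    return dtime >= lo && dtime <= hi;
}

/* Scrub: write the inode back as a virgin inode, i.e. all fields
 * zeroed, so a scan for "used but unlinked" inodes finds nothing. */
void scrub_inode(struct ext2_inode_raw *ino)
{
    memset(ino, 0, sizeof *ino);
}
```

Scrubbing only inodes inside a narrow time window is what removes the evidence of erasure: every other deleted inode is left exactly as the kernel left it.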
When necrofile is invoked, it is supplied with a file system to search, and a number of criteria used to determine whether a given dirty inode should be scrubbed clean. As necrofile iterates through the inode table, it checks the state of each inode, with dirty inodes being given extra attention. All dirty inodes that meet the time criteria are written back to the inode table as virgin inodes, and the iteration continues.

------[ 4.1.1 - Example: TCT locates deleted inodes

# ./ils /dev/hda6
class|host|device|start_time
ils|XXX|/dev/hda6|1026771982
st_ino|st_alloc|st_uid|st_gid|st_mtime|st_atime|st_ctime|st_dtime|st_mode|\
st_nlink|st_size|st_block0|st_block1
12|f|0|0|1026771841|1026771796|1026771958|1026771958|100644|0|86|545|0
13|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|546|0
14|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|547|0
15|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|548|0
16|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|549|0
17|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|550|0
18|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|551|0
19|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|552|0
20|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|553|0
21|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|554|0
22|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|555|0
23|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|556|0
24|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|557|0
25|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|558|0
26|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|559|0
27|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|560|0
28|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|561|0
29|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|562|0
30|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|563|0
31|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|564|0
32|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|565|0
33|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|566|0
34|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|567|0
35|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|568|0
36|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|569|0
37|f|0|0|1026771842|1026771796|1026771958|1026771958|100644|0|86|570|0
#

------[ 4.1.2 - Example: necrofile locates and eradicates deleted inodes

# ./necrofile -v -v -v -v /dev/hda6
Scrubbing device: /dev/hda6
12 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
13 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
14 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
15 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
16 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
17 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
18 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
19 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
20 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
21 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
22 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
23 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
24 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
25 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
26 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
27 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
28 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
29 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
30 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
31 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
32 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
33 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
34 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
35 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
36 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
37 = m: 0x3d334d4d a: 0x3d334d4d c: 0x3d334d4f d: 0x3d334d4f
#

------[ 4.1.3 - Example: TCT unable to locate non-existent data

# ./ils /dev/hda6
class|host|device|start_time
ils|XXX|/dev/hda6|1026772140
st_ino|st_alloc|st_uid|st_gid|st_mtime|st_atime|st_ctime|st_dtime|st_mode|\
st_nlink|st_size|st_block0|st_block1
#

Little explanation is necessary with these examples. The "ils" tool is part of TCT and lists deleted inodes for potential recovery. The necrofile tool is run here in its most verbose form, as it locates and overwrites the same inodes found by ils. Necrofile is more effective, however, when used to target inodes deleted during specific time slices, leaving all other deleted inodes untouched. This tactic eliminates evidence of erasure, i.e. indications that evidence has been removed. After the deleted inodes have been converted into virgin inodes, ils is justifiably incapable of finding them.

After removing the inodes which contain valuable forensic data, the other location which needs to be sanitized is the directory entries.

----[ 4.2 - Klismafile

Klismafile provides a means of securely overwriting deleted directory entries. When a file name/inode link is terminated, the content of the directory entry is not overwritten; it is simply included in the slack space of the preceding entry. Klismafile will search a directory file for these "deleted" entries and overwrite them. Regular expressions can be used to limit which directory entries are removed.

When klismafile is invoked, it is provided with a directory file to search, and can optionally recurse through all other directory files it encounters. Klismafile will iterate through the directory entries, searching for dirents which have been deleted.
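The scan over one directory block can be sketched as follows: walk the live entries, and wherever rec_size overshoots the space the entry actually needs, treat the slack as deleted-dirent residue and zero it. This is a sketch in the spirit of klismafile, not its actual source; it assumes the on-disk little-endian ext2 layout (4-byte inode, 2-byte rec_size at offset 4, name_len at offset 6, names from offset 8, sizes rounded to 4 bytes):

```c
#include <string.h>

/* Space a live dirent actually needs: 8-byte header plus the name,
 * rounded up to a 4-byte boundary as ext2 does. */
static unsigned short dirent_needed(unsigned short name_len)
{
    return (8 + name_len + 3) & ~3;
}

/* Zero all slack space in one directory block; returns the number of
 * slack regions scrubbed.  Reads assume a little-endian host, which
 * matches the ext2 on-disk byte order. */
int scrub_dir_block(unsigned char *blk, unsigned int blk_size)
{
    unsigned int off = 0;
    int scrubbed = 0;

    while (off + 8 <= blk_size) {
        unsigned short rec  = *(unsigned short *)(blk + off + 4);
        unsigned short nlen = *(unsigned short *)(blk + off + 6);
        unsigned short used = dirent_needed(nlen);

        if (rec == 0)
            break;                        /* corrupt block: stop */
        if (rec > used) {                 /* slack: dead names live here */
            memset(blk + off + used, 0, rec - used);
            scrubbed++;
        }
        off += rec;
    }
    return scrubbed;
}
```

A real tool would additionally match the dead names against the invoker's regular expressions before zeroing, as klismafile does.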
When it encounters a deleted dirent, klismafile will compare the 'file_name' against any regular expressions provided by the invoker (the default is '*'). If there is a match, klismafile will overwrite the dirent with zeroes.

Klismafile is not a completely secure solution. A skilled forensic analyst will note that the preceding directory entry's rec_len field is larger than it needs to be, and could infer that a tool such as klismafile has artificially manipulated the directory file's contents. Currently there are no tools which perform this check; however, that will no doubt change soon.

------[ 4.2.1 - Example: fls listing deleted directory entries

# ./fls -d /dev/hda6 2
? * 0: a
? * 0: b
? * 0: c
? * 0: d
? * 0: e
? * 0: f
? * 0: g
? * 0: h
? * 0: i
? * 0: j
? * 0: k
? * 0: l
? * 0: m
? * 0: n
? * 0: o
? * 0: p
? * 0: q
? * 0: r
? * 0: s
? * 0: t
? * 0: u
? * 0: v
? * 0: w
? * 0: x
? * 0: y
? * 0: z
#

------[ 4.2.2 - Example: Klismafile cleaning deleted directory entries

# ./klismafile -v /mnt
Scrubbing device: /dev/hda6
cleansing /
-> a
-> b
-> c
-> d
-> e
-> f
-> g
-> h
-> i
-> j
-> k
-> l
-> m
-> n
-> o
-> p
-> q
-> r
-> s
-> t
-> u
-> v
-> w
-> x
-> y
-> z
Total files found: 29
Directories checked: 1
Dirents removed : 26
#

------[ 4.2.3 - Example: fls unable to find non-existent data

# ./fls -d /dev/hda6 2
#

These examples speak for themselves. The 'fls' utility is part of the TCT-UTILS package, and is intended to examine directory files. In this case, it is listing all deleted directory entries in the root directory of the file system. Klismafile is then run in verbose mode, listing and overwriting each deleted directory entry it encounters. Afterwards, fls is incapable of noting that anything is amiss within the directory file.

Note: the Linux 2.4 kernel caches directories in kernel memory, rather than immediately updating the file system on disk.
Because of this, the directory file that klismafile examines and attempts to clean might not be current, or the changes made might be overwritten by the kernel. Usually, performing disk activity in another directory will flush the cache, allowing klismafile to work optimally.

The Defiler's Toolkit has been written as a proof of concept utility to demonstrate the inherent flaws in all current digital forensic methodologies and techniques. The toolkit successfully accomplishes the goals for which it was designed, proving that forensic analysis after an intrusion is highly suspect without significant prior preparation of the targeted computers.

--[ 5 - Conclusion

Digital forensic tools are buggy, error prone and inherently flawed. Despite these shortcomings, they are being relied on more and more frequently to investigate computer break-ins. Given that this fundamentally broken software plays such a key role in incident response, it is somewhat surprising that no-one has documented anti-forensic techniques, nor sought to develop counter-measures (anti-anti-forensics). Some suggestions regarding anti-anti-forensic methodology are presented here, to provide the security community a foothold in the struggle against anti-forensics.

The Defiler's Toolkit directly modifies the file system to eliminate evidence inserted by the operating system during run time. The way to defeat it is to not rely on the local file system as the only record of disk operations. For instance, make a duplicate record of the file system modifications and store this record in a secure place. The simplest solution would be to have all inode updates written to a log file located on a separate box. A trivial addition to the kernel VFS layer, plus a syslog server, would be more than adequate for a first generation anti-anti-forensics tool. The only means of effectively counteracting an anti-forensics attack is to prepare for such an eventuality prior to an incident.
However, without the tools to make such preparation effective, the
computing public is left vulnerable to attackers whose anonymity is
assured. This article is intended as a goad to prod the security
industry into developing effective tools. Hopefully the next generation
of digital forensic toolkits will give the defenders something reliable
with which to effectively combat the attackers.

--[ 6 - Greets

Shout outs to my homies!

East Side: stealth, scut, silvio, skyper, smiler, halvar, acpizer, gera
West Side: blaadd, pug, srk, phuggins, fooboo, will, joe
Up Town:   mammon_, a_p, _dose
Down Town: Grendel, PhD.

--[ 7 - References

[1] Dan Farmer, Wietse Venema "TCT"
    www.fish.com/security
[2] Brian Carrier "TCTUTILS"
    www.cerias.purdue.edu/homes/carrier/forensics
[3] Brian Carrier "TASK"
    www.cerias.purdue.edu/homes/carrier/forensics
[4] Theodore Ts'o "e2fsprogs"
    e2fsprogs.sourceforge.net

--[ 8 - APPENDIX A

----[ 8.1 - The Ext2fs

In the honored Phrack tradition of commented header files, here is a
guide to the second extended file system. The second extended file
system (ext2fs) is the standard file system on the Linux OS. This paper
will provide an introduction to the file system. Reading this document
is no substitute for reading the src, both in the kernel and in the
ext2fs library.

What follows is a bottom-up description of the ext2 file system,
starting with blocks and inodes and concluding, ultimately, with
directories.

. o O ( B L O C K S ) O o .

The basic component of the file system is the data block, used to store
file content. Typically, the smallest addressable unit on a hard disk
is a sector (512 bytes), but this is too small for decent I/O rates. To
increase performance, multiple sectors are clustered together and
treated as one unit: the data block. The typical block size on an
ext2fs system is 4096 bytes; however, it can be 2048 bytes or even as
small as 1024 (8, 4 and 2 sectors, respectively).

. o O ( I N O D E S ) O o .
The second core part of the file system, the inode, is the heart of the
Unix file system. It contains the meta-data about each file, including:
pointers to the data blocks, file permissions, size, owner, group, and
other vital pieces of information.

The format of an ext2 inode is as follows:

---------------------------------------------------------------------------
struct ext2_inode {
        __u16   i_mode;         /* File mode */
        __u16   i_uid;          /* Owner Uid */
        __u32   i_size;         /* Size in bytes */
        __u32   i_atime;        /* Access time */
        __u32   i_ctime;        /* Creation time */
        __u32   i_mtime;        /* Modification time */
        __u32   i_dtime;        /* Deletion Time */
        __u16   i_gid;          /* Group Id */
        __u16   i_links_count;  /* Links count */
        __u32   i_blocks;       /* Blocks count */
        __u32   i_flags;        /* File flags */
        union {
                struct {
                        __u32  l_i_reserved1;
                } linux1;
                struct {
                        __u32  h_i_translator;
                } hurd1;
                struct {
                        __u32  m_i_reserved1;
                } masix1;
        } osd1;                         /* OS dependent 1 */
        __u32   i_block[EXT2_N_BLOCKS]; /* Pointers to blocks */
        __u32   i_version;      /* File version (for NFS) */
        __u32   i_file_acl;     /* File ACL */
        __u32   i_dir_acl;      /* Directory ACL */
        __u32   i_faddr;        /* Fragment address */
        union {
                struct {
                        __u8    l_i_frag;       /* Fragment number */
                        __u8    l_i_fsize;      /* Fragment size */
                        __u16   i_pad1;
                        __u32   l_i_reserved2[2];
                } linux2;
                struct {
                        __u8    h_i_frag;       /* Fragment number */
                        __u8    h_i_fsize;      /* Fragment size */
                        __u16   h_i_mode_high;
                        __u16   h_i_uid_high;
                        __u16   h_i_gid_high;
                        __u32   h_i_author;
                } hurd2;
                struct {
                        __u8    m_i_frag;       /* Fragment number */
                        __u8    m_i_fsize;      /* Fragment size */
                        __u16   m_pad1;
                        __u32   m_i_reserved2[2];
                } masix2;
        } osd2;                         /* OS dependent 2 */
};
---------------------------------------------------------------------------

The two unions exist because the ext2fs is intended to be used on
several operating systems that provide slightly differing features in
their implementations. Aside from exceptional cases, the only elements
of the unions that matter are the Linux structs: linux1 and linux2.
These can simply be treated as padding, as their contents are ignored
in current implementations of ext2fs. The usage of the rest of the
inode's values is described below.