These are some rough Unix tools for recovering deleted files. The Coroner's Toolkit at http://www.porcupine.org is much more thorough but has a Unix bias; these tools are filesystem-agnostic and have proved somewhat useful on Windows XP.

Collect the lot as fgrab.tar.

README
======

DD
A script that copies a disk image (perhaps 30 GB)
into manageable chunks of about 0.5 GB each so that
they can be studied one by one.  The best chunk size
may vary, but I found 0.5 GB worked OK on a PC with
640 MB of RAM.

There is a small overlap between each output chunk
so that a small file cannot escape detection by being
broken over a boundary.
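
The script itself is in the tarball; purely as an illustration,
something along these lines does the same job.  The device path,
chunk size, overlap and output names below are placeholders rather
than the exact values DD uses, and bs=1M assumes GNU dd.

    # Sketch: carve /dev/sdb into ~0.5 GB chunks, advancing 1 MB less
    # than the chunk size so nothing can hide across a boundary.
    chunk=512      # chunk size in MB
    step=511       # advance = chunk minus 1 MB of overlap
    i=0
    while :
    do
       out=`printf '%04d.img' $i`
       dd if=/dev/sdb of=$out bs=1M count=$chunk skip=`expr $i \* $step` 2>/dev/null
       test -s $out || { rm -f $out; break; }   # empty output: off the end of the disk
       i=`expr $i + 1`
    done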


scan.c
This takes a filename on the command line and writes a series of
output files according to what it appears to have found.  Each of
these files then wants some further examination - even if that is
only finding the duplicated ones and removing all but one copy
(one way of doing that is sketched after the example below).
Example:
    for i in  0*.img
    do
       scan $i
    done
Many output files may be produced, with names that show which
image each extract came from and whereabouts in that image it
was found.
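
One quick way to weed out duplicates is to hash the extracts and
keep a single copy of each.  This is only a sketch: it assumes GNU
md5sum and xargs, filenames without spaces, and "0*.img.*" is just
a guess at the names scan produces.
    md5sum 0*.img.* | sort |
    awk 'seen[$1]++ { print $2 }' |
    xargs -r rm --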


allscan.c
Makefile
mk_text_finder.plx
These look for "interesting text" (defined in mk_text_finder.plx).
This could include names of people, organisations or subjects
believed to be mentioned in the data you are hoping to recover.

allscan copes with text that contains alternating NUL characters,
as in 16-bit Unicode (UTF-16) text.  When you only want to get the
text out (rather than an exact copy of a word-processed file, for
instance), commands like
tr -d '\000' < file > file2
should help.
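
The same idea can be approximated by hand with standard tools for a
quick check on a single term; "somename" below is only a placeholder
for one of the terms mk_text_finder.plx would list:
tr -d '\000' < 0001.img | grep -a -i -c somename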

Examples:
allscan          filename    > text_file_maybe_large
allscan    -q    filename     # no output, just a return code
allscan    -qv   filename     # shows name of file if interesting
for i in  *.*
do
   allscan  -q  $i  ||  mv -i $i ../boring
done
find . -type f -size +4000000c -exec ~/src/allscan -qv {} \;
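
In these loops allscan appears to exit 0 when it finds something
interesting, so the || branch moves the rejected files into ../boring;
the find command runs the same check over every file bigger than
about 4 MB.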