
Re: copying large files between filesystems

Andrew Scott wrote:
I'm currently on Fedora Core 2 with a 2.6.6 kernel. I'll have to try unarchiving to another filesystem. I'm running badblocks on the drive right now. I've freed up enough space on the drive to uncompress in place, but that failed with I/O errors. Then I tried the bzip2recover program, with absolutely horrible results: it creates over 2000 9K bzip2 files, one for each 9K block in the archive, on the same drive, which was so taxing that it caused the drive to cough and spit.

Gah! Immediately run "mount -o remount,ro mnt" (replace "mnt" with your mount point or device file).

You don't want to be writing to this drive *at all* if you're trying to recover data from it!

Any ideas how to do a really slow read from a drive that might prove more accurate (less taxing) on the hardware? I've tried dd and am now thinking about resorting to running strings on the device and piping it to another filesystem, but that will probably still have errors in the resulting file.

Yeah, run

hdparm -d0 /dev/drive

That will turn off DMA access to the drive, which will slow access to the drive, and your entire system in general, but that's what you want right now...

and then:

dd bs=1 if=your-file-to-recover of=file-on-a-different-drive

This will copy your file one byte at a time, creating more processing overhead, which will slow the copy down.
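An aside not in the original advice: GNU dd also accepts conv=noerror,sync, which keeps the copy going past read errors instead of aborting, and pads unreadable blocks with zeros so the offsets in the copy stay aligned with the source. A minimal sketch on a scratch file — the /tmp paths are placeholders; on the real drive, if= would point at the damaged file and of= at a file on a different disk:

```shell
# Demonstrated on a scratch file; all paths here are placeholders.
printf 'data worth salvaging' > /tmp/demo-src

# conv=noerror : continue after a read error instead of aborting
# conv=sync    : pad short/unreadable blocks with NULs, keeping offsets aligned
# bs=512       : one sector per read, so a bad read costs at most 512 bytes
dd if=/tmp/demo-src of=/tmp/demo-copy bs=512 conv=noerror,sync
```

Note that with conv=sync the output is padded up to a whole block, so the copy can come out slightly larger than the source.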

I don't know of any tools that rate limit file copying, except maybe rsync (its --bwlimit option throttles transfer speed), but I haven't tried it for this.

I emailed the guys at Namesys (reiserfs headquarters in Oakland, CA). They have a standing offer of "Ask any question for $25". I sent them $25 and asked them a question. Hans Reiser got back to me, as did another employee, both with good suggestions. They suspected the hardware immediately. They made one really keen suggestion: if the byte count of the copy (when copied to another filesystem) is identical to the original, but the md5sums differ, then run bindiff on the two files and use a binary editor to toggle the differing bits, with the goal of a matching md5sum. I imagine this will be the last thing

That's nice, but don't try that on the entire 2 GB file; split it up first...
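One way to do that splitting (my sketch, not from the thread): chunk the file with split and checksum each piece, so that after a re-copy you only have to compare chunk by chunk and the corruption is localized to one small piece. Scratch paths again:

```shell
# Scratch file standing in for the 2 GB archive; paths are illustrative.
dd if=/dev/zero of=/tmp/archive bs=1024 count=4 2>/dev/null

# Split into 1 KB pieces named /tmp/chunk_aa, /tmp/chunk_ab, ...
split -b 1024 /tmp/archive /tmp/chunk_

# Checksum each piece; a later "md5sum -c" flags exactly which chunk changed.
md5sum /tmp/chunk_* > /tmp/chunks.md5
md5sum -c /tmp/chunks.md5
```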

I try before sending the disk off for data recovery.
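For what it's worth, the bindiff step can also be approximated with plain cmp from diffutils: "cmp -l" prints one line per differing byte, giving the 1-based offset and the two octal values, which is exactly what you'd feed to a binary editor. A toy sketch with two stand-in files:

```shell
# Two small stand-ins for the original and the corrupted copy.
printf 'ABCDEF' > /tmp/orig
printf 'ABXDEF' > /tmp/copy

# One line per differing byte: offset, octal value in each file.
# Here byte 3 is 'C' (octal 103) in one file and 'X' (octal 130) in the other.
cmp -l /tmp/orig /tmp/copy || true   # cmp exits non-zero when files differ
```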

Anyway, thanks a lot for your time and thoughts. What a pain in the ass.

Yep, anyone wonder why people like RAID?

