Question: Recently I tried to dd from an unhealthy HDD to a file. I used dd if=/dev/sdb of=somefile bs=4096 conv=noerror,sync. My problem was that dd wasted a lot of time when it encountered a bad block. In my use case I would gladly pay with some data loss for a faster result.
Is there any way to make the error handling faster? Maybe a kernel tweak (telling the HDD to make less effort for reading a block)? Or another program?
Answer: First, regarding the software: try ddrescue instead of dd.
ddrescue has a switch to do only a limited number of retries. It can also use a logfile, so it records which blocks were bad. If you later feel like doing more retries, you can use the same logfile to run ddrescue again with different options (like more retries) and it will retry only the necessary blocks.
Example usage:
# ddrescue -n /dev/sda /dev/sdb rescue.log
# ddrescue -r1 /dev/sda /dev/sdb rescue.log
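Both passes share one logfile: on start-up ddrescue reads it and touches only the blocks not yet recovered. As a sketch, here is the same workflow using the device names from the question (the commands are only printed, not executed, since the devices are placeholders):

```shell
SRC=/dev/sdb     # the failing disk from the question (placeholder)
IMG=somefile     # destination image file
MAP=rescue.log   # logfile recording the state of every block

# First pass: copy the easy areas quickly, skip scraping (-n).
pass1="ddrescue -n $SRC $IMG $MAP"
# Later pass: retry only the blocks the logfile marks as bad (-r1).
pass2="ddrescue -r1 $SRC $IMG $MAP"

echo "$pass1"
echo "$pass2"
```

Because the logfile carries the full state, you can interrupt at any point and re-run with different options; ddrescue will not re-read blocks it has already recovered.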
From the ddrescue info-page:
-n, --no-scrape
    Skip the scraping phase. Avoids spending a lot of time
    trying to rescue the most difficult parts of the file.
-r, --retry-passes=
Here are some additional sources on using ddrescue:
- info ddrescue
- http://www.forensicswiki.org/wiki/Ddrescue
Edit
In case the HDD itself is taking too long on each bad read, you can try enabling a feature called TLER (Time Limited Error Recovery) or CCTL (Command Completion Time Limit). Not all HDDs support it, but where available it limits the error-recovery time on the drive itself. This approach can of course be combined with using ddrescue.
Linux has a tool called smartctl (in the smartmontools package).
To check the current setting (“disabled” means an unlimited time, which you do not want):
# smartctl -l scterc /dev/sda
To set it to a fixed value (the two numbers are the read and write limits in deciseconds, so 50,50 means 5.0 seconds; setting it to 0 disables TLER):
# smartctl -l scterc,50,50 /dev/sda
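Note that the two numbers passed to scterc are in deciseconds (units of 100 ms), which is easy to get wrong. A minimal sketch of the conversion (the device name is a placeholder and the command is only printed, not run):

```shell
secs=5                # desired recovery limit in whole seconds
ds=$((secs * 10))     # smartctl's scterc unit is deciseconds (100 ms)

# /dev/sdX is a placeholder; substitute your actual failing drive.
cmd="smartctl -l scterc,$ds,$ds /dev/sdX"
echo "$cmd"
```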
Source for TLER: http://en.wikipedia.org/wiki/TLER