These notes discuss and demonstrate how to erase hard disk drives (HDDs) storing personal and SOHO data. They consider old-school but still prevalent magnetic drives; the story for flash drives (UFDs and SSDs) is somewhat different. They do not address data under fiduciary management, which may be subject to legal compliance mandates such as HIPAA, PIPEDA, GLBA, and SOX.
Erasing an HDD means overwriting each of the drive's sectors with meaningless data so that no user data and no filesystem data remain on the device. Erasing utilities fill the sectors with zeros, random bits, or some other bit pattern to obliterate any existing data. Typically, an entire drive is erased in preparation for donation, sale, or disposal. "Sanitizing," "shredding," and "wiping" are synonyms for erasing in this context.
These notes take the view that a single overwrite of each sector suffices for SOHO data and favor zeroing in particular to aid subsequent verification. You can easily find articles on the web claiming that well-motivated and well-financed labs can recover data beneath a single overwrite by means of highly-specialized equipment in the hands of pertinacious snoopers. Yet these claims omit evidence and tend to derive from a single theoretical paper on 1996-era HDD technology. Subsequent papers refute them. Still, if you worry about such claims and prefer multiple overwrites to ease your mind, the tools discussed will accommodate you. But if your data attract the clandestine interest and commodious purses of omnipotent adversaries, then you are way out of my league.
The GNU/Linux world offers several tools that overwrite HDDs, and painlessly at that. By and large, ATA Secure Erase (SE) via hdparm is the state-of-the-art tool for wiping internal ATA (PATA or SATA) HDDs. But SE won't work for external ATA drives attached via USB or FireWire unless the bridge happens to pass along native ATA commands. (eSATA drives look to be OK.) Older vintage drives (before about 2002 or 15 GB) don't have SE at all. For SCSI drives, SE support is optional and, ostensibly, not extensive. When SE is not available, the block-overwriting commands badblocks, shred, and scrub can do the job. And venerable dd can pitch in if you don't mind its esoteric syntax. All of these tools plod along filling sector after sector with one pattern or another. They differ primarily in what filling options they offer as well as in various details. Patience or something else to do will also come in handy.
But first, you must attach your target HDD to a host machine running GNU/Linux from another drive, so that the target HDD becomes a secondary drive. If you want to erase the sole drive already inside a desktop or laptop, simply boot from a LiveCD or LiveUSB. But perhaps the target HDD has already been removed from its parent. If the HDD is SATA and you happen to have another computer with an eSATA port, you can attach the drive externally and use SE (I think but have not tried). You can also use SE if you happen to have a handy desktop with an extra PATA or SATA connection on the motherboard. Shut down the desktop, unplug it, and attach the HDD. (For a PATA drive, be sure to adjust its jumper to make the drive device 1 (aka slave) before connecting it to the ribbon cable.) Otherwise, you can attach the HDD externally via a USB adapter or enclosure, although you must then forgo SE. For example, this adapter from C2G works for me. (No endorsement intended.) When the host system has its own drive(s), be extra careful to specify the correct path to the target HDD, like /dev/sdb, /dev/sdc, etc. For more safety, consider disconnecting or disabling storage devices other than the system and target drives.
Or show and tell, that is. These notes demonstrate the utilities with transcripts of the various commands and their output when erasing the same Fujitsu laptop HDD. This 60GB/56GiB PATA drive is connected as the second drive on the host adapter and available as device /dev/sdb. It has 117,210,240 sectors with 512 bytes per sector:
-> hdparm -I /dev/sdb
...
Configuration:
...
        LBA user addressable sectors:  117210240
        Logical/Physical Sector size:  512 bytes
        device size with M = 1024*1024:  57231 MBytes
        device size with M = 1000*1000:  60011 MBytes (60 GB)
...
Alternatively, use smartctl:
-> smartctl --info /dev/sdb
...
User Capacity:  60,011,642,880 bytes [60.0 GB]
Sector Size:    512 bytes logical/physical

-> echo "Sectors: " $(( 60011642880/512 ))
Sectors:  117210240
For convenience in the subsequent examples, let a couple of shell (Bash) variables mind these values:
-> sectors=117210240          # Number of sectors on /dev/sdb
-> bytes=$(( $sectors*512 ))  # Number of bytes on /dev/sdb
-> echo $sectors $bytes
117210240 60011642880
Lastly, for accounting clarity, note:
-> echo "scale=2; \"GB: \"; $bytes/1000^3; \"GiB: \"; $bytes/1024^3" | bc
GB: 60.01
GiB: 55.89
An ATA drive's controller has the ability to properly erase the entire disk. This feature, called ATA Secure Erase (SE), is available on SATA drives and on PATA drives (post 2001 vintage) alike. Use command hdparm in a two-step sequence to perform SE. First, lock the drive with a temporary password of your choice:
-> hdparm --security-set-pass hello /dev/sdb
security_password="hello"

/dev/sdb:
 Issuing SECURITY_SET_PASS command, password="hello", user=user, mode=high
Second, request the actual SE:
-> time hdparm --security-erase hello /dev/sdb
security_password="hello"

/dev/sdb:
 Issuing SECURITY_ERASE command, password="hello", user=user

real    40m8.899s
user    0m0.000s
sys     0m0.003s
Once SE begins, the drive is locked until erasure completes—even if the host machine is rebooted. This tenacity provides protection against exposing user data in the event of an interruption, whether of accidental or nefarious origin.
You can ask the drive for an estimate of how long SE will take:
-> hdparm -I /dev/sdb | grep ERASE
        60min for SECURITY ERASE UNIT.
Incidentally, this same request tells you if your drive offers SE at all. For example, the primary drive (/dev/sda) on my computer does not offer SE; hence:
-> hdparm -I /dev/sda | grep ERASE
-> [no reply]
Some drives offer an enhanced mode of SE, too, and hdparm will tell you so. For example, this additional test drive at /dev/sdc does support enhanced erase:
-> hdparm -I /dev/sdc | grep ERASE
        34min for SECURITY ERASE UNIT.
        34min for ENHANCED SECURITY ERASE UNIT.
Potentially, on newer drives especially, enhanced SE is more aggressive in that it ought to wipe every sector—normal, HPA, DCO, and G-list. You're likely better off, or at least no worse off, using enhanced SE if your drive cooperates. If so, simply specify switch --security-erase-enhanced instead of --security-erase to hdparm. See the discussion on caveats below to interpret the preceding hedges and qualifiers.
Secure Erase apparently requires that the drive be directly attached to the ATA adapter. It fails, for example, if the drive is connected over a generic USB-ATA bridge—unless the bridge provides SCSI-ATA Command Translation, or SAT. (Well, the hdparm man page implies this works; it's a bridge too far for my tests.) Failure in such cases can possibly freeze the drive under some BIOS programs and perhaps even brick the HDD altogether (cf. the Linux ATA wiki).
When a drive supports Secure Erase, hdparm's drive identification output lists the Security Mode feature set and includes a "Security" section:
-> hdparm -I /dev/sdb
...
Commands/features:
...
        Security Mode feature set
...
Security:
        Master password revision code = 65534
                supported
        not     enabled
        not     locked
        not     frozen
        not     expired: security count
        not     supported: enhanced erase
        60min for SECURITY ERASE UNIT.
...
Here, the Security Mode feature set is initially disabled, its normal state. To enable it for immediate use, set a security password:
-> hdparm --security-set-pass hello /dev/sdb
security_password="hello"

/dev/sdb:
 Issuing SECURITY_SET_PASS command, password="hello", user=user, mode=high
You can verify the change; note the absence of "not" before "enabled" in the report from hdparm:
-> hdparm -I /dev/sdb | perl -0777 -nE 'm/(Security:.+UNIT)/ms and say $1'
Security:
        Master password revision code = 65534
                supported
                enabled
        not     locked
        not     frozen
        not     expired: security count
        not     supported: enhanced erase
        Security level high
        60min for SECURITY ERASE UNIT
Now, using the password above, you can tell the drive to do SE:
-> time hdparm --security-erase hello /dev/sdb
security_password="hello"

/dev/sdb:
 Issuing SECURITY_ERASE command, password="hello", user=user

real    40m8.787s
user    0m0.000s
sys     0m0.001s
The password "hello" above is arbitrary. It is also temporary. Once erasure completes, this password is removed and the Secure Mode feature set returns to its disabled state:
-> hdparm -I /dev/sdb | perl -0777 -nE 'm/(Security:.+UNIT)/ms and say $1'
Security:
        Master password revision code = 65534
                supported
        not     enabled
        not     locked
        not     frozen
        not     expired: security count
        not     supported: enhanced erase
        60min for SECURITY ERASE UNIT
A bridge connecting a drive externally (other than eSATA—I think!) does not pass on hdparm's request for SE. For example, here's what happens when additional test drive /dev/sdc is attached over an external USB cable rather than connected to the PATA ribbon inside the box. The drive reports that it supports SE alright, but even setting the password fails:
-> hdparm -I /dev/sdc | perl -0777 -nE 'm/(Security:.+UNIT)/ms and say $1'
Security:
        Master password revision code = 65534
                supported
        not     enabled
        not     locked
        not     frozen
        not     expired: security count
                supported: enhanced erase
        34min for SECURITY ERASE UNIT.
        34min for ENHANCED SECURITY ERASE UNIT

-> hdparm --security-set-pass hello /dev/sdc
security_password="hello"

/dev/sdc:
 Issuing SECURITY_SET_PASS command, password="hello", user=user, mode=high
The running kernel lacks CONFIG_IDE_TASK_IOCTL support for this device.
SECURITY_SET_PASS: Invalid argument
Descriptions of Secure Erase tend to advertise its several advantages over the OS-based tools, like badblocks and shred. It's said to complete much faster and strain the host system less since the drive's controller does all the thinking and instructing. The sluggish execution of OS utilities is sometimes seen as a barrier to erasing drives, and SE's speed is thus seen to promote good sanitizing habits. (My limited examples herein, with N=2, do not show performance gains, however. Both SE and shred take about the same time to erase my middle-aged HDDs, and both lock up my graying desktop for the duration.) SE cannot be interrupted. It is even robust to power failure because it automatically and necessarily resumes on drive power-up. In contrast, any OS utility can be stopped before finishing its mission. Since SE is implemented in a drive's firmware, it is touted as more secure from malicious software attack than OS utilities are. Enhanced SE purports to erase any HPA, DCO, and reallocated sectors. Any HPA and DCO restriction must be deactivated before an OS utility can overwrite their sectors, and G-list sectors are always off-limits to OS tools.
Sounds pretty good.
Actually, there's more to the story on closer scrutiny, and using Secure Erase thereby comes with some caveats to bear in mind. SE has two possible modes, called "normal" and "enhanced." For an HDD supporting the Security Mode feature set, normal SE is required while enhanced mode is optional. When you tell hdparm to erase a drive, hdparm issues the internal ATA command SECURITY ERASE UNIT to the controller on your behalf. When you give it option --security-erase, hdparm specifies normal mode; given --security-erase-enhanced, it specifies enhanced mode. But just what this pivotal SECURITY ERASE UNIT command is supposed to do in each mode depends on the ATA version the controller supports, whether ATA-8 or earlier. And just what the command actually does further depends on its implementation. You should thus expect differences in SE that vary by the drive's vendor, model, and date of manufacture. The scoop follows.
The ATA-8 specification (Information technology - AT Attachment 8 - ATA/ATAPI Command Set (ATA8-ACS), 2008, pp. 218–219) clearly mandates what SECURITY ERASE UNIT under ATA-8 must accomplish in each mode. Still, there is leeway. Normal SE must overwrite every sector up to any DCO sector with either all zeroes or all ones, uniformly. Thus normal mode overwrites any HPA sectors. However, a vendor can implement normal mode to either ignore or wipe any DCO sectors and any reallocated sectors as it sees fit. Enhanced SE, if offered, must additionally overwrite all DCO and reallocated sectors, and it is free to use whatever patterns the vendor elects. It's perhaps useful to mention that the oft-cited NIST Special Publication 800-88, Guidelines for Media Sanitization (September 2012) refers to ATA-8 behavior of SE in its glossary entry for "Security Erase" (p. 29). Well then, so far, so good.
Next, here's how the ATA-7 specification describes SECURITY ERASE UNIT and its modes (Information Technology - AT Attachment with Packet Interface - 7, Volume 1 - Register Delivered Command Set, Logical Register Set (ATA/ATAPI-7 V1), p. 239):
When Normal Erase mode isspecified [sic], the SECURITY ERASE UNIT command shall write binary zeroes to all user data areas. The Enhanced Erase mode is optional. When Enhanced Erase mode isspecified [sic], the device shall write predetermined data patterns to all user data areas. In Enhanced Erase mode, all previously written user data shall be overwritten, including sectors that are no longer in use due to reallocation.
The descriptions in the documents for ATA-6 (2002, p. 213), ATA-5 (2000, p. 156), and ATA-4 (1998, p. 149) use the same language—"user data areas"—as the document for ATA-7, nearly verbatim. IMHO, the phrase "user data areas" is ambiguous with respect to the sectors of an HPA and perhaps even a DCO. Hence it appears imprudent to blindly rely upon SE in either mode to erase any such sectors—or, conversely, to preserve them. And again, normal SE may ignore bad sectors. Conveniently enough, my two test HDDs, rescued from retired laptops, happen to demonstrate this ambiguity in regard to erasing an HPA. A Fujitsu drive supporting ATA-6 implements an unqualified SE that erases the HPA. In contrast, an Hitachi drive supporting ATA-5 advertises both normal and enhanced SE, but both modes leave the HPA intact. (I currently lack the means or know-how to test against DCO or bad blocks. And I suppose it's also possible that hdparm does not set up enhanced erase properly, although I have absolutely no reason to actually think so, especially since it need only set a single bit.)
What is the upshot of all this? If your HDD supports ATA-8, you should be OK using normal SE to erase right through an HPA and enhanced SE to additionally erase through any DCO and G-list sectors. Of course, if you want to preserve data in HPA (or DCO) sectors, you'll need to copy their data to another medium. If your drive lacks the assurances of ATA-8's clarity, to be safe and thorough you may wish to consider the same gotchas concerning HPAs, DCOs, and bad blocks that shadow OS-based erasers. Without ATA-8 or prior testing, you cannot be sure how either mode of SE will treat HPA and DCO sectors, and you cannot assume that normal SE will erase reallocated sectors. A corollary: Since enhanced mode is optional and normal mode may ignore bad sectors, this lack of guarantee to purge bad blocks deflates a touted feature of SE in general, at least for older drives. And older drives are perhaps the likeliest candidates for erasing.
Our saga closes with a parting twist. The Tutorial on Disk Drive Data Sanitization from CMRR, the folks who evaluated SE for the NSA, describes enhanced SE in the altogether different context of self-encrypting HDDs. The controllers of such HDDs encrypt all data to be stored on the drive using encryption keys saved in the controller's non-volatile memory. The sectors see only cipher text, and the OS sees only plain text. Consequently, securely erasing just the encryption keys effectively erases the entire drive instantaneously by orphaning the encrypted data. Although your data do remain on the HDD, your privacy is secured by the strength of modern encryption, presuming that strength outlives your encrypted data. At first blush to this non-cryptologist, this approach seems good enough for banal personal data. If you have sensitive data worth the cost of professional cryptanalysis, however, your privacy may be compromised should your abandoned drive find its way to prying eyes. So perhaps it's best to use regular SE on the encrypted data even after the keys have been erased. Here's a brochure for an Hitachi HDD using encryption for enhanced SE (2007), for example. (That's all I've seen on encryption-enhanced SE; I don't know if it's taken root.)
You can use badblocks (package "e2fsprogs") to overwrite a disk with a constant or with random bits and to automatically verify the erasure. This is an off-label application of badblocks, which normally works on behalf of e2fsck to proactively prod the drive into remapping bad sectors. Despite what the name badblocks may seem to connote, this command cannot access any sectors that the disk controller has removed from service; any reallocated sectors remain out of reach.
To zero the drive with verification, for example:
-> time /sbin/badblocks -b 512 -sv -w -t 0x00 /dev/sdb
Checking for bad blocks in read-write mode
From block 0 to 117210239
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found. (0/0/0 errors)

real    84m15.356s
user    0m37.387s
sys     4m11.973s
Here, badblocks makes two passes. The first pass writes zeroes (-w -t 0x00) into each block, and the second reads each block back to verify that it consists solely of zeroes. The test pattern can be any non-negative integer up to the OS's maximum. You can also request a random pattern, and badblocks will verify that as well:
-> time /sbin/badblocks -b 512 -sv -w -t random /dev/sdb
Checking for bad blocks in read-write mode
From block 0 to 117210239
Testing with random pattern: done
Reading and comparing: done
Pass completed, 0 bad blocks found. (0/0/0 errors)

real    85m43.068s
user    0m48.835s
sys     5m9.425s
None of the other sanitizers herein offers a verified random fill.
The examples above call badblocks with switch "-b 512" to specify 512 bytes per block instead of the default 1024 bytes. This is an accounting convenience aligning the command's reported block numbers with the LBAs for disk sectors on /dev/sdb. Each sector on /dev/sdb accommodates 512 bytes of data. A file system's notion of block, in contrast, likely comprises multiple (2, 4, or 8) disk sectors. For example:
-> hdparm -I /dev/sda | grep 'Sector size'
        Logical/Physical Sector size: 512 bytes
-> blkid /dev/sda2
/dev/sda2: LABEL="boot" UUID="..." TYPE="ext4"
-> tune2fs -l /dev/sda2 | grep 'Block size'
Block size:               1024
-> blkid /dev/sda3
/dev/sda3: LABEL="home" UUID="..." TYPE="ext3"
-> tune2fs -l /dev/sda3 | grep 'Block size'
Block size:               4096
It's the sector that counts when erasing a drive, however, and the notions of a file system and its blocks do not apply. (The preceding illustration uses /dev/sda because that HDD hosts file systems, whereas the test drive /dev/sdb does not.)
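The arithmetic behind that accounting is easy to check in the shell. The sketch below (plain bash arithmetic, with the common block sizes hard-coded rather than read from any drive) relates 512-byte sectors to file-system block sizes, and confirms that with -b 512 the last block number badblocks reports is simply the sector count minus one:

```shell
# Relate 512-byte sectors to typical file-system block sizes.
sector=512
for blocksize in 1024 2048 4096; do
    echo "a $blocksize-byte block spans $(( blocksize / sector )) sectors"
done

# With "-b 512", badblocks numbers blocks exactly like LBAs, so its
# last block number is the sector count minus one:
sectors=117210240
echo "last block/LBA: $(( sectors - 1 ))"
```

The final value, 117210239, matches the "From block 0 to 117210239" line in the badblocks transcripts above.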
Use the shred command (package "coreutils") to wipe an entire HDD. By default, shred makes three passes of random overwrites but provides options to alter this default. Here's how to zero-fill a drive in a single pass, for example:
-> time shred --verbose --iterations=0 --zero /dev/sdb
shred: /dev/sdb: pass 1/1 (000000)...
...
shred: /dev/sdb: pass 1/1 (000000)...56GiB/56GiB 100%

real    41m53.334s
user    0m0.765s
sys     1m52.553s
The info page for shred suggests that a sole pass of zero-fill may backfire on a disk controller that somehow optimizes writing zero blocks in a manner that actually leaves user data exposed. The admonition particularly mentions solid-state disks (SSDs) in this regard but elaborates no further. If you wish to hedge against this risk and won't miss the uniform zeroes, you can tell shred to run a single pass of random data and to omit the zeroing pass:
-> time shred --iterations=1 /dev/sdb

real    40m38.193s
user    1m29.049s
sys     1m50.064s
A single pass of random fill takes about the same time as a single pass of zero fill. Skipping the zeroing pass nixes verification, however. You can make both passes if you don't mind waiting:
-> time shred --iterations=1 --zero /dev/sdb

real    81m13.872s
user    1m31.701s
sys     3m34.763s
Use the dd command with the /dev/zero pseudo-device to overwrite the disk with zeros:
-> time dd if=/dev/zero of=/dev/sdb bs=8b count=$(( $sectors/8 ))
14651280+0 records in
14651280+0 records out
60011642880 bytes (60 GB) copied, 2442.09 s, 24.6 MB/s

real    40m42.108s
user    0m9.271s
sys     2m51.330s
(In argument "bs=8b," the unit "b" represents one block of 512 bytes.) Verify the counts above:
-> echo -e $(( $sectors/8 )) "\n$bytes"
14651280
60011642880
To fill the disk with random data rather than zeros, you can use pseudo-device /dev/urandom instead of /dev/zero as dd's input source. Expect to wait a long time, however, apparently because /dev/urandom is not designed for this sort of high-volume output. Here's an example:
-> time dd if=/dev/urandom of=/dev/sdb bs=8b count=$(( $sectors/8 ))
14651280+0 records in
14651280+0 records out
60011642880 bytes (60 GB) copied, 24895 s, 2.4 MB/s

real    414m55.089s
user    0m14.613s
sys     412m34.537s
Note the system time above. The related device /dev/random is an all-around bad choice for filling a drive with random data because it blocks when its underlying entropy pool empties. Device /dev/urandom does not block.
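A common workaround for /dev/urandom's throughput penalty (not from these notes, so treat it as a hedged sketch) is to encrypt a bounded stream of zeros with a fast stream cipher such as AES in CTR mode and write the result to the target. The scratch file, sizes, and throwaway key below are illustrative placeholders:

```shell
# Fast pseudorandom fill: AES-CTR is a stream cipher, so output length
# equals input length, and the ciphertext of zeros is effectively random.
# (Illustrative: on a real drive, bound the input to "$bytes" and redirect
# to /dev/sdX instead of the scratch file.)
out=$(mktemp)
head -c $(( 512 * 1024 )) /dev/zero |
    openssl enc -aes-128-ctr -nosalt \
        -pass pass:"$(head -c 32 /dev/urandom | base64)" \
        2>/dev/null > "$out"
wc -c < "$out"
rm -f "$out"
```

Because CTR mode adds no padding, the byte count written matches the byte count read, so the usual count arithmetic still applies.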
To have dd remit an interim progress report, send its process a "USR1" (or possibly "INFO") signal using kill, perhaps from a different terminal:
-> ps a | grep dd
 1625 pts/1    S+     0:30 dd if=/dev/sdb bs=4b
 1728 pts/2    S+     0:00 grep --color=auto dd
-> kill -s USR1 1625
In response to this signal, dd reports something like the following in its terminal:
3442727+0 records in
3442726+0 records out
7050702848 bytes (7.1 GB) copied, 984.825 s, 7.2 MB/s
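Newer GNU dd (coreutils 8.24 and later) can also report progress on its own via status=progress, sparing the signal dance. The sketch below exercises the option on a scratch file standing in for the target device:

```shell
# GNU dd (coreutils >= 8.24) emits a running progress line on stderr
# when given status=progress; a scratch file stands in for /dev/sdX here.
out=$(mktemp)
dd if=/dev/zero of="$out" bs=512 count=1024 status=progress 2>&1 | tail -n 1
rm -f "$out"
```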
You can use scrub (package "scrub") for a one-pass zero-fill pattern, which somehow takes significantly longer than shred for the test drive:
-> time scrub --pattern fillzero /dev/sdb
scrub: using Quick Fill with 0x00 patterns
scrub: please verify that device size below is correct!
scrub: scrubbing /dev/sdb 60011642880 bytes (~55GB)
scrub: 0x00    |................................................|

real    72m35.646s
user    0m0.281s
sys     2m40.519s
If you like, you can fill with ones (0xff) by specifying fillff instead of fillzero for the bit pattern. After wiping a drive, scrub signs its work in the first thirteen bytes:
-> hexdump -C -n 13 /dev/sdb
00000000  01 02 03 53 43 52 55 42  42 45 44 21 00           |...SCRUBBED!.|
0000000d
If given a signed drive, scrub will simply report that it has already processed the drive. This is a potentially handy feature when a stack of drives is being processed (in a secure environment). Option --no-signature instructs scrub to omit its signature, and option --force instructs scrub to overwrite a signed drive.
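You can reproduce and inspect that signature by hand. This sketch writes the same thirteen bytes into a scratch file and dumps them, mimicking what scrub leaves at the start of a wiped drive (the file is a stand-in; the bytes are taken from the hexdump above):

```shell
# Recreate scrub's 13-byte completion signature in a scratch file:
# 0x01 0x02 0x03, the ASCII text "SCRUBBED!", then a trailing 0x00.
sig=$(mktemp)
printf '\001\002\003SCRUBBED!\000' > "$sig"
hexdump -C "$sig"
rm -f "$sig"
```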
For a random fill, give random for the pattern and expect to wait longer:
-> time scrub --force --pattern random /dev/sdb
scrub: using One Random Pass patterns
scrub: please verify that device size below is correct!
scrub: scrubbing /dev/sdb 60011642880 bytes (~55GB)
scrub: random  |................................................|

real    91m53.569s
user    13m2.285s
sys     2m56.892s
If you wish or if you must, you can also have scrub use any of several canned sequences of patterns recommended by NNSA, DoD, BSI, Gutmann, Schneier, and others in various contexts. See the man page for details and references. The default sequence is NNSA NAP-14.x; you get to wait a few hours.
-> time scrub --force /dev/sdb
scrub: using NNSA NAP-14.1-C patterns
scrub: please verify that device size below is correct!
scrub: scrubbing /dev/sdb 60011642880 bytes (~55GB)
scrub: random  |................................................|
scrub: random  |................................................|
scrub: 0x00    |................................................|
scrub: verify  |................................................|

real    298m24.230s
user    27m7.140s
sys     10m57.096s
Since a single overwrite suffices for modern media, these multiple passes merely increase processing time without enhancing security.
Whatever eraser you choose, be on the lookout for potential gotchas associated with a Host Protected Area (HPA), a Device Configuration Overlay (DCO), and reallocated sectors.
The Host Protected Area feature set introduces a potential gotcha, but it is easy to see and overcome.
A system vendor may garner all sectors beyond a cutoff address to house the computer's factory image and diagnostic utilities. These reserved sectors constitute a Host Protected Area. To the eyes of the OS, the HPA is invisible and the drive ends at the last sector before the HPA. Thus all OS-based erasers stop at the HPA and leave its data undisturbed. This behavior is a feature when the HPA holds only non-private system data to retain. It's a gotcha should the HPA somehow hold user data. For Security Erase, the story is muddled. Normal mode erases HPA sectors under ATA-8 specifications but maybe not otherwise. It's perhaps prudent to assume that data to erase may be left and that data to retain may be lost.
Use hdparm to determine if your drive has an HPA in effect. For this example, the test drive has acquired an HPA of 30G starting at sector 58,605,120:
-> hdparm -N /dev/sdb

/dev/sdb:
 max sectors   = 58605120/117210240, HPA is enabled
-> hdparm -I /dev/sdb | perl -0777 -nE 'm/(\tLBA.+?)cache/ms and say $1'
        LBA user addressable sectors:  58605120
        Logical/Physical Sector size:  512 bytes
        device size with M = 1024*1024:  28615 MBytes
        device size with M = 1000*1000:  30005 MBytes (30 GB)
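Scripting that check is straightforward. The sketch below parses the "max sectors = X/Y" line from hdparm -N and flags an HPA whenever the visible count falls short of the native count. A canned sample string stands in for the live command, since that needs the drive attached; substitute report=$(hdparm -N /dev/sdX) on a real run:

```shell
# Detect an HPA from hdparm -N style output (sample text stands in for
# the live command; substitute: report=$(hdparm -N /dev/sdX)).
report="max sectors   = 58605120/117210240, HPA is enabled"
visible=${report#*= }; visible=${visible%%/*}   # sectors the OS can see
native=${report#*/};   native=${native%%,*}     # drive's native maximum
if [ "$visible" -lt "$native" ]; then
    echo "HPA active: $(( native - visible )) sectors hidden"
else
    echo "no HPA"
fi
```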
To erase the HPA sectors, first remove the HPA restriction. Use hdparm to reset the maximum visible sectors to the full 117,210,240 sectors:
-> hdparm -N p117210240 /dev/sdb

/dev/sdb:
 setting max visible sectors to 117210240 (permanent)
 max sectors   = 117210240/117210240, HPA is disabled
The initial "p" tells hdparm to make the new setting permanent across reboots rather than temporary. You must reboot to activate the setting. Then you can use your favorite eraser as usual. With shred, for example:
-> shred --verbose --iterations=0 --zero /dev/sdb
...
shred: /dev/sdb: pass 1/1 (000000)...56GiB/56GiB 100%
You can restore the HPA if you wish:
-> hdparm --yes-i-know-what-i-am-doing -N p58605120 /dev/sdb

/dev/sdb:
 setting max visible sectors to 58605120 (permanent)
 max sectors   = 58605120/117210240, HPA is enabled
The HPA activates on reboot, and the OS thereafter sees only the initial sectors:
-> hdparm -I /dev/sdb | perl -0777 -nE 'm/(\tLBA.+?)cache/ms and say $1'
        LBA user addressable sectors:  58605120
        Logical/Physical Sector size:  512 bytes
        device size with M = 1024*1024:  28615 MBytes
        device size with M = 1000*1000:  30005 MBytes (30 GB)
The Device Configuration Overlay (DCO) feature set introduces a small gotcha to keep in mind. It's easily overcome, however.
On an HDD supporting Device Configuration Overlay, the drive's operational maximum LBA can be set to a value below the drive's true maximum LBA. Such a setting truncates the in-field capacity of the drive. The sectors after the DCO limit cannot be accessed by any means while the overlay is in effect. DCO is independent of HPA, and its no-exceptions limit applies to an HPA as well. Why reduce the drive's capacity? The usual explanatory scenario for DCO has a system vendor reducing different capacities of disparate HDD lots to the greatest capacity that a particular host system's specifications can accommodate.
You can use hdparm to determine if an active DCO truncates the capacity of your HDD:
-> hdparm --dco-identify /dev/sdb

/dev/sdb:
DCO Revision: 0x0001
The following features can be selectively disabled via DCO:
        Transfer modes: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5
        Real max sectors: 117210240
        ATA command/feature sets: SMART self_test error_log security AAM HPA
A DCO can override some other configuration parameters, and this response reports the possibilities for the test drive. Only the "Real max sectors" value matters in the context of erasing drives. It shows the drive's unabridged maximum. If this value exceeds the maximum "native" value that hdparm reports with switch -N, then an overlay is active. The test drive does not have an active DCO:
-> hdparm -N /dev/sdb

/dev/sdb:
 max sectors   = 117210240/117210240, HPA is disabled
The truncated capacity of DCO presents a risk only if the excluded sectors happen to hold sensitive data written before the overlay took effect. As a side use, DCO offers a weak form of data protection from the prying eyes of uninitiated users that is similar to an HPA. None of the OS-based erasers can reach such data. Enhanced SE is required to overwrite any sectors set aside by DCO, but normal SE is not (cf. SE caveats). And while it's perhaps unlikely that these sectors hold any data at all, let alone your private data, you can easily remove the overlay:
-> hdparm --yes-i-know-what-i-am-doing --dco-restore /dev/sdb
... [TBD]
Besides reinstating full capacity, switch --dco-restore reinstates the factory configurations for all of the drive's commands, modes, and feature sets subject to DCO. Be sure you can live happily with that.
Here is a pseudo-code representation of the relationships between the LBA values reported for the drive (cf. p. 40):

LBA1 from IDENTIFY DEVICE  ≤  LBA2 from READ NATIVE MAX  ≤  LBA3 from DEVICE CONFIGURATION IDENTIFY

The drive has an HPA if the first inequality is strict (LBA1 < LBA2), and it has a reduced capacity if the second inequality is strict (LBA2 < LBA3).
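That decision logic fits in a few lines of shell. The three sector counts below are sample literals; on a live drive, the first two come from hdparm -N and the third from the "Real max sectors" line of hdparm --dco-identify:

```shell
# Classify a drive from its three sector counts (sample values; on a real
# drive, lba1 and lba2 come from "hdparm -N", lba3 from "hdparm --dco-identify").
lba1=58605120    # IDENTIFY DEVICE: sectors visible to the OS
lba2=117210240   # READ NATIVE MAX: native maximum
lba3=117210240   # DEVICE CONFIGURATION IDENTIFY: real maximum
if [ "$lba1" -lt "$lba2" ]; then echo "HPA is active"; fi
if [ "$lba2" -lt "$lba3" ]; then echo "DCO truncation is active"; fi
if [ "$lba1" -eq "$lba2" ] && [ "$lba2" -eq "$lba3" ]; then
    echo "full capacity visible"
fi
```

With the sample values, the first inequality is strict and the second is not, so the sketch reports an HPA but no DCO truncation.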
Postscript: I've not yet found a way to set a DCO, so I cannot experiment on my test drives even though they support DCO. One test would fill the end of a drive with non-zero data before hiding it in a DCO. Then run SE to see if it wipes beyond DEVICE CONFIGURATION SET.
Reallocation of bad physical sectors introduces a potentially pesky gotcha.
HDDs maintain a pool of spare sectors beyond those necessary to provide the stated capacity of the drive. Should the HDD encounter a faulty physical sector during a write operation, it remaps the associated LBA from that sector to a spare sector. The drive then excludes the bad sector from further use. The OS carries on blissfully ignorant of the reallocation since its file system and the drive's controller communicate in terms of the LBA abstraction. Problem solved. This silent reallocation of bad sectors brings a measure of robustness to the HDD.
The opaque nature of remapping bad sectors also brings a security risk when erasing data valuable enough for determined recovery attempts in a properly-equipped facility. Any remapped sector potentially harbors private data from the last successful write into it. There's no way to know what gets left behind because an orphaned sector's LBA and contents stay invisible to the OS. In particular, OS-based erasers cannot overwrite the reallocated sectors. Normal SE affords no sure succor here, either, because the ATA specification does not require normal mode to purge any reallocated sectors—maybe it does anyway, maybe not, you can't assume.
There is some hope. Enhanced SE is required by its ATA specification to erase bad sectors on any drive that offers it; still, enhanced SE itself is optional. While you may as well use this mode if your drive provides it, you cannot verify its work, however, because you cannot get at those orphans. You must trust that the vendor's proprietary implementation does the deed as advertised.
It's difficult to assess the practical risk of exposing fossilized data entombed in reallocated sectors. What is the likelihood that a reallocated sector holds sensitive data? What is the likelihood of extraordinary methods that really can probe abandoned sectors? What is the likelihood that an adversary tries and succeeds to uncover such secrets? And what is the cost of that discovery to you? No answers follow herein. Yet your drive's SMART feature set may give you a pass. You can use smartctl (package "smartmontools") to query your drive about reallocated sectors; if no orphans, then no worries. For example:
-> smartctl -a /dev/sdb | grep -P "Device is|ID|Reallocated|Pending"
Device is:        In smartctl database [for details use: -P show]
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   024    Pre-fail  Always       -       0 (2000 0)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0 (0 4348)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
The first line confirms that the Smartmontools database knows about this Fujitsu drive. The following lines together indicate that the drive is free of faulty sectors (note the zeroes under RAW_VALUE). Hence, this HDD does not pose a risk of leaking occult data after erasure. The details of SMART attributes are not standardized and thus may vary with drive vendor.
It's not a bad idea to verify erasure. Verification means reading every sector of a drive and confirming that the retrieved contents match the expected contents. For example, verifying a zero-fill eraser means confirming that every byte of every sector is indeed zero (0x00). To verify a fixed fill pattern you can use badblocks, hexdump, od, and perhaps scrub. For a random pattern, you'll need badblocks to write and verify in a single invocation.
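For a zero fill, cmp offers yet another quick check: compare the device byte-for-byte against /dev/zero, limited to the device's own length. A sketch on a scratch image file (disk.img stands in for the drive; both names are illustrative):

```shell
#!/bin/sh
# Create a 1 MiB scratch "drive" full of zeros (stand-in for /dev/sdX).
dd if=/dev/zero of=disk.img bs=1024 count=1024 2>/dev/null

# Compare the image against the zero stream, limited to the image's
# size; cmp exits 0 only if every byte matches (i.e., is zero).
size=$(stat -c %s disk.img)
if cmp -n "$size" disk.img /dev/zero; then
    echo "clean: all $size bytes are zero"
else
    echo "NOT clean"
fi
```

On a real drive, substitute the block device for disk.img; `blockdev --getsize64 /dev/sdX` supplies the byte count.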
An easy approach is to use badblocks to erase the drive in the first place. badblocks automatically reads back each overwritten sector and confirms that what it got is what it wrote.
You can also use badblocks to verify the work of another eraser that fills sectors with a fixed pattern. For example, here badblocks verifies the pattern 0xff previously written by another utility:
-> time /sbin/badblocks -b 512 -v -t 0xff /dev/sdb
Checking blocks 0 to 117210239
Checking for bad blocks in read-only mode
Testing with pattern 0xff: done
Pass completed, 0 bad blocks found. (0/0/0 errors)

real    42m10.717s
user    0m32.293s
sys     2m17.490s
badblocks cannot verify a random fill written by another application, nor can it verify its own random work from a previous invocation. We are not talking your average teenager's "random" here, after all. If you require a verified random fill pattern, then you'll need to use badblocks to both erase and verify.
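A workaround outside badblocks is worth noting: a keyed pseudorandom stream is reproducible, so you can write it as the fill and later regenerate the identical stream to compare against the drive. A sketch using openssl's AES-CTR keystream (the passphrase, sizes, and file names are illustrative, and the stream stands in for the drive):

```shell
#!/bin/sh
# Generate a reproducible pseudorandom stream: AES-256-CTR applied to
# zeros, keyed from a fixed passphrase. -nosalt makes the key
# derivation, and hence the stream, repeatable across runs. 64 KiB
# here; a real drive would use its full byte count.
genstream() {
    head -c 65536 /dev/zero |
        openssl enc -aes-256-ctr -pass pass:erase-demo -nosalt 2>/dev/null
}

genstream > fill.bin      # the "fill" pass (stand-in for the drive)
genstream > check.bin     # regenerate the same stream for verification

cmp fill.bin check.bin && echo "random fill verified"
rm -f fill.bin check.bin
```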
When the fill pattern is a constant, like 0x00 or 0xFF, then hexdump provides another easy means to check that the drive is clean. It displays the value of every byte to stdout but does so compactly when it sees runs of a repeated value. Consequently, it returns a concise confirmation of a properly erased drive. Here's an example for a zeroed drive:
-> time hexdump -C /dev/sdb
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
df8f90000

real    40m21.654s
user    9m16.822s
sys     1m47.435s
Here, hexdump confirms that /dev/sdb contains exactly 60,011,642,880 (0xdf8f90000) bytes and that each byte is zero. Specifically, the first output line shows that hexdump starts reading block device /dev/sdb at offset zero (0x00000000), the first byte of the first sector. This line next reports that all of the first 16 bytes are zeroes (0x00). The appended periods annotate that the corresponding hex value does not represent a printable character, because 0x00 is the non-printable null character. The asterisk on the second line compactly indicates that hexdump's report for the rest of the drive simply repeats the first line. Hence each and every byte in /dev/sdb is zero. The third line gives the byte count; let bc convert this from hexadecimal to decimal:
-> echo "ibase=16; DF8F90000" | bc
60011642880
As it turns out, hexdump offers formatting options that can be used to condense and clarify the previous report:
-> hexdump -e ' "0x%02x " "%_Ad bytes examined\n" ' /dev/sdb
0x00 *
60011642880 bytes examined
The first output line indicates that all of the bytes are zero, and the second line reports the total number of bytes examined.
You can couple dd with hexdump to get a record count and timer all in one, like so:
-> dd if=/dev/sdb bs=512 | hexdump -e '"hexdump: " "0x%02x "'
hexdump: 0x00 *
117210240+0 records in
117210240+0 records out
60011642880 bytes (60 GB) copied, 2918.62 s, 20.6 MB/s
Again, "0x00 *" verifies that all bytes are zero, and the record and byte counts do the bookkeeping. To check that these counts meet expectations and to see the elapsed time in minutes:
-> echo "$sectors sectors   $bytes bytes"
117210240 sectors   60011642880 bytes
-> echo "scale=1; 2919/60" | bc -l
48.6
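The two counts are consistent when sectors times 512 equals the byte count; shell arithmetic confirms it:

```shell
#!/bin/sh
# 117210240 sectors of 512 bytes each should equal the reported byte count.
echo $(( 117210240 * 512 ))    # -> 60011642880
```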
Alternatively, you can use od in place of hexdump to verify a constant pattern; they are similar. For example:
-> time od --address-radix x --format=x1z /dev/sdb
000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  >................<
*
df8f90000

real    41m49.341s
user    7m4.698s
sys     1m26.039s
-> echo "ibase=16; DF8F90000" | bc
60011642880
Or joining forces with dd (and abbreviating od's switches):
-> dd if=/dev/sdb bs=512 | od -A x -t x1
000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
df8f90000
117210240+0 records in
117210240+0 records out
60011642880 bytes (60 GB) copied, 2435.77 s, 24.6 MB/s
-> echo "scale=1; 2436/60" | bc -l
40.6
Several of scrub's multiple-phase patterns end with a "verify" phase, which presumably verifies the final overwrite. These patterns make three to five passes prior to the verification, however. See the man page for pattern names. By the way, the penultimate phase for each writes a constant, and hence scrub does not verify its random fill. (Use badblocks for that.)