Why it isn’t “useless” to defragment SSDs

Has anyone else searched for information on how to defragment something, only to run across a snarky remark that defragmenting an SSD is pointless?

Well, I have. One time too often, it turns out.

So I am sitting down to type this response: defragmentation can be useful, and under certain circumstances even necessary. Now, I’ll concede that the original purpose of defragmentation — making sure that a spinning hard drive could access contiguous clusters on a cylinder — is not relevant to SSDs. But that doesn’t make it pointless, useless, or otherwise an exercise in futility per se.

I have at least one use case where it is absolutely necessary to defragment a partition to ensure individual files are contiguous: the iODD and Zalman [1] drive enclosures that let one emulate optical disc drives from ISO files in a host-agnostic fashion [2]. The reason is simple: the enclosure’s firmware has to parse the MFT — I exclusively use NTFS-formatted drives, since the alternative is FAT32 with its 4 GiB file-size limit — and requires that each file be contiguous.
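To make “contiguous” concrete: on Linux, the FIEMAP ioctl reports how many extents (fragments) a file occupies, and a count of 1 means the file is stored in one piece — exactly what such firmware needs. A minimal sketch, assuming a filesystem driver that supports FIEMAP (some, e.g. tmpfs or older NTFS drivers, do not); the helper name `count_extents` is mine:

```python
import fcntl
import os
import struct

FS_IOC_FIEMAP = 0xC020660B  # _IOWR('f', 11, struct fiemap); header is 32 bytes
FIEMAP_FLAG_SYNC = 0x0001   # flush pending writes before mapping

def count_extents(path):
    """Return the number of extents (fragments) a file occupies,
    or None if the filesystem does not support the FIEMAP ioctl."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # struct fiemap header: fm_start, fm_length, fm_flags,
        # fm_mapped_extents, fm_extent_count, fm_reserved.
        # fm_extent_count = 0 asks the kernel only to count extents.
        buf = bytearray(struct.pack("=QQIIII",
                                    0, 0xFFFFFFFFFFFFFFFF,
                                    FIEMAP_FLAG_SYNC, 0, 0, 0))
        try:
            fcntl.ioctl(fd, FS_IOC_FIEMAP, buf)
        except OSError:
            return None  # filesystem has no FIEMAP support
        return struct.unpack("=QQIIII", bytes(buf))[3]  # fm_mapped_extents
    finally:
        os.close(fd)

if __name__ == "__main__":
    import sys
    target = sys.argv[1] if len(sys.argv) > 1 else __file__
    print(count_extents(target))
```

A fragment count above 1 is what would trip up the enclosure; the `filefrag` tool from e2fsprogs reports the same information.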

That’s not too much to ask, right? Such firmware is probably subject to plenty of constraints, and while it would be brilliant if it could cope with fragmented files whose fragment count is low enough (say, below ten), that isn’t currently the case.

And lo and behold, the above also happens to be a valid use case for NTFS on Linux, and for wanting to defragment NTFS partitions from Linux. Brilliant.
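Lacking a proper NTFS defragmenter on Linux, one pragmatic workaround is to rewrite the file: copying it forces the driver to allocate fresh space, which on a drive with a large contiguous free region often (though not always) yields a single extent. A sketch, assuming enough free space for a second copy; the helper name `rewrite_file` is mine, and the result should be verified afterwards (e.g. with `filefrag`):

```python
import os
import shutil

def rewrite_file(path):
    """Defragment-by-copy: write a fresh copy of the file, then atomically
    replace the original. The new allocation is often less fragmented,
    but this is not guaranteed; check the extent count afterwards."""
    tmp = path + ".defrag-tmp"
    shutil.copyfile(path, tmp)  # contents are duplicated byte for byte
    os.replace(tmp, path)       # atomic rename over the original
```

On a fairly empty, freshly formatted NTFS partition this tends to be enough to keep ISO images contiguous for the enclosure.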

// Oliver

PS: happy new year.

  1. also technically iODD, but rebranded as Zalman
  2. i.e. the host needn’t run any code, as was the case with the ISOstick or is the case with certain other solutions …
This entry was posted in /dev/null, EN, Linux, Rant.
