2009 Is the Year for ZFS

ZFS will come of age in 2009.

In 2008, I had to explain what ZFS was and why it’s different from the existing volume/filesystem model. By the end of 2009, IT pros will all be aware of it and what it does, and will likely have at least a little of it in their production infrastructure. Sun has already started that ball rolling.

Heck, with full-blown ZFS support likely to be in OS X 10.6, “Snow Leopard”, it’ll even make inroads into the home market. Of course, if Apple announce a ZFS-based upgraded Time Capsule/Home Server at Macworld Expo tomorrow, that’ll happen sooner.

From a capacity perspective, demand for storage has never been higher. Drives of 2TB or greater will be the de facto standard capacity by the end of 2009 (compared with 1TB today), driven by the growth of all types of media: photographs, personal video, and the increasing availability of internet-distributed hi-def content, coupled with the pack-rat nature of most of us (me included). It’ll also push the more mainstream storage user towards the 10-12TB Unrecoverable Read Error issue, a.k.a. the death of RAID5.
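For anyone who hasn’t seen the arithmetic behind that claim, here’s a back-of-the-envelope sketch. It assumes the commonly quoted consumer SATA spec of one unrecoverable read error per 10^14 bits read; the function name and the array sizes are my own illustration, not anything from a vendor datasheet.

```python
import math

# Commonly quoted URE spec for consumer SATA drives:
# one unrecoverable read error per 1e14 bits read.
URE_RATE = 1e-14

def p_rebuild_failure(array_tb: float) -> float:
    """Probability of hitting at least one URE while reading an entire
    array of `array_tb` terabytes, e.g. during a RAID5 rebuild."""
    bits_read = array_tb * 1e12 * 8  # TB -> bytes -> bits
    # P(>= 1 error) = 1 - (1 - p)^n, computed stably via log1p/expm1
    return -math.expm1(bits_read * math.log1p(-URE_RATE))

for tb in (1, 6, 12):
    print(f"{tb:>2} TB rebuild: ~{p_rebuild_failure(tb):.0%} chance of a URE")
```

At 1TB the odds of a URE during a full-array read are under 10%; at 12TB they’re better than even. And under classic RAID5, a single URE during a rebuild means the rebuild fails and data is lost.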

To deal with increasing capacity against bit-error rates that have stayed essentially flat, drive manufacturers keep making their drives smarter to handle errors and attempt to minimize data loss. This is the wrong approach, but it’s unavoidable, as otherwise they commoditize themselves further. Drives should be stupid and let something further up the stack manage this. That something is ZFS.
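To make that concrete, here’s a minimal toy sketch of the end-to-end checksumming idea — my own illustration, not actual ZFS code. The filesystem keeps a checksum alongside each block pointer, verifies it on every read, and repairs a bad copy from a mirror. (ZFS itself defaults to a fletcher checksum, with SHA-256 as an option; the sketch just uses SHA-256.)

```python
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MirroredStore:
    """Two dumb 'drives' (plain byte stores); all the integrity
    checking happens in this filesystem-like layer above them."""

    def __init__(self):
        self.drives = [{}, {}]

    def write(self, addr: int, data: bytes) -> bytes:
        for drive in self.drives:
            drive[addr] = data
        # The caller keeps this checksum in the block *pointer*,
        # so a corrupt block can never vouch for its own integrity.
        return checksum(data)

    def read(self, addr: int, expected: bytes) -> bytes:
        for drive in self.drives:
            data = drive[addr]
            if checksum(data) == expected:
                # Self-heal: rewrite any mirror copy that went bad.
                for other in self.drives:
                    if checksum(other[addr]) != expected:
                        other[addr] = data
                return data
        raise IOError(f"uncorrectable corruption at block {addr}")

store = MirroredStore()
ptr = store.write(0, b"important data")
store.drives[0][0] = b"bit-rotted junk"  # silent corruption on drive 0
print(store.read(0, ptr))                # detected, healed from drive 1
```

The key point is that the checksum lives with the parent block pointer rather than next to the data, so silent corruption, misdirected writes and bit rot all get caught on read; the drives themselves just have to store bytes.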

Of course, this mostly applies to cheaper SATA drives. The more expensive UltraSCSI and SAS drives command a premium for performance and reliability. Move reliability into the filesystem and you’re just paying a premium for performance. Obviously, some need the performance, which is why these drives and ancillary equipment and technologies like Fibre Channel will stay around, but I think it’s worth asking whether you really need that CLARiiON or Symmetrix.