In a nutshell?
11/4/2005 6:53:02 AM
this pretty much explains it: http://www.sun.com/2004-0914/feature/

here's the cliffs notes. no, i don't know anything about it - just read up on it and summarized what i found interesting.

* fat32 was 32-bit; this is the first 128-bit file system
* no partitions to manage: "With traditional volumes, storage is fragmented and stranded. With ZFS' common storage pool, there are no partitions to manage. The combined I/O bandwidth of all of the devices in a storage pool is always available to each file system."
* "Copy-on-write design makes most disk writes sequential" - for saving data in crashes. "All operations are also copy-on-write. Live data is never overwritten. ZFS writes data to a new block before changing the data pointers and committing the write."
* better administration - commands to create volumes and such take seconds rather than hours to execute because it's all virtual
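if the copy-on-write part is confusing, here's a toy python sketch of the pattern that quote describes - this is not ZFS code, just the general idea of writing the new block first and flipping the pointer last, so a crash mid-write leaves the old data intact:

# toy copy-on-write store: blocks are never modified in place.
# new data goes to a fresh block, then the pointer is switched over.
class CowStore:
    def __init__(self):
        self.blocks = {}       # block_id -> data
        self.pointers = {}     # filename -> block_id
        self.next_id = 0

    def write(self, name, data):
        new_id = self.next_id
        self.next_id += 1
        self.blocks[new_id] = data       # 1. write data to a brand-new block
        old_id = self.pointers.get(name)
        self.pointers[name] = new_id     # 2. flip the pointer (the "commit")
        if old_id is not None:
            del self.blocks[old_id]      # 3. only now is the old copy freed

    def read(self, name):
        return self.blocks[self.pointers[name]]

store = CowStore()
store.write("project.dwg", b"version 1")
store.write("project.dwg", b"version 2")  # version 1 stays intact until the pointer flip
print(store.read("project.dwg"))          # b'version 2'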
11/4/2005 7:12:32 AM
Maybe I should have explained myself better. I read over that stuff. I am interested more in real-world uses, or some good examples of how this makes life easier for someone - maybe there are some sysadmins here? Also, will anything this apparently slick be incorporated into Windows?
11/4/2005 7:21:38 AM
Wouldn't that mean if you're saving a very large file... say a 2 gig AutoCAD project... you would need 2 gig free in the pool? Don't get me wrong, just pointing out a possible problem in some situations.
11/4/2005 10:09:04 AM
11/4/2005 11:02:33 AM
^^ not sure what you're getting at, because on a normal windows system you'd need 2 GB free on the specific partition you were saving it to, and on this, if it was all pooled, you'd just need 2 GB free total. are you trying to point out a problem with ZFS or with NTFS?
11/4/2005 1:07:33 PM
i think he means the situation where you already had a 2 GB file on the filesystem and wanted to make a change to the whole thing. Regardless, anywhere i've heard of cow being implemented, it's default but not required, so if you didn't have the free space to copy it on the write, it would actually only do as much as you had free space for. besides that, it'll probably be doing it incrementally (a meg here, unlink the old one, another meg there, unlink that old one... etc)

this is speculation, since i don't know anything about ZFS, but i do remember a little bit about how cow works.

----

as far as the original post goes, you might do well to research filesystems in general and not focus on an individual one until you know the basics of what features filesystems can offer.

[Edited on November 4, 2005 at 1:46 PM. Reason : ]
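to illustrate what i mean by "incrementally" (again, pure speculation on my part, not how ZFS necessarily does it), something like this in python:

# speculative sketch of incremental copy-on-write on a big file:
# only one chunk's worth of extra space is needed at any moment,
# because each old chunk is unlinked as soon as its replacement is committed.
def cow_rewrite(chunks, transform):
    for i in range(len(chunks)):
        new_chunk = transform(chunks[i])  # write the new copy of this chunk...
        chunks[i] = new_chunk             # ...swap the pointer to it
        # the old chunk is now unreferenced and its space can be reclaimed
    return chunks

file_chunks = [b"a" * 1024, b"b" * 1024, b"c" * 1024]  # stand-in for a big file
cow_rewrite(file_chunks, bytes.upper)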
11/4/2005 1:42:39 PM
wow, you could cut Sun's market speak with a knife

mixed feelings wrt it myself

dynamic striping, depending on the implementation, could yield pretty nice performance

copy-on-write also has pretty serious performance implications... "makes most disk writes sequential" ... except small file writes, where 50% of the seeks will be non-contiguous writes to update the old pointers... the checksum crap sounds like a waste of disk space... either there is enough data to reconstruct, and they are using a lot of disk space for this... or there isn't enough data to reconstruct, and I hope for their sake it's optional, because knowing I lost data and losing the data silently are about the same to me

but basically, with bullshit that dense, only real awesome benchmarking is going to shed any light on it
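to be clear about the detect-vs-reconstruct distinction i'm making (rough python sketch, names made up, nothing to do with how ZFS actually lays this out): a checksum by itself only tells you the block went bad; you need a second copy or parity somewhere else to actually get the data back.

import hashlib

# a per-block checksum catches silent corruption, but recovery
# still needs a redundant copy (mirror/parity) to read from.
def store_block(data):
    return {"data": data, "checksum": hashlib.sha256(data).hexdigest()}

def read_block(block, mirror=None):
    if hashlib.sha256(block["data"]).hexdigest() == block["checksum"]:
        return block["data"]                  # checksum ok
    if mirror is not None:                    # corruption detected...
        return read_block(mirror)             # ...reconstruct from the mirror
    raise IOError("checksum mismatch and no redundant copy to rebuild from")

good = store_block(b"important bits")
bad = {"data": b"imp0rtant bits", "checksum": good["checksum"]}  # simulate bit rot
print(read_block(bad, mirror=good))   # recovers from the mirror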
11/4/2005 2:10:44 PM