


You may recall I installed one of the beta versions and gave my impressions on IRC a while back. I am running 10.8.5 on a 2008 Mac Pro with 32GB of RAM. I uninstalled ZEVO and installed OpenZFS on OS X, and here are my observations:

1) I didn't copy my data; I exported the ZFS pool, uninstalled the ZEVO software, rebooted, and installed the OpenZFS on OS X DMG file. The pool imported fine and reports no errors in normal day-to-day running.

2) My iTunes library of mp3 files is acting very funny. Most of the older mp3s can't be found by iTunes. It's as if iTunes has 'reset' the location of its mp3 library because it went missing during a reboot or something. As I have nearly 30,000 mp3s, it is a bit of a pain. This is the biggest issue for me: I am a flac person, but I keep an mp3 copy for my car and out-of-the-house roaming.

3) ZFS scrubbing runs at a fraction (5%) of the speed of a ZEVO scrub, too slowly to actually finish my ~13TB pool. ZEVO will scrub it in about 30 hours; OpenZFS on OS X was estimating 225 hours. I let it run for about 8 hours, the speed didn't improve, and I cancelled the scrub.

4) I have a second, identical 8x3TB raidz2 backup pool (called backup, naturally) that I connect once a week to rsync the data from the primary pool to the backup pool. Things went haywire with the Finder while the backup pool was imported (same filesystem names, different pool name); it was almost as if the Finder were adding and deleting mounts in parallel. My Finder now only has the 'primary' mount point displayed.

5) My L2ARC SSD failed to be used: the cache wasn't accessed and didn't grow like it does under ZEVO. Perhaps as a home user I don't really need an SSD L2ARC cache, but it was nice to have, and it did speed up ZEVO's disk accesses to my large media collections (flac and MKV).

6) The most important point: although L2ARC is currently not working for me and I can't do scrubs, it hasn't blue-screened on me and is very fast and stable in day-to-day operations. Dare I say, it is a keeper.

I just looked into this, since what you say and what Apple's documentation says are two different things. Apple's documentation says that F_FULLFSYNC has the drive flush its cache to disk, which sounds like the distinction between Linux fsync() and Linux syncfs(). What you say is that F_FULLFSYNC is the same as Linux fsync(), and your performance numbers back that up. Unfortunately, you would only see a difference between Linux fsync() and Linux syncfs() if other files were being written asynchronously at the same time as the files subject to fsync()/syncfs(): fsync() only touches the chosen files, while syncfs() touches both. If you did not have heavy background file writes, and F_FULLFSYNC really is equivalent to syncfs(), you would not be able to tell the difference in your tests.
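To make that scope difference concrete, here is a minimal sketch of the Linux side (my illustration, not code from this thread; the file names are invented). It dirties two files on the same filesystem and then syncs only the first, once with fsync() and once with syncfs():

    /* Minimal sketch: fsync() vs syncfs() scope on Linux. */
    #define _GNU_SOURCE           /* syncfs() is Linux-specific */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int a = open("target.dat",     O_CREAT | O_WRONLY | O_TRUNC, 0644);
        int b = open("background.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (a < 0 || b < 0) { perror("open"); return 1; }

        const char buf[] = "some data\n";
        (void)write(a, buf, strlen(buf));
        (void)write(b, buf, strlen(buf));

        /* fsync() makes only target.dat durable; background.dat's dirty
         * pages may still sit in the page cache afterwards. */
        if (fsync(a) != 0) perror("fsync");

        /* syncfs() flushes every dirty file on the filesystem holding
         * target.dat, so background.dat gets written out as well. */
        if (syncfs(a) != 0) perror("syncfs");

        close(a);
        close(b);
        return 0;
    }

With no background writer, the two calls end up flushing essentially the same data, which is why a benchmark without concurrent writes cannot tell them apart.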

That said, let's look at how this actually works on Mac OS. Unfortunately, the apfs driver does not appear to be open source, but the HFS+ driver is. Reading the relevant pieces of code in HFS+, let me start by saying it merits a facepalm: F_FULLFSYNC and F_BARRIERFSYNC are different, but they both might as well be variants of the Linux syncfs(). For good measure, I also looked at how this is done in the MacOS ZFS driver: its fsync() operation operates at the level of the mount point, not the individual file.
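For completeness, here is a minimal userspace sketch of how the different durability levels are requested on macOS (again my illustration, with an invented file name; it is neither the benchmark nor the kernel code discussed above):

    /* Minimal sketch: three ways to "sync" a file on macOS. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("durable.dat", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char buf[] = "must survive power loss\n";
        (void)write(fd, buf, strlen(buf));

        /* Plain fsync(): sends the data to the drive, but on macOS it
         * does not ask the drive to flush its cache. */
        if (fsync(fd) != 0) perror("fsync");

        /* F_FULLFSYNC: fsync, then ask the drive to flush its cache. */
        if (fcntl(fd, F_FULLFSYNC) == -1) perror("fcntl(F_FULLFSYNC)");

    #ifdef F_BARRIERFSYNC
        /* F_BARRIERFSYNC: fsync, then issue a barrier to the drive. */
        if (fcntl(fd, F_BARRIERFSYNC) == -1) perror("fcntl(F_BARRIERFSYNC)");
    #endif

        close(fd);
        return 0;
    }

The #ifdef guard only keeps the sketch compiling against SDKs that do not define F_BARRIERFSYNC; it says nothing about what either command does once it reaches the filesystem, which is exactly the question the code reading above tries to answer.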
