On other operating systems and in ZFS, fsync() and fdatasync() flush volatile write caches so that you can't lose recent writes if the power goes out. In UFS, they don't. High-end server equipment with non-volatile cache doesn't have this problem because the controller/drive cache survives power loss (NVRAM, batteries, supercaps), so you might not always want the extra flushes. Consumer and prosumer equipment, on the other hand, might only have enough reserve power to protect its own metadata on power loss, and for cloud/rental systems with many kinds of virtualised storage, who really knows?
This is just a proof-of-concept patch to see what others think of the idea and maybe get some clues from experts about whether this is the right way to go about it. The only control for now is a rather blunt vfs.ffs.nocacheflush sysctl (following the naming pattern from ZFS), but I guess we might also want a mount option, and some smart way to figure out from GEOM whether the flush is necessary at all.
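To make that concrete, the knob could be nothing more than an ordinary integer sysctl hanging off the existing vfs.ffs node, along the lines of the sketch below (not the actual patch; the CTLFLAG choice and description string are guesses):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    int ffs_nocacheflush = 0;
    SYSCTL_DECL(_vfs_ffs);
    SYSCTL_INT(_vfs_ffs, OID_AUTO, nocacheflush, CTLFLAG_RWTUN,
        &ffs_nocacheflush, 0,
        "Skip BIO_FLUSH in fsync()/fdatasync() (assume non-volatile write cache)");

With CTLFLAG_RWTUN it could be set at runtime with sysctl(8) or from loader.conf as a boot-time tunable.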
On my SK Hynix storage in a ThinkPad, with an 8KB block size file system, I can open(O_DSYNC) and then pwrite(..., 8192, 0) about 49.2k times/sec with vfs.ffs.nocacheflush=1 (same as unpatched), and with vfs.ffs.nocacheflush=0 it drops to about 2.5k/sec, or much lower if there is other activity on the device. That's in the right ballpark based on other operating systems on similar hardware (Linux xfs, Windows ntfs). The writes appear via dtrace/dwatch -X io as WRITE followed by FLUSH.
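For anyone who wants to reproduce the numbers, the test is roughly the loop below (a simplified sketch; the file name, 10-second run time and reporting are arbitrary, and a real run would put the file on an 8KB-block UFS file system and toggle vfs.ffs.nocacheflush between runs):

    #include <sys/time.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct timeval start, now, elapsed;
            char buf[8192];
            double secs;
            long count = 0;
            int fd;

            memset(buf, 'x', sizeof(buf));
            /* O_DSYNC makes every pwrite() wait for the data to be durable. */
            fd = open("testfile", O_RDWR | O_CREAT | O_DSYNC, 0644);
            if (fd == -1)
                    err(1, "open");
            gettimeofday(&start, NULL);
            do {
                    /* Overwrite the same 8KB block at offset 0 each time. */
                    if (pwrite(fd, buf, sizeof(buf), 0) != sizeof(buf))
                            err(1, "pwrite");
                    count++;
                    gettimeofday(&now, NULL);
                    timersub(&now, &start, &elapsed);
            } while (elapsed.tv_sec < 10);
            secs = elapsed.tv_sec + elapsed.tv_usec / 1e6;
            printf("%.1fk O_DSYNC writes/sec\n", count / secs / 1000.0);
            close(fd);
            return (0);
    }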
On other OSes O_DSYNC is sometimes handled differently, using FUA writes rather than plain writes followed by flushes, to avoid having to wait for other incidental data sitting in the device cache; but that seems to be a whole separate can of worms (it often doesn't really work on consumer gear), so in this initial experiment I'm just using BIO_FLUSH for O_DSYNC as well.
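In code terms that means the fsync/fdatasync path just chases the data writes with a BIO_FLUSH to the device under the file system. Conceptually it looks something like this hedged sketch, where ffs_flush_devcache() is a made-up helper name, ffs_nocacheflush is the knob sketched above, and exactly where in the fsync path it would be called (and how errors propagate) is glossed over:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bio.h>
    #include <geom/geom.h>
    #include <ufs/ufs/ufsmount.h>

    extern int ffs_nocacheflush;        /* the knob sketched above */

    /*
     * Ask the GEOM provider under the file system to flush its volatile
     * write cache.  g_io_flush() builds a BIO_FLUSH bio, sends it down
     * with g_io_request() and waits for completion; that's the FLUSH
     * that shows up after the WRITE in the dtrace/dwatch io output.
     */
    static int
    ffs_flush_devcache(struct ufsmount *ump)
    {
            if (ffs_nocacheflush)
                    return (0);
            return (g_io_flush(ump->um_cp));
    }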