Initially, zfs_getpages() is provided with an array of busied pages by the vnode pager. It then tries to acquire the range lock; if a concurrent zfs_write() is running and zfs_getpages() fails to acquire that range lock, it "unbusies" the pages to avoid a deadlock with zfs_write(). After that, it grabs the pages again, retries to acquire the range lock, and so on.
Once it has acquired the range lock, it filters out the pages that are already valid, then copies DMU data into the remaining invalid pages.
The problem is that the freshly allocated zero'd pages it grabbed itself are marked as valid. They are therefore skipped by the second part of the function, and DMU data is never copied into them. This causes mapped pages to contain zeros instead of the expected file content.
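In code, the pre-patch flow looks roughly like this. This is a condensed sketch of zfs_getpages() (module/os/freebsd/zfs/zfs_vnops_os.c), not the actual code: declarations, object locking, read-ahead/read-behind handling, and error handling are omitted, and the valid-page filtering is simplified.

    for (;;) {
        /* Non-blocking attempt first: the pages are still busied. */
        lr = zfs_rangelock_tryenter(&zp->z_rangelock, start, len,
            RL_READER);
        if (lr != NULL)
            break;
        /* A concurrent zfs_write() holds the lock: unbusy the pages */
        /* to avoid deadlocking with it... */
        for (i = 0; i < count; i++)
            vm_page_xunbusy(ma[i]);
        /* ...wait for the writer to release the range... */
        lr = zfs_rangelock_enter(&zp->z_rangelock, start, len,
            RL_READER);
        zfs_rangelock_exit(lr);
        /*
         * ...then grab the pages again and retry.  BUG: with
         * VM_ALLOC_ZERO, pages allocated here come back zero'd
         * and marked fully valid.
         */
        (void) vm_page_grab_pages(object, OFF_TO_IDX(start),
            VM_ALLOC_NORMAL | VM_ALLOC_WAITFAIL | VM_ALLOC_ZERO,
            ma, count);
    }

    /* Valid pages are filtered out, including the zero'd ones... */
    while (count > 0 && vm_page_all_valid(ma[0])) {
        ma++;
        count--;
    }
    /* ...so DMU data is copied only into the remaining invalid pages. */
    error = dmu_read_pages(os, zp->z_id, ma, count, &pgsin_b, &pgsin_a,
        last_size);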
This was discovered while working on RabbitMQ on FreeBSD. I could reproduce the problem easily with the following commands:
    git clone https://github.com/rabbitmq/rabbitmq-server.git
    cd rabbitmq-server/deps/rabbit
    gmake distclean-ct RABBITMQ_METADATA_STORE=mnesia \
        ct-amqp_client t=cluster_size_3:leader_transfer_stream_send
The testsuite fails because a sendfile(2) can happen concurrently with a write(2) on the same file. This leads to sendfile(2), or a read(2) issued after the sendfile, sending/returning data containing zeros, which makes a function in the testsuite crash.
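The race window is small and triggering it reliably depends on timing, but the shape of a minimal reproducer is simple. The sketch below is hypothetical (it is not the RabbitMQ testsuite; the file name, chunk size, and iteration count are arbitrary): one thread write(2)s a non-zero pattern while the main thread faults the same pages in through mmap(2), which reaches zfs_getpages() the same way sendfile(2) does. Any zero byte observed indicates the bug.

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <err.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK   4096
    #define ROUNDS  1024

    static int fd;

    static void *
    writer(void *arg)
    {
        char buf[CHUNK];

        (void)arg;
        memset(buf, 'x', sizeof(buf));  /* a pattern with no zero bytes */
        for (int i = 0; i < ROUNDS; i++)
            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
                err(1, "write");
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t t;
        struct stat sb;

        fd = open("repro.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd == -1)
            err(1, "open");
        if (pthread_create(&t, NULL, writer, NULL) != 0)
            errx(1, "pthread_create");

        for (int i = 0; i < ROUNDS; i++) {
            /* Busy-wait until the writer has produced chunk i. */
            do {
                if (fstat(fd, &sb) == -1)
                    err(1, "fstat");
            } while (sb.st_size < (off_t)(i + 1) * CHUNK);

            /* Fault the chunk in; this goes through zfs_getpages(). */
            char *p = mmap(NULL, CHUNK, PROT_READ, MAP_SHARED, fd,
                (off_t)i * CHUNK);
            if (p == MAP_FAILED)
                err(1, "mmap");
            for (int j = 0; j < CHUNK; j++)
                if (p[j] == '\0')
                    errx(1, "zero byte in chunk %d", i);
            munmap(p, CHUNK);
        }
        pthread_join(t, NULL);
        return (0);
    }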
The patch consists of no longer setting the VM_ALLOC_ZERO flag when zfs_getpages() grabs the pages again. The last page is then zero'd if it is invalid, because it may be only partially filled with the end of the file content. The other pages are either valid (and will be skipped) or will be entirely overwritten by the file content.
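A condensed sketch of the idea, not the literal diff (see the OpenZFS commit referenced below); declarations and locking are omitted, and pmap_zero_page() stands in for whatever zeroing primitive the real patch uses:

    /* Grab the pages again, without VM_ALLOC_ZERO this time, so that */
    /* freshly allocated pages stay invalid and receive DMU data. */
    (void) vm_page_grab_pages(object, OFF_TO_IDX(start),
        VM_ALLOC_NORMAL | VM_ALLOC_WAITFAIL, ma, count);

    /*
     * Only the last page can end up partially filled (by the tail of
     * the file), so pre-zero it if it is still invalid.  Every other
     * invalid page will be entirely overwritten with file content,
     * and valid pages are skipped anyway.
     */
    if (!vm_page_all_valid(ma[count - 1]))
        pmap_zero_page(ma[count - 1]);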
This patch was submitted to OpenZFS as openzfs/zfs#17851, where it was approved.
Obtained from: OpenZFS
OpenZFS commit: 8a3533a366e6df2ea770ad7d80b7b68a94a81023
MFC after: 3 days