This was reported on the mailing list. On 32-bit platforms, fsck dies with a malloc() of size 0xa5a5a5a5 because the sblock hasn't yet been initialized by sbread(), so on HEAD it is filled with malloc junk. The result is an error when malloc() fails. On 64-bit platforms it's still malloc junk, but the size is no longer preposterous, so malloc() dutifully tries to allocate that much data (and, due to junk filling, tries to touch it all) and eventually runs the system out of swap. On stable, where allocations aren't junk-filled, these probably become calls to malloc(0) and subsequent buffer overflows.
On most 64-bit systems it probably works, as it's "only" ~2.6 GiB (0xa5a5a5a5 bytes) being allocated. It'll be slow and bloated, but most people's primary systems can easily handle that. It's only once you get to small VMs, or emulation under QEMU, that you start to run out of memory and swap. Presumably the original patch was tested on a system with enough memory to cope.
I reverted the breaking commits until a solution is found.
jrtc27 summed it up pretty well.
I only tested the original patch on 64-bit, where the malloc() call succeeded because there was ample memory.
Had I tested this on 32-bit, I would have caught it during testing - my fault.
If you're doing that, why not just make bufinit pass MAXBSIZE to malloc instead, rather than initialising the in-memory superblock with a dummy value? The latter seems dangerous; it's nice to have malloc junk catch code that reads parts of the superblock before they've been read from disk.