RISC-V's pmap uses pvh_global_lock (acquired at the start of the functions that call reclaim_pv_chunk()), so its reclaim function does not need the marker system that AMD64 and ARM64 adopted to compensate for the removal of pvh_global_lock and to prevent data corruption.
That issue is explained in these old AMD64 commits:
https://github.com/freebsd/freebsd-src/commit/ca1f624517bee47e896c04a01f97d2a4bf55b7a9
https://github.com/freebsd/freebsd-src/commit/ad4e4ae591ec267918ac74bbd158626d4961b0f8
RISC-V's reclaim function uses an SLIST, like the other architectures whose pmap code uses SLIST.
Likewise, it uses an rwlock lockp, like the other architectures that have one. The "Avoid deadlock" part is similar to ARM32's code (which has no marker system), but adds a RELEASE_PV_LIST_LOCK(lockp) call.
A pc_is_free() call is used (as on amd64 and arm64, but not arm32) to keep the code concise.
RISC-V's pmap_unuse_pt() leads to a vm_wire_sub(1) call, just like ARM32's pmap_unuse_pt2(). Thus the riscv reclaim function also follows ARM32 in calling vm_wire_add(1) at the end and passing "false" as the second argument to vm_page_free_pages_toq().
I tested this reclaim function with the "stress" program on a VisionFive 2 board.
Without modifications, I only managed to jam the system: it appeared to slow down and get stuck at a point where vm_page_alloc_noobj_domain() and vm_phys_alloc_pages() were being called constantly.
Then I modified vm_reserv_reclaim_inactive() in vm_reserv.c to always return false. After this modification, reclaim_pv_chunk() was called at least once; it incremented the "freed" variable (to 7, 8, and 168 in different tests) and returned. The last test, with printfs added, confirmed that execution also reached the "Entire chunk is free" block and the vm_wire_sub(1) call (the latter is elsewhere in pmap.c).
I used stress commands like "protect -i stress --vm 20 --vm-bytes 12846M --timeout 300s", and I also used the protect command to prevent other processes from being OOM-killed.