On amd64, these changes reduce the size of the compiled lookup_ge and lookup_le routines by 112 bytes each.
A buildworld test suggests that this change speeds up lookup_le calls by 21.6% and slows lookup_ge calls by 4.4%, measured in cycles per call.
original timing:
52450.199u 1452.606s 59:09.38 1518.6% 73478+3095k 121089+33120io 110807pf+0w
vm.radix.le_cycles: 74538053872
vm.radix.le_calls: 341220107
le_cycles/call: 218.44566701340375
vm.radix.ge_cycles: 2167673543
vm.radix.ge_calls: 3540959
ge_cycles/call: 612.1713194081038
modified timing:
52599.911u 1454.881s 59:17.88 1519.2% 73479+3096k 121017+34297io 110758pf+0w
vm.radix.le_cycles: 58436422956
vm.radix.le_calls: 341219862
le_cycles/call: 171.25738992298167
vm.radix.ge_cycles: 2264687629
vm.radix.ge_calls: 3544083
ge_cycles/call: 639.0052459268026
le cycles/call ratio: 171.25738992298167 / 218.44566701340375 = 0.784 (21.6% reduction)
ge cycles/call ratio: 639.0052459268026 / 612.1713194081038 = 1.044 (4.4% increase)
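For reference, the per-call figures are just total cycles divided by call counts, with the ratios comparing modified to original; a quick sketch of that arithmetic in Python, using the counter values from the sysctl output above:

```python
# Cycle and call counts copied from the vm.radix sysctl output above.
orig_le = 74538053872 / 341220107   # original le cycles/call (~218.4)
mod_le = 58436422956 / 341219862    # modified le cycles/call (~171.3)
orig_ge = 2167673543 / 3540959      # original ge cycles/call (~612.2)
mod_ge = 2264687629 / 3544083       # modified ge cycles/call (~639.0)

le_ratio = mod_le / orig_le  # ~0.784 -> ~21.6% fewer cycles per lookup_le call
ge_ratio = mod_ge / orig_ge  # ~1.044 -> ~4.4% more cycles per lookup_ge call
print(le_ratio, ge_ratio)
```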
Peter, can you test this?