
Add atomic_fcmpset_*() inlines for MIPS
Closed, Public

Authored by kan on Jan 30 2017, 11:56 PM.

Details

Summary

atomic_fcmpset_*() is analogous to atomic_cmpset_*(), but saves off the
value read from the target memory location into the 'old' pointer.

Requested by: mjg
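
For reference, a minimal C sketch of the contract described above (illustrative only; the function and variable names are hypothetical, and a real implementation performs the read-compare-store atomically, on MIPS via an ll/sc loop):

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the fcmpset contract: store 'newv' into *p iff *p equals
 * *old; on failure, hand the observed value back through 'old'.
 */
static inline bool
fcmpset_32_sketch(volatile uint32_t *p, uint32_t *old, uint32_t newv)
{
	uint32_t observed = *p;		/* a real version reads this under ll */

	if (observed == *old) {
		*p = newv;		/* a real version stores this under sc */
		return (true);
	}
	*old = observed;		/* save off the read value */
	return (false);
}

/*
 * The payoff: a CAS loop no longer needs a separate re-read of the
 * target on each failed iteration.
 */
static void
set_low_bit(volatile uint32_t *word)
{
	uint32_t v = *word;

	while (!fcmpset_32_sketch(word, &v, v | 0x1))
		;	/* 'v' already holds the freshly observed value */
}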

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

kan retitled this revision to "Add atomic_fcmpset_*() inlines for MIPS".
kan updated this object.
kan edited the test plan for this revision. (Show Details)
kan added reviewers: MIPS, imp, adrian, brooks, br, sgalabov, sson.
kan set the repository for this revision to rS FreeBSD src repository - subversion.

Subject to me being wrong to worry about branch delay slots, this looks fine.

This was copied verbatim from the plain cmpset equivalent; should we be worried about those too? If the delay slot were executed there, I do not see how we could ever end up with %r2 NOT being 0.
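
For context, a simplified sketch of the ll/sc loop shape in question (operand numbers and labels are illustrative, not the exact inline asm from the diff). Every MIPS branch and jump has a delay slot; in the assembler's default reorder mode the slot is filled automatically, but under .set noreorder the instruction written after the branch executes in the slot:

#include <stdint.h>

static inline uint32_t
cmpset_32_sketch(volatile uint32_t *p, uint32_t cmpval, uint32_t newval)
{
	uint32_t ret;

	__asm__ __volatile__(
	    "1:\n"
	    "\tll\t%0, %1\n"	/* load-linked the current value */
	    "\tbne\t%0, %2, 2f\n"	/* mismatch: fail */
	    "\tmove\t%0, %3\n"	/* under noreorder this sits in bne's delay
				   slot; harmless, since 2: overwrites %0 */
	    "\tsc\t%0, %1\n"	/* store-conditional: 1 on success, 0 on failure */
	    "\tbeqz\t%0, 1b\n"	/* reservation lost: retry */
	    "\tj\t3f\n"		/* success: skip the failure path (under
				   noreorder a jump in beqz's delay slot would
				   itself be invalid) */
	    "2:\n"
	    "\tli\t%0, 0\n"	/* under noreorder this would sit in j's delay
				   slot and zero the success result too, which
				   is the scenario described above */
	    "3:\n"
	    : "=&r" (ret), "+m" (*p)
	    : "r" (cmpval), "r" (newval)
	    : "memory");
	return (ret);
}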

imp edited edge metadata.

Looks good to my eye. Maybe it's time to use %name instead of %#, but that's a nit.

This revision is now accepted and ready to land. (Feb 1 2017, 1:11 AM)

It looks like the compiled code gets relaxed by the assembler so that the required nops end up in there. I am not convinced we should depend on this behavior in the long run, but pretty much every atomic inline depends on it now, so I think I am going to commit this version.

0xffffffff8020945c <__mtx_lock_flags+232>:      ll      v1,16(s0)       # load-linked the lock word
0xffffffff80209460 <__mtx_lock_flags+236>:      bne     v1,v0,0x80209480 <__mtx_lock_flags+268>  # mismatch: take the failure path
0xffffffff80209464 <__mtx_lock_flags+240>:      nop                     # bne delay slot, filled in by assembler relaxation
0xffffffff80209468 <__mtx_lock_flags+244>:      move    v1,a2           # new value to store
0xffffffff8020946c <__mtx_lock_flags+248>:      sc      v1,16(s0)       # store-conditional: v1 = 1 on success, 0 on failure
0xffffffff80209470 <__mtx_lock_flags+252>:      beqz    v1,0x8020945c <__mtx_lock_flags+232>  # reservation lost: retry from ll
0xffffffff80209474 <__mtx_lock_flags+256>:      nop                     # beqz delay slot, again inserted by the assembler
0xffffffff80209478 <__mtx_lock_flags+260>:      j       0x80209488 <__mtx_lock_flags+276>  # success: skip the failure path
0xffffffff8020947c <__mtx_lock_flags+264>:      nop                     # j delay slot
0xffffffff80209480 <__mtx_lock_flags+268>:      sw      v1,24(sp)       # failure: save the observed value through the 'old' pointer
0xffffffff80209484 <__mtx_lock_flags+272>:      li      v1,0            # failure result
This revision was automatically updated to reflect the committed changes.
In D9391#194255, @kan wrote:

It looks like the compiled code gets relaxed by the assembler so that the required nops end up in there. [...]

Looks good. I think we should adjust the assembler flags to reject code like this and purge it from the tree.
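
One possible shape for that (a sketch assuming only the standard GNU as MIPS directives .set push/noreorder/pop, not what the tree actually adopted): put the block in noreorder mode and write every delay-slot nop explicitly, so the code no longer depends on relaxation and a missing nop becomes a visible bug rather than something the assembler papers over:

#include <stdint.h>

static inline uint32_t
cmpset_32_noreorder_sketch(volatile uint32_t *p, uint32_t cmpval,
    uint32_t newval)
{
	uint32_t ret;

	__asm__ __volatile__(
	    "\t.set\tpush\n"
	    "\t.set\tnoreorder\n"	/* assembler no longer fills delay slots */
	    "1:\n"
	    "\tll\t%0, %1\n"	/* load-linked the current value */
	    "\tbne\t%0, %2, 2f\n"	/* mismatch: fail */
	    "\tnop\n"		/* explicit bne delay slot */
	    "\tmove\t%0, %3\n"	/* new value to store */
	    "\tsc\t%0, %1\n"	/* store-conditional: 1 on success, 0 on failure */
	    "\tbeqz\t%0, 1b\n"	/* reservation lost: retry */
	    "\tnop\n"		/* explicit beqz delay slot */
	    "\tj\t3f\n"		/* success: skip the failure path */
	    "\tnop\n"		/* explicit j delay slot */
	    "2:\n"
	    "\tli\t%0, 0\n"	/* failure result */
	    "3:\n"
	    "\t.set\tpop\n"
	    : "=&r" (ret), "+m" (*p)
	    : "r" (cmpval), "r" (newval)
	    : "memory");
	return (ret);
}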