
Implement a lock-free way to look up modules when unwinding the stack.
Needs Review · Public

Authored by howard0su_gmail.com on Nov 22 2015, 1:09 AM.

Details

Reviewers
markj
gnn
Group Reviewers
Contributor Reviews (src)
Summary

Implement a lock-free way to look up modules when unwinding the stack.

The idea is to use an SLIST to store all of the necessary information about
the modules. In unwind context, the information can then be accessed without
taking a lock.
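
As a hedged illustration of the shape of that list (the struct and field names here are illustrative, based on the fields discussed in the review below, not the actual patch):

struct module_info {
	uintptr_t	address;	/* module base; 0 marks a retired entry */
	uintptr_t	start;		/* unwind index (exidx) start */
	size_t		size;		/* module text size */
	SLIST_ENTRY(module_info) link;
};
static SLIST_HEAD(, module_info) module_list =
    SLIST_HEAD_INITIALIZER(module_list);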

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

howard0su_gmail.com retitled this revision to "Implement a lock-free way to look up modules when unwinding the stack".
howard0su_gmail.com updated this object.
howard0su_gmail.com edited the test plan for this revision.

First version for early review. Needs more testing and conversion to the atomic(9) functions.

sys/arm/arm/unwind.c
154

We need to use atomic_store_rel_ptr to store m->address, otherwise the hardware may reorder the previous stores.
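
As a hedged illustration of the publish side (the field names follow this review's discussion, and the exidx_* locals are illustrative, not the actual patch):

m->start = exidx_start;
m->size = exidx_size;
/* Release store: the stores above become visible before m->address does. */
atomic_store_rel_ptr(&m->address, module_address);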

181

I think this is safe without using an atomic function, as word-sized, aligned writes are atomic.

233–234

There is an issue here if, between reading m->address and using it, the module we are looking at is unloaded and a new module reuses the entry. I think the following code should be safe. It does assume the module we are interested in doesn't get unloaded out from under us; I think this is a safe assumption, since if it were unloaded we would have problems anyway when the thread returns to a frame in that module, as there would be nothing left to run.

uintptr_t a;

do {
  a = atomic_load_acq_ptr(&m->address);
  if (a == 0)
    break;

  start = m->start;
  size = m->size;
  /* Ensure m->start and m->size are loaded before m->address is re-read */
  atomic_thread_fence_acq();
} while (a != m->address);

if (a != 0 && a <= addr && addr < (a + size)) {
  ...
}
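
In effect this is a sequence-lock style retry: m->address doubles as a generation marker, so a reader that races with an unload and entry reuse simply discards its snapshot of start and size and retries.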
howard0su_gmail.com edited edge metadata.

Implement a lock-free way to look up modules when unwinding the stack.

First version for early review. Needs more testing and conversion to the atomic(9) functions.

howard0su_gmail.com edited edge metadata.

Declare a type for malloc(9).
Use the kernel exidx table if we cannot find a suitable ko matching the index.

gnn edited edge metadata.

I agree with Andrew's comments, but once they're addressed this should be good to go in. It is always a good idea to add markj@ to these reviews, as I just did. Also, what hardware was this tested on?

This revision is now accepted and ready to land. Dec 9 2015, 4:04 PM
In D4244#93937, @gnn wrote:

I agree with Andrew's comments, but once they're addressed this should be good to go in. It is always a good idea to add markj@ to these reviews, as I just did. Also, what hardware was this tested on?

I only tested on my BBB. I have an sh script that keeps loading and unloading several ko files; in another window, I run a dtrace script with a profile-100 probe to print out stacks. I don't have access to an SMP ARM box (I have a Jetson, but I have not been able to load BSD onto it yet).

Most of my observations about the code itself are style nits, but generally this solution seems heavyweight. The problem, as I understand it, is that we're currently calling linker_file_foreach from the stack action handler, which is of course unsafe since this may occur in arbitrary contexts.

Similar problems have been solved elsewhere by introducing unlocked variants of linker functions. These are the linker_*_ddb() functions, which are misnamed since they're called from places other than DDB (e.g., witness(4)). It might be that we just want to add a linker_file_foreach_unlocked() or so and use that instead.
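
A hedged sketch of such a variant, modeled on linker_file_foreach(); the list name and link field are my recollection of the kern_linker.c internals and should be treated as assumptions:

static int
linker_file_foreach_unlocked(linker_predicate_t *predicate, void *context)
{
	linker_file_t lf;
	int retval = 0;

	/* Same walk as linker_file_foreach(), minus the lock. */
	TAILQ_FOREACH(lf, &linker_files, link) {
		retval = predicate(lf, context);
		if (retval != 0)
			break;
	}
	return (retval);
}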

This is unsafe since a KLD may be unloaded as we're iterating over the linker file list, but the approach here is racy in the same way, isn't it?

sys/arm/arm/unwind.c
119–125

These should be static.

133

The indentation on this line and the next looks wrong.

146

This should be declared at the beginning of the function; the first argument should be sizeof(*newentry).
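
For illustration (M_UNWIND is a hypothetical malloc type, matching the "Declare a type for malloc(9)" update above, and newentry's type is assumed):

struct module_info *newentry;

/* Allocated before taking the module list lock, so M_WAITOK is fine. */
newentry = malloc(sizeof(*newentry), M_UNWIND, M_WAITOK | M_ZERO);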

233–234

I don't really agree that this is a safe assumption: the kldunload_try event fires before any module unload/shutdown/quiesce handlers are called. At this point, threads may still be validly executing code within the module.

This might be addressed by using the kldunload event handler rather than kldunload_try; I'm not sure why the latter is being used here, since it never tries to veto the unload.
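
For illustration, switching to the kld_unload event might look like the following; the handler signature is my recollection of sys/eventhandler.h and is an assumption:

static void
unwind_kld_unload(void *arg __unused, const char *name __unused,
    caddr_t address, size_t size)
{
	/* Retire the list entry covering [address, address + size). */
}

/* Registered from an init routine: */
EVENTHANDLER_REGISTER(kld_unload, unwind_kld_unload, NULL,
    EVENTHANDLER_PRI_ANY);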

This patch also tries to improve the performance of the ARM unwinder. As you can see from the code, every *frame* unwind needs access to _idx_start/_idx_end, so looking up the symbol table of each module adds a lot of CPU time. There are two approaches: 1) what you see in this change; 2) storing idx_start/idx_end in the linker_file structure. I finally chose 1 due to the concern about adding MD code to the kernel linker.

I will change from unload_try to unload. I didn't think about that.

Since this function is used not only in DDB but also in dtrace, it needs to be ready for a production environment. To completely guarantee safety, we may need a spinlock to synchronize unwind against unload_try?

Thoughts?

Seems like a reasonable smoke test to me. Please make the requested changes and commit.

This patch also tries to improve the performance of the ARM unwinder. As you can see from the code, every *frame* unwind needs access to _idx_start/_idx_end, so looking up the symbol table of each module adds a lot of CPU time. There are two approaches: 1) what you see in this change; 2) storing idx_start/idx_end in the linker_file structure. I finally chose 1 due to the concern about adding MD code to the kernel linker.

Hm, I see. I'll take a closer look at that. (I know nothing about stack unwinding on ARM.)

I will change from unload_try to unload. I didn't think about that.

Since this function is used not only in DDB but also in dtrace, it needs to be ready for a production environment. To completely guarantee safety, we may need a spinlock to synchronize unwind against unload_try?

Something like that. At the moment, the code isn't SMP-safe since the unwinder may access an element of your module list as it's being freed by another thread. The problem with using a spin mutex (or any kind of lock) here is that you may get recursion if you do something like lockstat:::spin-acquire {stack();}. I don't really see a good solution for this problem other than to completely prohibit KLD unloads while DTrace probes are enabled. We already do that selectively: see the nenabled field in struct linker_file.
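
For context, the selective mechanism mentioned here amounts to a check in the unload path; this fragment is a sketch based on the review text, not the exact kern_linker.c code:

/* Refuse to unload while DTrace probes are enabled in this file. */
if (lf->nenabled > 0)
	return (EBUSY);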

In D4244#94248, @markj wrote:

This patch also tries to improve the performance of the ARM unwinder. As you can see from the code, every *frame* unwind needs access to _idx_start/_idx_end, so looking up the symbol table of each module adds a lot of CPU time. There are two approaches: 1) what you see in this change; 2) storing idx_start/idx_end in the linker_file structure. I finally chose 1 due to the concern about adding MD code to the kernel linker.

Hm, I see. I'll take a closer look at that. (I know nothing about stack unwinding on ARM.)

A dtrace ustack() implementation is still missing on ARM due to a similar unwind problem, and it is even worse in userland: we cannot get the unwinding tables as reliably as the kernel can (they may not be trustworthy, or may be missing entirely if the binary doesn't contain them). I am looking at alternative solutions like http://www.mcternan.me.uk/ArmStackUnwinding/. Just FYI, not related to this change.

Something like that. At the moment, the code isn't SMP-safe since the unwinder may access an element of your module list as it's being freed by another thread. The problem with using a spin mutex (or any kind of lock) here is that you may get recursion if you do something like lockstat:::spin-acquire {stack();}. I don't really see a good solution for this problem other than to completely prohibit KLD unloads while DTrace probes are enabled. We already do that selectively: see the nenabled field in struct linker_file.

You are absolutely right. Since this will be called from probe context, we should never use any mutex, not even a spin mutex. I will look further at nenabled.

This revision now requires review to proceed. Dec 10 2015, 12:25 PM

I will run my smoke tests again tonight.

Tonight's testing didn't go very well. I hit a crash.
FSR=00000007, FAR=fffffff8, spsr=20000193
r0 =de3abb14, r1 =00000001, r2 =00000483, r3 =de3abb24
r4 =8483b0b0, r5 =00000000, r6 =c0751cc4, r7 =fffffff8
r8 =00000084, r9 =00000084, r10=00000001, r11=de3abb5c
r12=c07641cc, ssp=de3ab97c, slr=de3abb60, pc =c06493f8

[ thread pid 970 tid 100074 ]
Stopped at unwind_stack_one+0x3cc: ldr r6, [r7], #0x004
db>

/* Load the registers */
for (reg = 4; mask && reg < 16; mask >>= 1, reg++) {
        if (mask & 1) {

3c4: e3120001 tst r2, #1 ; 0x1
3c8: 0a000007 beq 3ec <unwind_stack_one+0x3ec>

state->registers[reg] = *vsp++;

3cc: e4976004 ldr r6, [r7], #4

state->update_mask |= 1 << reg;

/* If we have updated SP keep its value */
if (reg == SP)

3d0: e3550009 cmp r5, #9 ; 0x9

This is not necessarily related to this change, but I will take a further look.

howard0su_gmail.com edited edge metadata.

More style(9) fixes; fix the unload prototype.

sys/arm/arm/unwind.c
153

Should be a /* ... */ comment

166

This is unsafe, as SLIST_INSERT_HEAD doesn't have any memory ordering guarantees. We need to ensure the new entry is observable and its next pointer is valid before setting the new head.

247

If the statement doesn't fit on one line the second line should be indented 2 tabs followed by 4 spaces.

howard0su_gmail.com added inline comments.
sys/arm/arm/unwind.c
153

I will fix this.

166

This is done within the scope of mtx_modules; unload will not happen here.

sys/arm/arm/unwind.c
174

I'm not concerned about unload; my worry is with find_index. It iterates over the list without the lock; however, if the head is updated before the next pointer, it may only see a single item in the list.

To fix this we would need to manually write the two items with a barrier after setting the next pointer, but before setting the new head.
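
A hedged sketch of that ordering, open-coding SLIST_INSERT_HEAD with a release store (names follow the earlier illustration, not the actual patch):

/* Fill in the new entry and its next pointer first. */
newentry->link.sle_next = SLIST_FIRST(&module_list);
/* Publish: entry contents and next pointer visible before the new head. */
atomic_store_rel_ptr((volatile uintptr_t *)&SLIST_FIRST(&module_list),
    (uintptr_t)newentry);

An atomic_thread_fence_rel() between the two plain stores would achieve the same ordering.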

howard0su_gmail.com marked an inline comment as done.

Address review comments to use atomic(9) to update SLIST_HEAD