
Implement kernel code coverage (kcov)
Closed, Public

Authored by andrew on Mar 6 2018, 8:37 PM.

Details

Summary

This patch implements a basic code coverage mechanism in the kernel, to be used for randomized input testing.

Currently, only a single coverage mode is supported (trace-pc).
When enabled, the compiler will insert a call to the coverage
function at each edge, which will record the PC. This
data can later be consumed to trace a path taken through a
system call. Other coverage modes may be added in the future.
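
As a rough illustration (not code from this patch), the hook the compiler emits calls to might look like the sketch below. The function name follows the sanitizer ABI; get_kinfo, the kcov_info fields, and the bounds handling are illustrative assumptions only.

void
__sanitizer_cov_trace_pc(void)
{
    struct kcov_info *info;
    uint64_t idx;

    /* Sketch only: assumes the patch's per-thread lookup and a
     * kcov_info holding a mapped buffer (buf) and its size in entries. */
    info = get_kinfo(curthread);
    if (info == NULL)
        return;

    /* Entry 0 of the buffer holds the number of records written. */
    idx = info->buf[0];
    if (idx + 1 >= info->entries)
        return;                 /* buffer full, drop this sample */

    /* Record the caller's PC and bump the count. */
    info->buf[idx + 1] = (uint64_t)__builtin_return_address(0);
    info->buf[0] = idx + 1;
}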

The intent is to provide the coverage collection functionality
required to support the kernel fuzzing utility syzkaller,
which makes use of this coverage interface to perform randomized
input testing on system calls, ultimately to expose bugs in the
kernel.

It provides a character device to interface with the coverage data,
and currently only supports coverage collection for one thread at
a time.
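
As a hedged sketch of how a userspace consumer might drive that device: KIOENABLE and KIODISABLE appear later in this review, while the device path, KIOSETBUFSIZE, KCOV_MODE_TRACE_PC, and the exact argument conventions are assumptions based on the eventual kcov(4) interface and may not match every revision of this patch.

#include <sys/ioctl.h>
#include <sys/kcov.h>
#include <sys/mman.h>

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define ENTRIES (16 * 1024)     /* arbitrary buffer size, in entries */

int
main(void)
{
    uint64_t *cover;
    int fd;

    fd = open("/dev/kcov", O_RDWR);
    if (fd == -1)
        err(1, "open /dev/kcov");

    /* Size the coverage buffer (in entries) and map it into userspace. */
    if (ioctl(fd, KIOSETBUFSIZE, ENTRIES) == -1)
        err(1, "KIOSETBUFSIZE");
    cover = mmap(NULL, ENTRIES * sizeof(uint64_t),
        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (cover == MAP_FAILED)
        err(1, "mmap");

    /* Enable trace-pc collection for this thread, run something, disable. */
    if (ioctl(fd, KIOENABLE, KCOV_MODE_TRACE_PC) == -1)
        err(1, "KIOENABLE");
    printf("tracing the syscalls behind this printf\n");
    if (ioctl(fd, KIODISABLE) == -1)
        err(1, "KIODISABLE");

    /* The recorded PCs can now be read out of the mapped buffer. */
    munmap(cover, ENTRIES * sizeof(uint64_t));
    close(fd);
    return (0);
}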

The patch is modeled after the analogous commit to the Linux kernel,
which can be viewed here: https://lwn.net/Articles/671640/

See also the clang documentation for trace-pc coverage:
https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-pcs

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

There are a very large number of changes, so older changes are hidden.
sys/kern/kern_kcov.c
186–194 ↗(On Diff #48100)

I'm assuming this is the incrementing of the count for this location?
If not, this storage should probably be per-CPU or use something like counter(9).
Using hand-rolled kernel instrumentation at work, we've had severe CPU caching performance hits due to using global memory for counters everywhere.

andrew added inline comments.
sys/kern/kern_kcov.c
186–194 ↗(On Diff #48100)

It's a per-thread buffer. The first item indicates the number of records. It will be used on a single CPU up until the scheduler moves it. It is expected that userspace will normally read it from the same CPU, as this will normally happen within the same thread; however, this isn't a hard requirement.
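
To make that layout concrete, a consumer might walk the buffer like the sketch below; handle_pc is a placeholder, and note that a later comment in this review proposes exposing the record count via an ioctl instead.

#include <stdint.h>

extern void handle_pc(uint64_t pc);     /* placeholder for the reader's logic */

static void
read_records(const uint64_t *buf)
{
    uint64_t i, n;

    n = buf[0];                 /* first entry: number of records written */
    for (i = 0; i < n; i++)
        handle_pc(buf[i + 1]);  /* PCs follow, one per entry */
}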

I understand the main objective with this change is to enable syzkaller, but is Clang's coverage sanitizer the only way to implement KCOV for this purpose? I ask because we have (and use) Clang's SBCC to get coverage data out of the kernel. It would be nice to not have two KCOV implementations. :)
--ap

In D14599#377439, @alexander_thequery.net wrote:

I ask because we have (and use) Clang's SBCC to get coverage data out of the kernel.

Who's "we" in this context, and do you have a link to the work?

Not sure how I ended up removing subscribers -- readded.

"We" is NetApp; we use this internally to gather code coverage metrics as well as input for profile-guided optimization of the kernel. It is not public, but I think we can push it upstream if there is interest.

The patch works in tandem with Clang's compiler_rt profile library, a subset of which is compiled into a kernel library and linked with the kernel/kmods that need coverage data.

I don't see any reason why this would preclude SBCC-based kernel coverage. The only issue may be if it uses the same device node to communicate with userspace.

I expect we could add the SBCC interface to the code, although I don't know what it requires from the runtime.

I am not saying these cannot coexist, just that SBCC doesn't require FreeBSD to maintain its own trace implementations (i.e. all the __sanitizer_cov_trace* routines). Again, I do not know if SBCC can be used for syzkaller; if it cannot, then this whole discussion is moot. But if we can leverage SBCC, it would be worth exploring.
--ap

Is it possible to make progress? I would really love to see a kcov variant committed which can be used by syzkaller...

I was trying to use this patch with syzkaller, but couldn't get it running. To figure out what is going on, I created a small test program, which results in a reboot of the system. When changing the code such that the ioctl(fd, KIODISABLE, mode) command is executed, the reboot happens. I think the system should not reboot...

Add tests and fix the code based on issues found by them

I was trying to use this patch with syzkaller, but couldn't get it running. To figure out what is going on, I created a small test program, which results in a reboot of the system. When changing the code such that the ioctl(fd, KIODISABLE, mode) command is executed, the reboot happens. I think the system should not reboot...

Can you try with the new patch? There were some issues I found with the tests that should be fixed now.

I was trying to use this patch with syzkaller, but couldn't get it running. To figure out what is going on, I created a small test program, which results in a reboot of the system. When changing the code such that the ioctl(fd, KIODISABLE, mode) command is executed, the reboot happens. I think the system should not reboot...

Can you try with the new patch? There were some issues I found with the tests that should be fixed now.

Sure, I will. It will take a day or two... Thanks for updating the patch!

I was trying to use this patch with syzkaller, but couldn't get it running. To figure out what is going on, I created a small test program, which results in a reboot of the system. When changing the code such that the ioctl(fd, KIODISABLE, mode) command is executed, the reboot happens. I think the system should not reboot...

Can you try with the new patch? There were some issues I found with the tests that should be fixed now.

The system doesn't panic anymore. That is great.

Do you have syzkaller running with coverage enabled? I fixed some obvious issues (Commit), but still can't get syzkaller working with coverage enabled yet... I just compiled the kernel in the qemu instance with your patch applied...

sys/kern/kern_kcov.c
124 ↗(On Diff #52334)

Use KCOV_MAXSIZE instead of KCOV_MAXENTRIES here, too.

sys/sys/kcov.h
38 ↗(On Diff #52334)

Isn't it better to call it KCOV_MAXSIZE, since you use it as the size of the buffer, not the number of entries? Linux uses a number of entries there, so it might be good to be very clear in the name.

I fixed some other small issues in syzkaller and got it into a state where the coverage information is collected initially but then stops working. The reason, I think, is based on the following observation of how the state machines work.
When a thread exits, the following happens:

  • On Linux, the node goes back to KCOV_MODE_INIT state and gets freed, if there are no ref counts.
  • On OpenBSD, the node goes back to KCOV_STATE_READY state, if it is not in KCOV_STATE_DYING state.
  • On FreeBSD, the node goes to KCOV_STATE_DYING state, if it is not in that state.

So on Linux and OpenBSD you can use ioctl(..., KIOENABLE, ...) on a node where the thread has exited. This does not work on FreeBSD. syzkaller makes use of this...
@andrew Are you planning to change the state machine used by FreeBSD to allow this mode of operation, or should I try to change syzkaller?

sys/kern/kern_kcov.c
508 ↗(On Diff #52334)

Changing

if (info->state != KCOV_STATE_READY) {

to

if ((info->state != KCOV_STATE_READY) &&
    (info->state != KCOV_STATE_DYING)) {

allows syzkaller to work. Basically, this allows ioctl(..., KIOENABLE, ...) to succeed after the thread has exited.
One could also add another state to indicate that the thread exited and allow a state transition from
that to KCOV_STATE_RUNNING via ioctl(..., KIOENABLE, ...).

Thank you for picking this up, I've finally had the chance to look through this whole thing. A couple of small nitpicks but otherwise looks good to me.

Also Andrew, I saw you mention in another thread that you had a man page started but not quite ready. If you'd like help reviewing or finishing it let me know, as I had started a basic one locally myself.

sys/amd64/conf/GENERIC
105 ↗(On Diff #52334)

I assume this is just to show the option, as we wouldn't want to add it to the config by default.

sys/conf/kern.pre.mk
131 ↗(On Diff #52334)

Is this supposed to be here twice?

sys/kern/kern_kcov.c
59 ↗(On Diff #52334)

This include is (probably) no longer needed.

241 ↗(On Diff #52334)

This section would be a little clearer if we named the constant. Something like #define KCOV_CMP_ITEM_SIZE 4? It could also be used in user code for iterating through the buffer.
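
If the constant were added, user code walking the comparison records might look like the sketch below. The four-entries-per-record layout follows the suggestion above, but the field order shown (type, the two operands, then the PC) is an assumption for illustration only.

#include <stdint.h>

#define KCOV_CMP_ITEM_SIZE 4            /* the constant suggested above */

extern void handle_cmp(uint64_t type, uint64_t arg1, uint64_t arg2,
    uint64_t pc);                       /* placeholder for the reader's logic */

static void
read_cmp_records(const uint64_t *buf)
{
    const uint64_t *rec;
    uint64_t i, n;

    n = buf[0];                         /* record count, as in trace-pc mode */
    for (i = 0; i < n; i++) {
        rec = &buf[1 + i * KCOV_CMP_ITEM_SIZE];
        /* Assumed order: rec[0] type, rec[1]/rec[2] operands, rec[3] PC. */
        handle_cmp(rec[0], rec[1], rec[2], rec[3]);
    }
}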

The interfaces are similar but not identical - for example I chose to expose the number of buffer items as an ioctl rather than at the beginning of the buffer as they do in the Linux patch. Conveniently, it is not required that they match exactly as syzkaller has specific executor code per OS which we can adjust to fit this one.

I've been asked to have code coverage support as part of integrating FreeBSD into ZFS upstream. This is, more generally, for examining the coverage provided by ZTS. I anticipate coverage gaps relative to Linux that I'd like to put a number on. Thus I'll need something much more generic than just what syzkaller needs.

We talked about implementing mmap(), but may leave that as a future performance improvement since using read() should work just as well for this application.

I don't think that will suffice for our needs. Could you break this out from sanitizer coverage or should I fork the review to make it general purpose?

Update based on feedback:

  • Remove duplicate code from kern.pre.mk
  • Switch to number of entries when setting the buffer size
  • Remove the unneeded kcov_thread_exit
  • Return to KCOV_STATE_READY on thread exit to allow other threads to use the fd

Also Andrew, I saw you mention in another thread that you had a man page started but not quite ready. If you'd like help reviewing or finishing it let me know, as I had started a basic one locally myself.

I think now I'll commit without the manpage. We can then work on one based on the final interface.

sys/amd64/conf/GENERIC
105 ↗(On Diff #52334)

I expect to comment it out in the final patch

sys/conf/kern.pre.mk
131 ↗(On Diff #52334)

Fixed. It was from a mismerge.

sys/kern/kern_kcov.c
124 ↗(On Diff #52334)

The size is now in terms of entries, where an entry is sizeof(uint64_t) bytes.

508 ↗(On Diff #52334)

I've changed the logic to return to KCOV_STATE_READY on thread exit.

sys/sys/kcov.h
38 ↗(On Diff #52334)

Userspace now works in terms of entries when allocating the buffer. I can add KCOV_ENTRIY_SIZE if we think it's useful.

Also Andrew, I saw you mention in another thread that you had a man page started but not quite ready. If you'd like help reviewing or finishing it let me know, as I had started a basic one locally myself.

I think now I'll commit without the manpage. We can then work on one based on the final interface.

Great, go ahead. Once it is in the tree, I'll make the corresponding changes to syzkaller.

sys/kern/kern_kcov.c
576 ↗(On Diff #52637)

Remove/update this comment since you removed the function.

sys/kern/kern_thread.c
543 ↗(On Diff #52637)

You should remove this call.

tests/sys/kern/kcov.c
250 ↗(On Diff #52637)

Small typo.

andrew marked 4 inline comments as done.
  • Fix a comment
  • Remove unneeded code from kern_thread.c
  • Fix a typo in a test
sys/kern/kern_kcov.c
576 ↗(On Diff #52637)

I've updated it as it still holds, just not with the given function name.

Remove the kcov_thread_exit prototype from <sys/kcov.h>

sys/kern/kern_kcov.c
153 ↗(On Diff #52645)

Why are the results of this check valid after the check?

569 ↗(On Diff #52645)

What is the purpose of this (and all the other) seq_cst() fences in the patch?

sys/kern/kern_kcov.c
153 ↗(On Diff #52645)

How do you mean?

The code checks if kcov is running on a given thread. This is only used in the functions that handle tracing, so td == curthread.

There is nothing stopping another thread from disabling tracing just after this check; however, I've tried to be careful to allow for this in the code by only freeing the buffer in specific safe places.

569 ↗(On Diff #52645)

It's to ensure the state has been updated, even in the presence of interrupts. It may not be needed in all cases; however, I've decided to be safe for now.

sys/kern/kern_kcov.c
153 ↗(On Diff #52645)

What prevents another thread from unmapping the buffer after we have checked that the state is RUNNING?

569 ↗(On Diff #52645)

This is not how fences work. They only order actions and do not guarantee completion. Also, you need the matching fence on the reader side, between the accesses to td_kcov_info and state (note the reversed order as well), for this fence to have any effect.

sys/kern/kern_kcov.c
153 ↗(On Diff #52645)

We need to both unmap the buffer and exit the thread before freeing the buffer. The kernel memory is not allowed to be freed by another thread unless the thread has exited, as it doesn't know if the buffer is still in use.

The only places we can free the kernel buffer are in kcov_thread_dtor and kcov_mmap_cleanup. Both of these check if the state is DYING before freeing the buffer.

There is a bug here, where we may not free the buffer if the thread exits before munmap is called; that should be fixed.

Add a missing atomic_thread_fence_seq_cst call to get_kinfo

sys/kern/kern_kcov.c
154 ↗(On Diff #52649)

I do not understand what seq_cst() does there at all. If you see a NULL info you cannot dereference it at all.

But if you see it non-NULL, there is no guarantee that a parallel thread would not change the state after we checked it. Or I missed a mechanism which would prevent this. Why can the destructor not proceed while we are executing trace_cmp()?

sys/kern/kern_kcov.c
154 ↗(On Diff #52649)

There is nothing stopping another thread from changing it after this check. The rest of the code is written such that, if this does happen, trace_cmp and __sanitizer_cov_trace_pc are safe. We may get extra entries, but those are for userspace to deal with, since it is trying to trace a non-local thread.

sys/kern/kern_kcov.c
154 ↗(On Diff #52649)

What guarantees the safety? Why can the buffer not be unmapped from the KVA?

Rework the cleanup code:

  • Use barriers to ensure ordering between info->state and other info data.
  • Check if a thread is using the info struct now that we don't enter DYING on thread exit
sys/kern/kern_kcov.c
154 ↗(On Diff #52649)

I've updated it to try to be more explicit.

It requires the thread to stop using the info struct, either by exiting or, from the thread being traced, by calling KIODISABLE. It also requires userspace to stop using the buffer and close the fd. When both of these are the case we can free the buffer. As these can happen in either order, the code to handle this is in the devfs cdevpriv dtor and the thread dtor.
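
Restating that rule as a sketch (the function and the hypothetical kcov_free helper are illustrative; the real checks live in the cdevpriv dtor and the thread dtor):

/*
 * Free only once both users are gone: the traced thread has detached
 * (thread exit or KIODISABLE) and userspace has dropped the fd/mapping,
 * which is what drives the state to KCOV_STATE_DYING.
 */
static void
kcov_free_if_unused(struct kcov_info *info)
{
    if (info->thread == NULL && info->state == KCOV_STATE_DYING)
        kcov_free(info);        /* hypothetical: unmap and free the buffer */
}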

sys/kern/kern_kcov.c
370 ↗(On Diff #52651)

Why can this not be a store_rel?

373 ↗(On Diff #52651)

Can you move the unlock before the if()? Why is the spinlock needed?

554 ↗(On Diff #52651)

This probably could be fence_rel()

sys/kern/kern_thread.c
543 ↗(On Diff #52651)

There is probably some issue with Phabricator or the patch. In this chunk, there is no diff. I suspect that it should be a call to kcov_thread_dtor?

sys/kern/kern_kcov.c
370 ↗(On Diff #52651)

The fence is to ensure the ordering when moving from RUNNING to another state. It's there so, if an interrupt happens, the store to leave the RUNNING state is ordered and observed on the same CPU before any further changes to the info struct. A store_rel would only ensure prior loads and stores are observed before the store, not later stores.
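
For concreteness, the pattern being described is roughly the sketch below (field names follow the patch; whether a full seq_cst fence is the right primitive here is exactly what is being debated in these comments):

static void
kcov_stop_running(struct kcov_info *info)
{
    /* Leave the RUNNING state first ... */
    info->state = KCOV_STATE_READY;
    /* ... then fence, so the state change is observed (on this CPU,
     * even across an interrupt) before any later stores to *info. */
    atomic_thread_fence_seq_cst();
    /* Only now touch the rest of the struct. */
    info->thread = NULL;
}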

373 ↗(On Diff #52651)

The lock is protecting the read of the thread pointer, e.g. one thread is closing the fd while another is exiting. I could change it to

struct thread *thread;
...
thread = info->thread;
mtx_unlock_spin(&kcov_lock);
if (thread != NULL)
    return;
554 ↗(On Diff #52651)

Does this stop later loads/stores from being observed before the atomic store?

sys/kern/kern_thread.c
543 ↗(On Diff #52651)

There was previously a change here, and a comment above. I changed it to use the thread_dtor EVENTHANDLER.

sys/kern/kern_kcov.c
370 ↗(On Diff #52651)

This is a very weird requirement, for later stores not to be observed before the specified one. I am aware of only one situation where such an arrangement is needed, and I really do not see how it happens that you need it for indicating a consistent state of the structure.

That said, I mean atomic_thread_fence_rel(), and it guarantees a (r,w|w) barrier.

Can you add a comment to the code explaining the kcov_info lifecycle? In particular, please put in details about which context is allowed to destroy kcov_info, and how it is ensured that the coverage code running in the context of the measured thread never accesses a dangling td_kcov_info or an unmapped buffer pointed to by kinfo.

Simplify the locking in kcov_mmap_cleanup
Update the comments to describe the state machine
Add access requirements to struct kcov_info

sys/sys/kcov.h
38 ↗(On Diff #52334)

I think it would be useful. Please add it...

Add KCOV_ENTRIY_SIZE
While here update the header copyright

sys/sys/kcov.h
44 ↗(On Diff #52732)

Shouldn't it be KCOV_ENTRY_SIZE instead of KCOV_ENTRIY_SIZE?

Would it make sense to use sizeof(uint64_t) instead of 8?
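
A sketch of what the suggested definition would look like, with the corrected spelling:

#define KCOV_ENTRY_SIZE sizeof(uint64_t)        /* rather than a literal 8 */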

Fix the spelling of KCOV_ENTRY_SIZE

This revision is now accepted and ready to land. Jan 12 2019, 1:09 AM
This revision was automatically updated to reflect the committed changes.