Today
Remove nop loop.
Fixes by adrian.
Man page fixes
Some fixes, mostly for typos in the man page.
Also accept that spawnattrs might be NULL.
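For context, POSIX allows the attributes pointer passed to posix_spawn() to be NULL, meaning default attributes. A minimal sketch of the kind of check this implies; the helper name is hypothetical and not taken from the change:

```c
#include <spawn.h>
#include <stddef.h>

/*
 * Hypothetical helper: a NULL attributes pointer simply means "use the
 * defaults", so it must not be dereferenced or treated as an error.
 */
static int
apply_spawnattr(const posix_spawnattr_t *attrp)
{
	if (attrp == NULL)
		return (0);	/* nothing requested, keep defaults */
	/* ... apply flags, signal mask, scheduling parameters, ... */
	return (0);
}
```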
Yesterday
One more #ifdef SMP in sched_ule.c
Add the missing sys/sched.h include to powerpc/machdep.c.
Deduplicate sched stats, sdt probes, and kdtrace hooks helper vars.
At least amd64 GENERIC and LINT build.
One more #ifdef KTR for 4bsd.
Take the DEFINE_SHIM() proposal.
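For reference, a minimal sketch of the shape such a DEFINE_SHIM() macro could take on ifunc-capable architectures; the actual macro in the review may differ, and sched_is_ule() is a hypothetical predicate assumed to be set up by sched_instance_select() before ifuncs are resolved:

```c
#include <sys/param.h>
#include <sys/systm.h>

struct proc;
struct thread;

bool sched_is_ule(void);	/* hypothetical selection predicate */

/*
 * Declare the per-scheduler implementations and an ifunc that binds the
 * public name to whichever one was selected at boot.
 */
#define	DEFINE_SHIM(ret, func, args)					\
	ret func##_4bsd args;						\
	ret func##_ule args;						\
	DEFINE_IFUNC(, ret, func, args)					\
	{								\
		return (sched_is_ule() ? func##_ule : func##_4bsd);	\
	}

/* Example expansion site, using a signature from sys/sched.h. */
DEFINE_SHIM(void, sched_exit, (struct proc *p, struct thread *td))
```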
Fix inlined strcmp().
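For context, a self-contained comparison routine like the sketch below is the usual reason an inlined strcmp() shows up in this kind of selection code: the scheduler name has to be compared inside an ifunc resolver, before it is safe to call into the rest of the kernel. The name is illustrative:

```c
/* Minimal, freestanding string compare safe to use from a resolver. */
static inline int
shim_strcmp(const char *a, const char *b)
{

	while (*a != '\0' && *a == *b) {
		a++;
		b++;
	}
	return ((unsigned char)*a - (unsigned char)*b);
}
```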
Also hopefully fix the compilation issues, but I have only started a tinderbox run.
Fri, Jan 23
Add a function-pointer based workaround for riscv and arm.
Hopefully at least riscv will grow ifunc support at some point.
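A sketch of what the function-pointer variant could look like on architectures that lack kernel ifunc support; the pointer and implementation names are illustrative, not taken from the patch:

```c
struct thread;

/* Per-scheduler implementations of one entry point. */
void sched_fork_4bsd(struct thread *, struct thread *);
void sched_fork_ule(struct thread *, struct thread *);

/* Default to 4BSD until sched_instance_select() decides otherwise. */
static void (*sched_fork_impl)(struct thread *, struct thread *) =
    sched_fork_4bsd;

void
sched_fork(struct thread *td, struct thread *child)
{

	(*sched_fork_impl)(td, child);
}
```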
Is this the only place where the update of the cache under the shared vnode lock occurs?
Or even better, can you point me to the single entry point of the cache update code?
Rename sched_instance_name variable to sched_name.
Add sched_instance_select() call to all arches.
Rename the tunable to kern.sched.name.
Automatically fall back to another scheduler if the named one is not found and at least one is available.
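A minimal sketch of how the tunable lookup and the fallback could fit together; the registration structure and the sched_instance_*() helpers are assumptions, only the kern.sched.name tunable comes from the review:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>

/* Hypothetical registration record for one compiled-in scheduler. */
struct sched_instance {
	const char	*si_name;
};

const struct sched_instance *sched_instance_find(const char *name);
const struct sched_instance *sched_instance_first(void);

static const struct sched_instance *sched_inst;
static char sched_name[32] = "ULE";		/* compiled-in default */

void
sched_instance_select(void)
{
	const struct sched_instance *si;

	/* The kern.sched.name tunable overrides the default. */
	TUNABLE_STR_FETCH("kern.sched.name", sched_name, sizeof(sched_name));
	si = sched_instance_find(sched_name);
	if (si == NULL) {
		/* The requested scheduler is not compiled in; fall back
		 * to whichever one is available rather than failing. */
		si = sched_instance_first();
		strlcpy(sched_name, si->si_name, sizeof(sched_name));
	}
	sched_inst = si;
}
```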
Handle all arches for cpu_switch.S.
Make kern.sched sysctls work when 4BSD is selected:
sysctl kern.sched.ule.topology_spec: allow it to run even if ULE is not initialized
sched_shim: restore kern.ccpu sysctl
It apparently should be considered part of the ABI, since it is used by the base system's top(1). But do not declare the ccpu variable in headers, as it is needed only by 4BSD. So put the variable definition into sched_shim.c to keep the kernel buildable without SCHED_4BSD.
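A sketch of what keeping the definition in sched_shim.c could look like, assuming kern.ccpu remains a read-only integer sysctl as in sched_4bsd.c; the description string and exact sysctl flavor are not taken from the patch:

```c
#include <sys/param.h>
#include <sys/sysctl.h>

/*
 * Defined here (not in a header) so that the symbol and the kern.ccpu
 * sysctl that top(1) relies on exist even when SCHED_4BSD is not
 * compiled in; sched_4bsd.c would declare it extern locally and keep
 * updating it.
 */
fixpt_t ccpu;
SYSCTL_UINT(_kern, OID_AUTO, ccpu, CTLFLAG_RD, &ccpu, 0,
    "Decay factor used for %CPU estimation");
```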
Thu, Jan 22
Remove comment.
Add __ktrace_used to 'tv' decls.
Upload the right diff.
So I went ahead and implemented the third (?) approach. IMO it is the only safe option there.
If you really dislike this approach, I think a workable solution that also works for msleep_spin() is the following.
Actually commit the comment before diffing.
Rename to ktrace_mtx_unlock(), add explanation.
So far I have only tried booting with ULE.
There are two MD places that need to be handled for each arch; so far I have only done this for amd64:
- cpu_switch() is made to wait unconditionally for the blocked thread's lock to be released
- sched_instance_select() must be called before ifuncs are resolved (see the sketch below)
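Purely as an ordering illustration (the function and helper names are hypothetical, loosely modeled on amd64 early MD init): the scheduler has to be chosen before the early ifunc relocation pass, because the shim's resolvers consult that choice:

```c
void sched_instance_select(void);
void resolve_kernel_ifuncs(void);	/* stands in for link_elf_ireloc() */

static void
md_early_init(void)
{
	/* 1. Decide which scheduler this boot will use ... */
	sched_instance_select();

	/* 2. ... and only then resolve kernel ifuncs, so that every shim
	 *    entry point binds to the selected scheduler. */
	resolve_kernel_ifuncs();
}
```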
Ok, my reluctance about this documentation change is that such a promise, of never returning a specific error code, is very hard to fulfill. For instance, on an intr NFS mount, VOP_GETATTR() can return EINTR, and so on.
In summary: it is defined in the comments as a function of two arguments, I believe.
