- <td> Critical sections prevent preemption of a thread on a CPU, as
- well as preventing migration of that thread to another CPU, and
- may be used for synchronizing access to per-CPU data structures,
- as well as preventing recursion in interrupt processing.
- Currently, critical sections disable interrupts on the CPU. In
- previous versions of FreeBSD (4.x and before), optimizations were
- present that allowed for software interrupt disabling, which
- lowers the cost of critical sections in the common case by
- avoiding expensive microcode operations on the CPU. By restoring
- this model, or a variation on it, critical sections can be made
- substantially cheaper to enter. In particular, this change
- lowers the cost of critical sections on UP to approximately that
- of a mutex, meaning that optimizations on SMP to use critical
- sections instead of mutexes will not harm UP performance. This
- change has now been committed, and appeared in 6.0-RELEASE. </td>
- </tr>
-
- <tr>
- <td> Normalize socket and protocol control block reference model </td>
- <td> &a.rwatson; </td>
- <td> 20060401 </td>
- <td> &status.done; </td>
- <td> The socket/protocol boundary is characterized by a set of data
- structures and API interfaces, where the socket code acts as both
- a consumer and a service library for protocols. This task is to
- normalize the reference model by which protocol state is attached
- to and detached from socket state in order to strengthen
- invariants, allowing the removal of countless unused code paths
- (especially error handling), the removal of unnecessary locking
- in TCP, and a general improvement in the structure of the code. This
- serves both the immediate purpose of improving the quality and
- performance of this code, as well as being necessary for future
- optimization work. These changes have been prototyped in
- Perforce, and now merged to 7-CURRENT. They will be merged into
- RELENG_6 once they have been thoroughly tested.</td>
- </tr>
-
- <tr>
- <td> Add true inpcb reference count support </td>
- <td> &a.mohans;, &a.rwatson;, &a.peter; </td>
- <td> 20081208 </td>
- <td> &status.done; </td>
- <td> Historically, the in-bound TCP and UDP socket paths relied on
- global pcbinfo locks to prevent PCBs that packets were being
- delivered to from being garbage collected by another thread while
- in use. This set of changes introduces a true reference model
- for PCBs so that the global lock can be released during in-bound
- processing; these changes appeared in 8.0-RELEASE.</td>
- </tr>
-
- <tr>
- <td> Fine-grained locking for UNIX domain sockets </td>
- <td> &a.rwatson; </td>
- <td> 20070226 </td>
- <td> &status.done; </td>
- <td> UNIX domain sockets in FreeBSD 5.x and 6.x use a single global
- subsystem lock. This is sufficient to allow the subsystem to run
- without Giant, but results in contention when large numbers of
- processors simultaneously operate on UNIX domain sockets. This
- task introduced per-protocol control block locks in order to
- reduce contention on the global subsystem lock, and the results
- appeared in 7.0-RELEASE. </td>
- </tr>
-
- <tr>
- <td> Multiple netisr threads </td>
- <td> &a.rwatson; </td>
- <td> 20090601 </td>
- <td> &status.done; </td>
- <td> Historically, the BSD network stack has used a single network
- software interrupt context for deferred network processing. With
- the introduction of multi-processor support, this became a single
- software interrupt thread. In FreeBSD 8.0, multiple netisr
- threads are now supported, up to the number of CPUs present in
- the system.</td>
- </tr>
-
- </table>
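The "true reference count" model described in the inpcb row above can be sketched in simplified, user-space form with C11 atomics. The `pcb` structure and the `pcb_alloc`/`pcb_ref`/`pcb_unref` names here are hypothetical illustrations of the pattern, not the actual FreeBSD inpcb API; the real kernel code layers references on top of per-PCB and pcbinfo locks.

```c
#include <stdatomic.h>
#include <stdlib.h>

/*
 * Hypothetical sketch of a reference-counted control block: a thread
 * that acquires a reference may keep using the block after any global
 * lookup lock has been released, because the block cannot be freed
 * until the last reference is dropped.
 */
struct pcb {
	atomic_int refcount;
};

struct pcb *
pcb_alloc(void)
{
	struct pcb *p = malloc(sizeof(*p));

	if (p != NULL)
		atomic_init(&p->refcount, 1);	/* creator holds one ref */
	return (p);
}

void
pcb_ref(struct pcb *p)
{
	atomic_fetch_add(&p->refcount, 1);
}

void
pcb_unref(struct pcb *p)
{
	/* fetch_sub returns the previous value; last reference frees. */
	if (atomic_fetch_sub(&p->refcount, 1) == 1)
		free(p);
}
```

In the in-bound packet path this allows the global lookup lock to protect only the lookup itself: once a reference is held, delivery proceeds lock-free with respect to the global structures.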
-
- <a name="cluster"></a>
- <h2>Netperf Cluster</h2>
-
- <p>Through the generous donations and investment of Sentex Data
- Communications, FreeBSD Systems, IronPort Systems, and the FreeBSD
- Foundation, a network performance testbed has been created in Ontario,
- Canada for use by FreeBSD developers working in the area of network
- performance. A similar cluster, made possible through the generous
- donation of Verio, is being prepared for use in more general SMP
- performance work in Virginia, US. Each cluster consists of several SMP
- systems interconnected with gigabit Ethernet such that relatively
- arbitrary topologies can be constructed in order to test host-host, IP
- forwarding, and bridging performance scenarios. Systems are
- network booted, with serial consoles and remote power, to
- maximize availability and minimize configuration overhead. These
- systems are
- available on a check-out basis for experimentation and performance
- measurement to FreeBSD developers working on the Netperf project, and
- in related areas.</p>
-
- <p><a href="cluster.html">More detailed information on the netperf
- cluster can be found by following this link.</a></p>
-
- <a name="papers"></a>
- <h2>Papers and Reports</h2>
-
- <p>The following paper(s) have been produced by or are related to the
- Netperf Project:</p>
-
- <ul>
- <li><p><a href="http://www.watson.org/~robert/freebsd/netperf/20051027-eurobsdcon2005-netperf.pdf">"Introduction to Multithreading and Multiprocessing in the FreeBSD SMPng Network Stack", EuroBSDCon 2005, Basel, Switzerland</a>.</p></li>
- </ul>
-
- <a name="links"></a>
- <h2>Links</h2>
-
- <p>Some useful links relating to the netperf work:</p>
-
- <ul>
- <li><p>SMPng Project -- Project to introduce
- finer-grained locking in the FreeBSD kernel.</p></li>