April-June 2016
Introduction

The second quarter of 2016.

—Insert name here


Please submit status reports for the third quarter of 2016 by insert date here.

team &os; Team Reports proj Projects kern Kernel arch Architectures bin Userland Programs ports Ports doc Documentation misc Miscellaneous

&os; Release Engineering Team re@FreeBSD.org &os; 10.3-RELEASE schedule &os; 11.0-RELEASE schedule &os; development snapshots

The &os; Release Engineering Team is responsible for setting and publishing release schedules for official project releases of &os;, announcing code freezes and maintaining the respective branches, among other things.

The &os; Release Engineering Team completed the 10.3-RELEASE cycle in late April, led by &a.marius;. The release was one week behind the original schedule, to accommodate a few last-minute critical issues that were essential to include in the final release.

The &os; 11.0-RELEASE cycle started in late May, one month behind the original schedule. The slip was primarily to accommodate packaging the &os; base system with the pkg(8) utility. However, as this work progressed, it became apparent that there were too many outstanding issues. As a result, packaged base will be a "beta" feature for 11.0-RELEASE, with the goal of promoting it to a first-class feature in 11.1-RELEASE, with additional provisions to ensure a seamless transition for earlier supported releases.

Although packaged base will not be a prime feature of &os; 11.0-RELEASE, the Release Engineering Team would like to thank everyone who tested, provided patches, contributed ideas and feedback, and in some cases shot themselves in the foot due to bugs.

The &os; Foundation
Obsoleting Rails 3 Torsten Zühlsdorff tz@FreeBSD.org

Ruby on Rails is the base for most of the rubygems in the Ports Collection. Currently, versions 3.2 and 4.2 coexist. Since Rails 3.2 is running out of support, the time has come to switch to 4.2.

While there is ongoing progress toward removing Rails 3.2 from the ports tree, some major updates are blocking this process. The most recent blocker was the outstanding update of www/redmine from 2.6 to 3.2. This has been completed successfully, so we can now move on.

To help with porting or testing, feel free to contact me or the ruby@FreeBSD.org mailing list.

ARM Allwinner SoC Support Jared McNeill jmcneill@freebsd.org Emmanuel Vadot manu@freebsd.org Allwinner FreeBSD Wiki

Allwinner SoCs are used in multiple hobbyist devboards and single-board computers. Recently, support for these SoCs received many updates.

These tasks were completed during the second quarter of 2016:

Ongoing work:

  1. SPI driver
  2. LCD support
  3. Any unsupported hardware device that might be of interest
Robust Mutexes Konstantin Belousov kib@FreeBSD.org Ed Maste emaste@FreeBSD.org

Now that process-shared locks are implemented in our POSIX threads library, libthr, the only major feature missing for POSIX compliance is robust mutexes. Robust mutexes allow an application to detect, and in theory recover from, crashes that occur while shared state is being modified. The supported model is to protect shared state with a pthread mutex; a crash is detected as a thread terminating while owning the mutex. A thread might terminate alone, or it could be killed by the termination of the containing process. As such, the robust attribute is applicable to both process-private and process-shared mutexes.

An application must be specifically modified to handle and recover from failures. The pthread_mutex_lock() function may return the new error EOWNERDEAD, which indicates that the previous owner of the lock terminated while still owning it. Despite the non-zero return value, the lock is granted to the caller. In the simplest form, an application may detect the error and refuse to operate until the persistent shared data is recovered, for example by manual reinitialization. More sophisticated applications could try to recover from the condition automatically, in which case pthread_mutex_consistent(3) must be called on the lock before unlocking it. However, such recovery can be very hard to implement. Still, even detecting inconsistent shared state is useful, since it prevents further corruption and random faults in the affected application.
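
As a minimal sketch of that protocol (standard POSIX calls throughout; the repair_shared_state() helper is a hypothetical placeholder for application-specific recovery):

    #include <errno.h>
    #include <pthread.h>

    static pthread_mutex_t m;

    /* Hypothetical application hook: reinitialize the shared data. */
    static int repair_shared_state(void) { return (0); }

    static void
    setup(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* Robustness also combines with PTHREAD_PROCESS_SHARED. */
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
        pthread_mutex_init(&m, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    static int
    lock_shared_state(void)
    {
        int error = pthread_mutex_lock(&m);

        if (error == EOWNERDEAD) {
            /*
             * The previous owner died holding the lock.  We own it
             * now, but the data it protects may be inconsistent.
             */
            if (repair_shared_state() != 0)
                return (-1);    /* refuse to operate */
            /* Mark the mutex usable again before unlocking. */
            pthread_mutex_consistent(&m);
        } else if (error != 0)
            return (-1);
        return (0);
    }

    int
    main(void)
    {
        setup();
        if (lock_shared_state() == 0)
            pthread_mutex_unlock(&m);
        return (0);
    }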

It is curious, but not unexpected, that this interface is not widely used. The only real-life application known to utilize it is Samba. Using Samba with an updated FreeBSD base uncovered minor bugs both in the FreeBSD robustness implementation and in Samba itself.

It is believed that libthr in FreeBSD 11 is now POSIX-compliant with respect to all major features. Further work is planned to look at inlining the lock structures, to remove overhead and improve the performance of the library.

Most of the implementation of the robustness feature consisted of small changes in the lock and unlock paths, both in libthr and in kern_umtx.c. This required reading literally all of the code dealing with mutexes and condition variables, something I wanted to make easier for future developers. In the end, with the help of Ed Maste, man pages for umtx(2) and all of the thr*(2) syscalls were written and added to the base system's documentation set.

The FreeBSD Foundation Use the implementation in real-world applications and report issues.
EFI Refactoring, GELI Support Eric McCorkle eric@metricspace.net GELI Support Branch EFI Refactoring Branch

The EFI bootloader has undergone considerable refactoring to make more use of the EFI API. The filesystem code in boot1 has been eliminated, and a single codebase for filesystems now serves both boot1 and loader. This codebase is organized around the EFI driver model and it should be possible to export any filesystem implementation as a standalone EFI driver without too much effort.

Both boot1 and loader have been refactored to talk through the EFI_SIMPLE_FILE_SYSTEM interface. In loader, this is accomplished with a dummy filesystem driver that is just a translation layer between the loader filesystem interface and EFI_SIMPLE_FILE_SYSTEM. A reverse translation layer allows the existing filesystem drivers to function as EFI drivers.
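
As a rough illustration of the shape of such a translation layer (all types and names below are simplified stand-ins invented for this example; the real code maps loader's struct fs_ops onto the EFI file protocol's methods):

    #include <stdio.h>
    #include <string.h>

    /* Cut-down EFI-style file handle: methods as function pointers. */
    typedef struct efi_file {
        int (*read)(struct efi_file *, void *buf, size_t *len);
        void *ctx;
    } efi_file_t;

    /* Cut-down loader-style filesystem operations table. */
    struct fs_ops {
        const char *fs_name;
        int (*fo_read)(void *fd, void *buf, size_t len, size_t *resid);
    };

    /* Shim: satisfy a loader read by calling through the EFI handle. */
    static int
    efifs_read(void *fd, void *buf, size_t len, size_t *resid)
    {
        efi_file_t *f = fd;
        size_t n = len;

        if (f->read(f, buf, &n) != 0)
            return (1);
        *resid = len - n;   /* bytes not transferred */
        return (0);
    }

    struct fs_ops efifs_fsops = { "efifs", efifs_read };

    /* Demo EFI-style backend that reads from an in-memory buffer. */
    static int
    mem_read(efi_file_t *f, void *buf, size_t *len)
    {
        const char *src = f->ctx;
        size_t n = strlen(src) < *len ? strlen(src) : *len;

        memcpy(buf, src, n);
        *len = n;
        return (0);
    }

    int
    main(void)
    {
        efi_file_t f = { mem_read, "hello from the shim" };
        char buf[64];
        size_t resid;

        if (efifs_fsops.fo_read(&f, buf, sizeof(buf) - 1, &resid) == 0) {
            buf[sizeof(buf) - 1 - resid] = '\0';
            printf("%s\n", buf);
        }
        return (0);
    }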

The EFI refactoring by itself exists in this branch.

Additionally, GELI support has been added using the EFI refactoring. This allows booting from a GELI-encrypted filesystem. Note that the EFI system partition, which contains boot1, must be a plaintext msdosfs partition. This patch adds an intake buffer to the crypto framework, which allows injection of keys directly into a loaded kernel, without the need to pass them through arguments or environment variables. This patch only uses the intake buffer for EFI GELI support as legacy BIOS GELI support still uses environment variables.

EFI GELI support depends on the efize branch.

These patches have been tested and used, and should be ready for use by early adopters. Note that the LOADER_PATH variable has been changed to /boot/loader.tst, to facilitate safe testing.

IMPORTANT:

As this is an encrypted filesystem patch, an error can potentially leave data inaccessible. It is strongly recommended to use the following procedure for testing:

  1. Back up your data!

  2. Do not forget to back up your data!

  3. Install an EFI shell on the ESP.

  4. Install the patched boot1 on the ESP to something like /boot/efi/BOOTX64.TST.

  5. Install the patched loader to /boot/loader.tst on your machine.

  6. Create a GELI partition outside of the normal boot partition.

  7. First, try booting /boot/efi/BOOTX64.TST and make sure it properly handles the encrypted partition.

  8. Copy a boot environment, including the patched loader, to the encrypted partition.

  9. Use the loader prompt to load a kernel from the encrypted partition.

  10. Try switching over to an encrypted main partition once everything else has worked.

Testing is needed. Code will need review and some style(9) normalization must occur before this code goes into FreeBSD.
Updates to GDB John Baldwin jhb@FreeBSD.org Luca Pizzamiglio luca.pizzamiglio@gmail.com

The port has been updated to GDB 7.11.1.

Support for system call catchpoints has been committed upstream. Support for examining ELF auxiliary vector data via info auxv has been committed upstream. Both features will be included in GDB 7.12.

  1. Figure out why the powerpc kgdb targets are not able to unwind the stack past the initial frame.
  2. Add support for more platforms, such as arm, mips, and aarch64, to upstream GDB, for both userland and kgdb.
  3. Add support for debugging powerpc vector registers.
  4. Add support for $_siginfo.
  5. Implement info proc commands.
  6. Implement info os commands.
VIMAGE Virtualized Network Stack Update Bjoern A. Zeeb bz@FreeBSD.org Projects workspace (all merged to head now).

VIMAGE is a virtualization framework on top of FreeBSD jails that was introduced to the kernel about eight years ago with the vnet virtualized network stack.

Over the last few years, many people have started to use VIMAGE in production, production-like setups, and appliances. This adoption increased the urgency of finishing the work needed to avoid panics on network stack teardown and to plug memory leaks.

The vnet teardown has been changed to proceed from top to bottom, tearing the stack down layer by layer. This is preferable to removing the interfaces first and then cleaning everything up, as it ensures that no more packets can flow during the cleanup. Along with this work, various potential memory leaks were plugged. Lastly, vnet support was added to formerly unvirtualized components, such as the pf and ipfilter firewalls and some virtual interfaces.
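
To make the per-vnet pattern concrete, here is a rough sketch of what a virtualized subsystem looks like (a hypothetical example module, compilable only within a FreeBSD kernel build with VIMAGE enabled; it is not taken from the actual pf or ipfilter changes): each formerly global variable becomes a per-vnet instance, and a VNET_SYSUNINIT teardown hook must free that instance's state, or it leaks on every vnet jail destruction.

    /*
     * Hypothetical VNET-aware kernel module sketch (VIMAGE kernel only).
     */
    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <net/vnet.h>

    /* One instance of this counter exists per vnet, not per kernel. */
    VNET_DEFINE(int, example_pkt_count);
    #define V_example_pkt_count VNET(example_pkt_count)

    static void
    vnet_example_init(const void *unused __unused)
    {

        /* Runs once for every vnet that is created. */
        V_example_pkt_count = 0;
    }
    VNET_SYSINIT(vnet_example_init, SI_SUB_PROTO_DOMAIN, SI_ORDER_ANY,
        vnet_example_init, NULL);

    static void
    vnet_example_uninit(const void *unused __unused)
    {

        /*
         * Per-vnet teardown hook: free this vnet's state here,
         * otherwise it leaks every time a vnet jail is destroyed.
         */
    }
    VNET_SYSUNINIT(vnet_example_uninit, SI_SUB_PROTO_DOMAIN, SI_ORDER_ANY,
        vnet_example_uninit, NULL);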

The FreeBSD Foundation Please test FreeBSD 11.0-ALPHA6 or later. When reporting a problem, use the vimage keyword in the FreeBSD bug tracker.
IPv6 Promotion Campaign Torsten Zühlsdorff tz@FreeBSD.org Wiki Page

Half a year ago, I started a promotion campaign to improve support for fetching ports via IPv6. Research performed in December 2015 showed that 10,308 of 25,522 ports could not be fetched over an IPv6-only connection; these numbers ignore the FreeBSD.org pkg mirror.
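
The first prerequisite for a port being fetchable over IPv6 is that its distfile hosts resolve to IPv6 addresses at all. A quick check for that half of the problem (the hostname is just an example from the list below) is getaddrinfo(3) restricted to AF_INET6:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(int argc, char *argv[])
    {
        struct addrinfo hints, *res;
        const char *host = argc > 1 ? argv[1] : "mirror.amdmi3.ru";
        int error;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET6;     /* IPv6 addresses only */
        hints.ai_socktype = SOCK_STREAM;

        error = getaddrinfo(host, "http", &hints, &res);
        if (error != 0) {
            fprintf(stderr, "%s: no IPv6: %s\n", host,
                gai_strerror(error));
            return (1);
        }
        printf("%s resolves over IPv6\n", host);
        freeaddrinfo(res);
        return (0);
    }

Note that this only checks DNS; the server must of course also answer on its AAAA addresses for a fetch to succeed.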

As a result of the campaign, the following servers now successfully support IPv6:

  1. mirror.amdmi3.ru
  2. vault.centos.org
  3. mirror.centos.org
  4. gstreamer.freedesktop.org
  5. people.freebsd.org

This enables 711 more ports to be fetched via IPv6.

I would like to thank Wolfgang Zenker, who is very active in supporting the adoption of IPv6. During the latest RIPE meeting, he raised the point that missing IPv6 support is a hindrance to business. I am hopeful that his talk changed some more minds and will help widen the support of IPv6.

FreeBSD on Hyper-V and Azure Sepherosa Ziehau sepherosa@gmail.com Hongjiang Zhang honzhan@microsoft.com Dexuan Cui decui@microsoft.com Kylie Liang kyliel@microsoft.com FreeBSD Virtual Machines on Microsoft Hyper-V Supported Linux and FreeBSD virtual machines for Hyper-V on Windows

During BSDCan 2016, Microsoft announced the global availability of FreeBSD 10.3 images in Azure. There are many FreeBSD-based virtual appliances in the Azure Marketplace, including Citrix Systems' NetScaler and Netgate's pfSense. Microsoft also gave an in-depth technical presentation on how the performance of the Hyper-V network device driver was optimized to reach full line rate on 10Gb networks and achieve decent performance on 40Gb networks. The slides and video from the presentation are available from the BSDCan website.

Microsoft continues working to further optimize the performance of the Hyper-V network and storage device drivers. Work is ongoing to replace the internal data structure of the LRO kernel API, moving from a singly-linked list to a doubly-linked list, to speed up the LRO lookup with a hash table, and to compare the performance with tcp_lro_queue_mbuf().
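
For background on the hash-table item: the existing LRO code matches each incoming segment against the active entries by walking a list, so lookup cost grows with the number of flows; hashing the connection tuple makes it constant. A standalone toy version of such a lookup (illustrative only, not the actual driver code) could look like:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define LRO_BUCKETS 128     /* power of two for cheap masking */

    struct flow {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
    };

    struct lro_entry {
        struct flow f;
        int in_use;
    };

    static struct lro_entry table[LRO_BUCKETS];

    /* Toy 4-tuple hash; real code would use a stronger mix. */
    static uint32_t
    flow_hash(const struct flow *f)
    {
        uint32_t h = f->src_ip ^ (f->dst_ip * 2654435761u);

        h ^= ((uint32_t)f->src_port << 16) | f->dst_port;
        return (h & (LRO_BUCKETS - 1));
    }

    /* O(1) lookup instead of walking a list of active entries. */
    static struct lro_entry *
    lro_lookup(const struct flow *f)
    {
        struct lro_entry *e = &table[flow_hash(f)];

        if (e->in_use && memcmp(&e->f, f, sizeof(*f)) == 0)
            return (e);
        return (NULL);
    }

    int
    main(void)
    {
        struct flow f = { 0x0a000001, 0x0a000002, 12345, 80 };

        table[flow_hash(&f)] = (struct lro_entry){ f, 1 };
        printf("hit: %s\n", lro_lookup(&f) != NULL ? "yes" : "no");
        return (0);
    }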

The handling of SCSI inquiries in the Hyper-V storage driver has been enhanced to make sure disk hotplug and smartctl(8) work reliably. Refer to PR 210425 and PR 209443 for details.

BIS test cases are available on GitHub for Hyper-V and for Azure.

Microsoft
Ceph on FreeBSD Willem Jan Withagen wjw@digiware.nl Ceph main site Main repository My Fork The git PULL with all changes

Ceph is a distributed object store and file system designed to provide excellent performance, reliability, and scalability. It provides the following features:

  1. Object Storage: Ceph provides seamless access to objects using native language bindings or radosgw, a REST interface that is compatible with applications written for S3 and Swift.

  2. Block Storage: Ceph's RADOS Block Device (RBD) provides access to block device images that are striped and replicated across the entire storage cluster.

  3. File System: Ceph provides a POSIX-compliant network file system that aims for high performance, large data storage, and maximum compatibility with legacy applications.

I started looking into Ceph because using HAST with CARP and ggate did not meet my requirements. My primary goal with Ceph is to run a storage cluster of ZFS storage nodes, where the clients run bhyve on RBD disks stored in Ceph.


The &os; build process can build most of the tools in Ceph. However, the RBD-dependent items do not work, since &os; does not yet provide RBD support.
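
For context, this is the kind of client code that RBD support would eventually enable on &os;. A minimal librbd consumer might look like the sketch below (pool and image names are placeholders, and today this builds and runs only on platforms where librados/librbd already work):

    #include <rados/librados.h>
    #include <rbd/librbd.h>
    #include <stdio.h>

    int
    main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rbd_image_t image;
        char buf[4096];
        ssize_t n;

        /* Connect using the default ceph.conf and client identity. */
        if (rados_create(&cluster, NULL) < 0 ||
            rados_conf_read_file(cluster, NULL) < 0 ||
            rados_connect(cluster) < 0)
            return (1);

        /* Open a (placeholder) image in the "rbd" pool ... */
        if (rados_ioctx_create(cluster, "rbd", &io) == 0 &&
            rbd_open(io, "bhyve-disk0", &image, NULL) == 0) {
            /* ... and read its first 4 KiB, as a bhyve backend might. */
            n = rbd_read(image, 0, sizeof(buf), buf);
            if (n >= 0)
                printf("read %zd bytes\n", n);
            rbd_close(image);
            rados_ioctx_destroy(io);
        }
        rados_shutdown(cluster);
        return (0);
    }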


Since the last quarterly report, the following progress was made:

  1. The changeover from Automake to CMake results in a much cleaner development environment and better test output. The changes can be found in the wip-wjw-freebsd-cmake branch.

  2. Throttling code has been overhauled to prevent livelocks. These mainly occur on &os;, but also manifest on Linux.

  3. Fixed a few more tests. On one occasion, I was able to complete the full test set without errors.

11-CURRENT is used to compile and build-test Ceph. The Clang toolset needs to be at least version 3.7, as Clang 3.4 does not have all of the capabilities required to compile everything.


This setup will get things running for &os;:


Parts Not Yet Included:


Tests Not Yet Included:

  1. The current and foremost task is to get the test set to complete without errors.

  2. Build an automated test platform that will build ceph/master on &os; and report the results back to the Ceph developers. This will increase the maintainability of the &os; side of things, as developers are signaled when they are using Linux-isms that will not compile or run on &os;. Ceph has several projects that support this: Jenkins, teuthology, and pulpito. But even a while { compile } loop that reports the build data on a static webpage is a good start.

  3. Run integration tests to see if the &os; daemons will work with a Linux Ceph platform.

  4. Get the currently excluded Python tests to work.

  5. Compile and test the userspace RBD (Rados Block Device).

  6. Investigate whether an in-kernel RBD device could be developed, à la ggate.

  7. Investigate the keystore, which currently prevents the building of CephFS and some other parts.

  8. Integrate the &os; /etc/rc.d init scripts into the Ceph stack, for testing and for running Ceph on production machines.