diff --git a/documentation/content/en/books/handbook/virtualization/_index.adoc b/documentation/content/en/books/handbook/virtualization/_index.adoc
--- a/documentation/content/en/books/handbook/virtualization/_index.adoc
+++ b/documentation/content/en/books/handbook/virtualization/_index.adoc
@@ -493,16 +493,13 @@
 == FreeBSD as a Host with bhyve
 
 The bhyve BSD-licensed hypervisor became part of the base system with FreeBSD 10.0-RELEASE.
-This hypervisor supports a number of guests, including FreeBSD, OpenBSD, and many Linux(R) distributions.
+This hypervisor supports a number of guests, including FreeBSD, OpenBSD, many Linux(R) distributions, and Microsoft Windows(R).
 By default, bhyve provides access to serial console and does not emulate a graphical console.
 Virtualization offload features of newer CPUs are used to avoid the legacy methods of translating instructions and manually managing memory mappings.
 
 The bhyve design requires a processor that supports Intel(R) Extended Page Tables (EPT) or AMD(R) Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT).
 Hosting Linux(R) guests or FreeBSD guests with more than one vCPU requires VMX unrestricted mode support (UG).
-Most newer processors, specifically the Intel(R) Core(TM) i3/i5/i7 and Intel(R) Xeon(TM) E3/E5/E7, support these features.
-UG support was introduced with Intel's Westmere micro-architecture.
-For a complete list of Intel(R) processors that support EPT, refer to https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&0_ExtendedPageTables=True[].
-RVI is found on the third generation and later of the AMD Opteron(TM) (Barcelona) processors.
+
 The easiest way to tell if a processor supports bhyve is to run `dmesg` or look in [.filename]#/var/run/dmesg.boot# for the `POPCNT` processor feature flag on the `Features2` line for AMD(R) processors or `EPT` and `UG` on the `VT-x` line for Intel(R) processors.
 
 [[virtualization-bhyve-prep]]
@@ -516,7 +513,7 @@
 # kldload vmm
 ....
 
-Then, create a [.filename]#tap# interface for the network device in the virtual machine to attach to.
+There are several ways to connect a virtual machine guest to a host's network; one straightforward way to accomplish this is to create a [.filename]#tap# interface for the network device in the virtual machine to attach to.
 In order for the network device to participate in the network, also create a bridge interface containing the [.filename]#tap# interface and the physical interface as members.
 In this example, the physical interface is _igb0_:
 
@@ -545,19 +542,34 @@
 [source,shell]
 ....
-# fetch https://download.freebsd.org/releases/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-amd64-bootonly.iso
-FreeBSD-13.1-RELEASE-amd64-bootonly.iso 366 MB 16 MBps 22s
+# fetch https://download.freebsd.org/releases/ISO-IMAGES/14.0/FreeBSD-14.0-RELEASE-amd64-bootonly.iso
+FreeBSD-14.0-RELEASE-amd64-bootonly.iso 426 MB 16 MBps 22s
+....
+
+FreeBSD comes with an example script `vmrun.sh` for running a virtual machine in bhyve.
+It will start the virtual machine and run it in a loop, so it will automatically restart if it crashes.
+`vmrun.sh` takes a number of options to control the configuration of the machine, including:
+
+* `-c` controls the number of virtual CPUs,
+* `-m` limits the amount of memory available to the guest,
+* `-t` defines which [.filename]#tap# device to use,
+* `-d` indicates which disk image to use,
+* `-i` tells bhyve to boot from the CD image instead of the disk, and
+* `-I` defines which CD image to use.
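+
+For example, a guest that has already been installed to [.filename]#guest.img# could later be booted from its disk image with a command along these lines; the [.filename]#tap0# device and the guest name are reused from the installation example below and are illustrative only:
+
+[source,shell]
+....
+# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img guestname
+....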
+
+The last parameter is the name of the virtual machine and is used to track the running machines.
+You can use the following command to get a list of all available options:
+
+[source,shell]
+....
+# sh /usr/share/examples/bhyve/vmrun.sh --usage
+....
 
-FreeBSD comes with an example script for running a virtual machine in bhyve.
-The script will start the virtual machine and run it in a loop, so it will automatically restart if it crashes.
-The script takes a number of options to control the configuration of the machine: `-c` controls the number of virtual CPUs, `-m` limits the amount of memory available to the guest, `-t` defines which [.filename]#tap# device to use, `-d` indicates which disk image to use, `-i` tells bhyve to boot from the CD image instead of the disk, and `-I` defines which CD image to use.
-The last parameter is the name of the virtual machine, used to track the running machines.
 This example starts the virtual machine in installation mode:
 
 [source,shell]
 ....
-# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img -i -I FreeBSD-13.1-RELEASE-amd64-bootonly.iso guestname
+# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img -i -I FreeBSD-14.0-RELEASE-amd64-bootonly.iso guestname
 ....
 
 The virtual machine will boot and start the installer.
 
@@ -576,18 +588,20 @@
 [[virtualization-bhyve-linux]]
 === Creating a Linux(R) Guest
 
-In order to boot operating systems other than FreeBSD, the package:sysutils/grub2-bhyve[] port must be first installed.
+Linux(R) guests can be booted like any other regular crossref:virtualization[virtualization-bhyve-uefi,"UEFI-based guest"] virtual machine, or alternatively with the package:sysutils/grub2-bhyve[] port.
 
-Next, create a file to use as the virtual disk for the guest machine:
+To take the latter approach, first ensure that the port is installed, then create a file to use as the virtual disk for the guest machine:
 
 [source,shell]
 ....
 # truncate -s 16G linux.img
 ....
 
-Starting a virtual machine with bhyve is a two step process.
-First a kernel must be loaded, then the guest can be started.
-The Linux(R) kernel is loaded with package:sysutils/grub2-bhyve[].
+Starting a Linux(R) virtual machine with `grub2-bhyve` is a two-step process.
+
+. First, the Linux(R) kernel is loaded with package:sysutils/grub2-bhyve[].
+. Then, the guest can be started.
+
 Create a [.filename]#device.map# that grub will use to map the virtual devices to the files on the host system:
 
 [.programlisting]
@@ -676,7 +690,7 @@
 [[virtualization-bhyve-uefi]]
 === Booting bhyve Virtual Machines with UEFI Firmware
 
-In addition to bhyveload and grub-bhyve, the bhyve hypervisor can also boot virtual machines using the UEFI userspace firmware.
+In addition to `bhyveload` and `grub-bhyve`, the bhyve hypervisor can also boot virtual machines using UEFI firmware.
 This option may support guest operating systems that are not supported by the other loaders.
 
 In order to make use of the UEFI support in bhyve, first obtain the UEFI firmware images.
@@ -694,6 +708,28 @@
 guest
 ....
 
+To allow a guest to store UEFI variables, you can use a variables file appended to the `-l bootrom` argument.
+Note that bhyve will write guest modifications to the given variables file.
+Therefore, be sure to first create a per-guest copy of the variables template file:
+
+[source,shell]
+....
+# cp /usr/local/share/uefi-firmware/BHYVE_UEFI_VARS.fd /path/to/vm-image/BHYVE_UEFI_VARS.fd
+....
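+
+When running several guests, each one needs its own copy of the template.
+As a sketch, assuming two guests named _guest1_ and _guest2_ that keep their files under [.filename]#/vm/#, the copies could be created in one loop:
+
+[source,shell]
+....
+# for vm in guest1 guest2; do cp /usr/local/share/uefi-firmware/BHYVE_UEFI_VARS.fd /vm/${vm}/BHYVE_UEFI_VARS.fd; done
+....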
+
+Then, add that variables file into your bhyve arguments:
+
+[source,shell]
+....
+# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
+-s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
+-s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
+-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd,/path/to/vm-image/BHYVE_UEFI_VARS.fd \
+guest
+....
+
+You can use man:efivar[8] to view and modify the variables file contents from the host.
+
 package:sysutils/bhyve-firmware[] also contains a CSM-enabled firmware, to boot guests with no UEFI support in legacy BIOS mode:
 
 [source,shell]
 ....
@@ -749,6 +785,15 @@
 -l com1,stdio -c 4 -m 1024M linuxguest
 ....
 
+If you are using ZFS for the host as well as inside a guest, keep in mind the competing memory pressure of both systems caching the virtual machine's contents.
+To alleviate this, consider setting the host's ZFS filesystems to use metadata-only cache.
+To do this, apply the following setting to ZFS filesystems on the host, replacing `<name>` with the zvol dataset name of the virtual machine:
+
+[source,shell]
+....
+# zfs set primarycache=metadata <name>
+....
+
 [[virtualization-bhyve-nmdm]]
 === Virtual Machine Consoles
 
@@ -771,6 +816,16 @@
 handbook login:
 ....
 
+To disconnect from a console, enter a newline (i.e. press `RETURN`) followed by tilde (`~`), and finally dot (`.`).
+Keep in mind that only the connection is dropped; the login session remains active.
+Another user connecting to the same console could therefore make use of any active sessions without having to first authenticate.
+For security reasons, it is recommended to log out before disconnecting.
+
+The number in the [.filename]#nmdm# device path must be unique for each virtual machine and must not be used by any other processes before bhyve starts.
+The number can be chosen arbitrarily and does not need to be taken from a consecutive sequence of numbers.
+The device node pair (e.g. [.filename]#/dev/nmdm0a# and [.filename]#/dev/nmdm0b#) is created dynamically when bhyve connects its console and destroyed when it shuts down.
+Keep this in mind when creating scripts to start your virtual machines: you need to make sure that all virtual machines are assigned unique [.filename]#nmdm# devices.
+
 [[virtualization-bhyve-managing]]
 === Managing Virtual Machines
 
@@ -795,6 +850,58 @@
 # bhyvectl --destroy --vm=guestname
 ....
 
+Destroying a virtual machine this way means killing it immediately. Any unsaved data will be lost, and open files and filesystems may become corrupted.
+To gracefully shut down a virtual machine, you can instead send a `TERM` signal to its bhyve process. This triggers an ACPI shutdown event for the guest:
+
+[source,shell]
+....
+# ps ax | grep bhyve
+17424 - SC 56:48.27 bhyve: guestvm (bhyve)
+# kill 17424
+....
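+
+When scripting this, a minimal sketch might look like the following; it assumes a guest named _guestvm_, looks up the process with man:pgrep[1] based on the process title shown above, waits for the guest to power off, and then frees the instance with `bhyvectl` as described earlier.
+Adjust the guest name and pattern to your setup:
+
+[source,shell]
+....
+#!/bin/sh
+# Ask the bhyve guest "guestvm" to shut down via ACPI, wait for it to
+# power off, then destroy the VM instance so the name can be reused.
+vm="guestvm"
+
+pid=$(pgrep -f "bhyve: ${vm}")
+if [ -n "${pid}" ]; then
+    # The default signal is SIGTERM, which bhyve delivers to the guest
+    # as an ACPI shutdown event.
+    kill ${pid}
+    # Wait until the bhyve process has exited.
+    while kill -0 ${pid} 2>/dev/null; do
+        sleep 1
+    done
+fi
+
+bhyvectl --destroy --vm="${vm}"
+....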
+
+[[virtualization-tools-utilities]]
+=== Tools and Utilities
+
+There are numerous utilities and applications available in ports to help simplify setting up and managing bhyve virtual machines:
+
+.bhyve Managers
+[options="header", cols="1,1,1,1"]
+|===
+| Name | License | Package | Documentation
+
+| vm-bhyve
+| BSD-2
+| package:sysutils/vm-bhyve[]
+| link:https://github.com/churchers/vm-bhyve[Documentation]
+
+| CBSD
+| BSD-2
+| package:sysutils/cbsd[]
+| link:https://www.bsdstore.ru[Documentation]
+
+| Virt-Manager
+| LGPL-3
+| package:deskutils/virt-manager[]
+| link:https://virt-manager.org/[Documentation]
+
+| Bhyve RC Script
+| Unknown
+| package:sysutils/bhyve-rc[]
+| link:https://www.freshports.org/sysutils/bhyve-rc/[Documentation]
+
+| bmd
+| Unknown
+| package:sysutils/bmd[]
+| link:https://github.com/yuichiro-naito/bmd[Documentation]
+
+| vmstated
+| BSD-2
+| package:sysutils/vmstated[]
+| link:https://github.com/christian-moerz/vmstated[Documentation]
+
+|===
+
 [[virtualization-bhyve-onboot]]
 === Persistent Configuration
 
@@ -965,7 +1072,7 @@
 
 [source,shell]
 ....
-# fetch https://download.freebsd.org/releases/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-amd64-bootonly.iso -o freebsd.iso
+# fetch https://download.freebsd.org/releases/ISO-IMAGES/14.0/FreeBSD-14.0-RELEASE-amd64-bootonly.iso -o freebsd.iso
 ....
 
 A ZFS volume of 20 GB called [.filename]#xendisk0# is created to serve as the disk space for the VM.