diff --git a/documentation/content/en/books/handbook/virtualization/_index.adoc b/documentation/content/en/books/handbook/virtualization/_index.adoc
--- a/documentation/content/en/books/handbook/virtualization/_index.adoc
+++ b/documentation/content/en/books/handbook/virtualization/_index.adoc
@@ -138,7 +138,7 @@
After this change the usage will be closer to 5%.
. Create a New Kernel Configuration File
+
-All of the SCSI, FireWire, and USB device drivers can be removed from a custom kernel configuration file.
+All SCSI, FireWire, and USB device drivers can be removed from a custom kernel configuration file.
Parallels provides a virtual network adapter used by the man:ed[4] driver, so all network devices except for man:ed[4] and man:miibus[4] can be removed from the kernel.
. Configure Networking
+
@@ -244,7 +244,7 @@
After this change, the usage will be closer to 5%.
. Create a New Kernel Configuration File
+
-All of the FireWire, and USB device drivers can be removed from a custom kernel configuration file.
+All FireWire and USB device drivers can be removed from a custom kernel configuration file.
VMware Fusion provides a virtual network adapter used by the man:em[4] driver, so all network devices except for man:em[4] can be removed from the kernel.
. Configure Networking
+
@@ -493,14 +493,20 @@
== FreeBSD as a Host with bhyve
The bhyve BSD-licensed hypervisor became part of the base system with FreeBSD 10.0-RELEASE.
-This hypervisor supports a number of guests, including FreeBSD, OpenBSD, many Linux(R) distributions, and Microsoft Windows(R).
-By default, bhyve provides access to serial console and does not emulate a graphical console.
+This hypervisor supports several guests, including FreeBSD, OpenBSD, many Linux(R) distributions, and Microsoft Windows(R).
+By default, bhyve provides access to a serial console and does not emulate a graphical console.
Virtualization offload features of newer CPUs are used to avoid the legacy methods of translating instructions and manually managing memory mappings.
-The bhyve design requires a processor that supports Intel(R) Extended Page Tables (EPT) or AMD(R) Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT).
+The bhyve design requires
+
+* an Intel(R) processor that supports Intel Extended Page Tables (EPT),
+* or an AMD(R) processor that supports AMD Rapid Virtualization Indexing (RVI), or Nested Page Tables (NPT),
+* or an ARM(R) aarch64 CPU.
+
+Only pure ARMv8.0 virtualization is supported on ARM; the Virtualization Host Extensions are not currently used.
Hosting Linux(R) guests or FreeBSD guests with more than one vCPU requires VMX unrestricted mode support (UG).
-The easiest way to tell if a processor supports bhyve is to run `dmesg` or look in [.filename]#/var/run/dmesg.boot# for the `POPCNT` processor feature flag on the `Features2` line for AMD(R) processors or `EPT` and `UG` on the `VT-x` line for Intel(R) processors.
+The easiest way to tell if an Intel or AMD processor supports bhyve is to run `dmesg` or look in [.filename]#/var/run/dmesg.boot# for the `POPCNT` processor feature flag on the `Features2` line for AMD(R) processors or `EPT` and `UG` on the `VT-x` line for Intel(R) processors.
[[virtualization-bhyve-prep]]
=== Preparing the Host
@@ -514,7 +520,7 @@
....
There are several ways to connect a virtual machine guest to a host's network; one straightforward way to accomplish this is to create a [.filename]#tap# interface for the network device in the virtual machine to attach to.
-In order for the network device to participate in the network, also create a bridge interface containing the [.filename]#tap# interface and the physical interface as members.
+For the network device to participate in the network, also create a bridge interface containing the [.filename]#tap# interface and the physical interface as members.
In this example, the physical interface is _igb0_:
[source,shell]
....
@@ -548,7 +554,7 @@
FreeBSD comes with an example script `vmrun.sh` for running a virtual machine in bhyve.
It will start the virtual machine and run it in a loop, so it will automatically restart if it crashes.
-`vmrun.sh` takes a number of options to control the configuration of the machine, including:
+`vmrun.sh` takes several options to control the configuration of the machine, including:
* `-c` controls the number of virtual CPUs,
* `-m` limits the amount of memory available to the guest,
@@ -557,27 +563,28 @@
* `-i` tells bhyve to boot from the CD image instead of the disk, and
* `-I` defines which CD image to use.
-The last parameter is the name of the virtual machine and used to track the running machines.
-You can use the following command to get a list of all available program argument options:
+The last parameter is the name of the virtual machine and is used to track the running machines.
+The following command lists all available program argument options:
[source,shell]
....
-# sh /usr/share/examples/bhyve/vmrun.sh --usage
+# sh /usr/share/examples/bhyve/vmrun.sh -h
....
This example starts the virtual machine in installation mode:
[source,shell]
....
-# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img -i -I FreeBSD-14.0-RELEASE-amd64-bootonly.iso guestname
+# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img \
+ -i -I FreeBSD-14.0-RELEASE-amd64-bootonly.iso guestname
....
The virtual machine will boot and start the installer.
-After installing a system in the virtual machine, when the system asks about dropping in to a shell at the end of the installation, choose btn:[Yes].
+After installing a system in the virtual machine, when the system asks about dropping into a shell at the end of the installation, choose btn:[Yes].
Reboot the virtual machine.
While rebooting the virtual machine causes bhyve to exit, the [.filename]#vmrun.sh# script runs `bhyve` in a loop and will automatically restart it.
-When this happens, choose the reboot option from the boot loader menu in order to escape the loop.
+When this happens, choose the reboot option from the boot loader menu to escape the loop.
Now the guest can be started from the virtual disk:
[source,shell]
....
@@ -597,7 +604,7 @@
# truncate -s 16G linux.img
....
-Starting a Linux virtual machine with `grub2-bhyve` is a two step process.
+Starting a Linux virtual machine with `grub2-bhyve` is a two-step process.
. First a kernel must be loaded, then the guest can be started.
. The Linux(R) kernel is loaded with package:sysutils/grub2-bhyve[].
@@ -637,8 +644,9 @@
[source,shell]
....
-# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,./linux.img \
- -s 4:0,ahci-cd,./somelinux.iso -l com1,stdio -c 4 -m 1024M linuxguest
+# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 \
+ -s 3:0,virtio-blk,./linux.img -s 4:0,ahci-cd,./somelinux.iso \
+ -l com1,stdio -c 4 -m 1024M linuxguest
....
The system will boot and start the installer.
@@ -693,7 +701,7 @@
In addition to `bhyveload` and `grub-bhyve`, the bhyve hypervisor can also boot virtual machines using the UEFI firmware.
This option may support guest operating systems that are not supported by the other loaders.
-In order to make use of the UEFI support in bhyve, first obtain the UEFI firmware images.
+To make use of the UEFI support in bhyve, first obtain the UEFI firmware images.
This can be done by installing package:sysutils/bhyve-firmware[] port or package.
With the firmware in place, add the flags `-l bootrom,_/path/to/firmware_` to your bhyve command line.
@@ -702,10 +710,10 @@
[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
--s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
--s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
--l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
-guest
+ -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
+ -s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
+ -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
+ guest
....
To allow a guest to store UEFI variables, you can use a variables file appended to the `-l` flag.
@@ -722,23 +730,29 @@
[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
--s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
--s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
--l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd,/path/to/vm-image/BHYVE_UEFI_VARS.fd \
-guest
+ -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
+ -s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
+ -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd,/path/to/vm-image/BHYVE_UEFI_VARS.fd \
+ guest
....
-You can use man:efivar[8] to view and modify the variables file contents from the host.
+[NOTE]
+====
+Some Linux distributions require the use of UEFI variables to store the path for their UEFI boot file (using `linux64.efi` or `grubx64.efi` instead of `bootx64.efi`, for example).
+It is therefore recommended to use a variables file for Linux virtual machines to avoid having to manually alter the boot partition files.
+====
+
+To view or modify the variables file contents, use man:efivar[8] from the host.
package:sysutils/bhyve-firmware[] also contains a CSM-enabled firmware, to boot guests with no UEFI support in legacy BIOS mode:
[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
--s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
--s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
--l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CSM.fd \
-guest
+ -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
+ -s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
+ -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CSM.fd \
+ guest
....
[[virtualization-bhyve-framebuffer]]
@@ -756,16 +770,62 @@
[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 31:0,lpc \
--s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
--s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
--s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait \
--s 30,xhci,tablet \
--l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
-guest
+ -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
+ -s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
+ -s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait \
+ -s 30,xhci,tablet \
+ -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
+ guest
....
Note, in BIOS emulation mode, the framebuffer will cease receiving updates once control is passed from firmware to guest operating system.
+[[virtualization-bhyve-windows]]
+=== Creating a Microsoft Windows(R) Guest
+
+Setting up a guest for Windows version 10 or earlier can be done directly from the original installation media and is a relatively straightforward process.
+Aside from minimum resource requirements, running Windows as a guest requires
+
+* wiring virtual machine memory (flag `-w`) and
+* booting with a UEFI bootrom.
+
+An example of booting a virtual machine guest from a Windows installation ISO:
+
+[source,shell]
+....
+# bhyve \
+ -c 2 \
+ -s 0,hostbridge \
+ -s 3,nvme,windows2016.img \
+ -s 4,ahci-cd,install.iso \
+ -s 10,virtio-net,tap0 \
+ -s 31,lpc \
+ -s 30,xhci,tablet \
+ -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
+ -m 8G -H -w \
+ windows2016
+....
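The Windows installer is graphical, so a display is needed to interact with it. A VNC framebuffer and a USB tablet device can be added to the command above, reusing the `fbuf` and `xhci` syntax shown in the framebuffer section; this is a sketch, and the slot numbers, resolution, and listen address chosen here are illustrative assumptions:

```shell
# bhyve \
 -c 2 \
 -s 0,hostbridge \
 -s 3,nvme,windows2016.img \
 -s 4,ahci-cd,install.iso \
 -s 10,virtio-net,tap0 \
 -s 29,fbuf,tcp=127.0.0.1:5900,w=1024,h=768,wait \
 -s 30,xhci,tablet \
 -s 31,lpc \
 -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
 -m 8G -H -w \
 windows2016
```

The `wait` option holds the guest until a VNC client connects, which is convenient while the installer is running.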
+
+Only one or two vCPUs should be used during installation, but this number can be increased once Windows is installed.
+
+link:https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md[VirtIO drivers] must be installed to use the defined `virtio-net` network interface.
+An alternative is to switch to E1000 (Intel E82545) emulation by changing `virtio-net` to `e1000` in the above command line.
+However, performance will be impacted.
+
+[[virtualization-bhyve-windows-win11]]
+==== Creating a Windows 11 Guest
+
+Beginning with Windows 11, Microsoft introduced a hardware requirement for a TPM 2 module.
+bhyve supports passing a hardware TPM through to a guest.
+The installation media can be modified to disable the relevant hardware checks.
+A detailed description for this process can be found on the link:https://wiki.freebsd.org/bhyve/Windows#iso-remaster[FreeBSD Wiki].
+
+[WARNING]
+====
+Modifying Windows installation media and running Windows guests without a TPM module are unsupported by the manufacturer.
+Consider your application and use case before implementing such approaches.
+====
+
[[virtualization-bhyve-zfs]]
=== Using ZFS with bhyve Guests
@@ -781,8 +841,9 @@
[source,shell]
....
-# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s3:0,virtio-blk,/dev/zvol/zroot/linuxdisk0 \
- -l com1,stdio -c 4 -m 1024M linuxguest
+# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 \
+ -s 3:0,virtio-blk,/dev/zvol/zroot/linuxdisk0 \
+ -l com1,stdio -c 4 -m 1024M linuxguest
....
If you are using ZFS for the host as well as inside a guest, keep in mind the competing memory pressure of both systems caching the virtual machine's contents.
@@ -794,6 +855,442 @@
# zfs set primarycache=metadata
....
+[[virtualization-bhyve-snapshot]]
+=== Creating a Virtual Machine Snapshot
+
+Modern hypervisors allow their users to create "snapshots" of their state;
+such a snapshot includes a guest's disk, CPU, and memory contents.
+A snapshot can usually be taken independently of whether the guest is running or shut down.
+One can then reset and return the virtual machine to the precise state when the snapshot was taken.
+
+[[virtualization-bhyve-snapshot-zfs]]
+==== ZFS Snapshots
+
+Using ZFS volumes as the backing storage for a virtual machine enables the snapshotting of the guest's disk. For example:
+
+[source,shell]
+....
+# zfs snapshot zroot/path/to/zvol@snapshot_name
+....
+
+Though it is possible to snapshot a ZFS volume this way while the guest is running, keep in mind that the contents of the virtual disk may be in an inconsistent state while the guest is active.
+It is therefore recommended to first shut down or pause the guest before executing this command.
+Pausing a guest is not supported by default and needs to be enabled first (see crossref:virtualization[virtualization-bhyve-snapshot-builtin,Memory and CPU Snapshots]).
+
+[WARNING]
+====
+Rolling back a ZFS zvol to a snapshot while a virtual machine is using it may corrupt the file system contents and crash the guest.
+All unsaved data in the guest will be lost, and modifications since the last snapshot may get destroyed.
+
+A second rollback may be required once the virtual machine is shut down to restore the file system to a usable state.
+This in turn will ultimately destroy any changes made after the snapshot.
+====
+
+[[virtualization-bhyve-snapshot-builtin]]
+==== Memory and CPU Snapshots (Experimental Feature)
+
+As of FreeBSD 13, bhyve has an experimental "snapshot" feature for dumping a guest's memory and CPU state to a file and then halting the virtual machine.
+The guest can be resumed from the snapshot file contents later.
+
+However, this feature is not enabled by default and requires the system to be rebuilt from source.
+See crossref:cutting-edge[updating-src-building, Building from Source] for an in-depth description of the process of compiling the kernel with custom options.
+
+[WARNING]
+====
+The functionality is not ready for production use and is limited to specific virtual machine configurations.
+There are multiple limitations:
+
+* `nvme` and `virtio-blk` storage backends do not work yet
+* snapshots are only supported when the guest uses a single kind of each device, e.g. if there is more than one `ahci-hd` disk attached, snapshot creation will fail
+* the feature may be reasonably stable on Intel(R) CPUs, but it will probably not work on AMD(R) CPUs.
+====
+
+[NOTE]
+====
+Make sure the [.filename]#/usr/src# directory is up to date before taking the following steps. See crossref:cutting-edge[updating-src-obtaining-src, Updating the Source] for the detailed procedure on how to do this.
+====
+
+First, add the following to [.filename]#/etc/src.conf#:
+
+[.programlisting]
+....
+WITH_BHYVE_SNAPSHOT=yes
+BHYVE_SNAPSHOT=1
+MK_BHYVE_SNAPSHOT=yes
+....
+
+[NOTE]
+====
+If the system was partially or wholly rebuilt, it is recommended to run
+
+[source,shell]
+....
+# cd /usr/src
+# make cleanworld
+....
+
+before proceeding.
+====
+
+Then follow the steps outlined in the crossref:cutting-edge[updating-src-quick-start,"Quick Start section of the Updating FreeBSD from Source"] chapter to build and install world and kernel.
+
+To verify successful activation of the snapshot feature, enter
+
+[source,shell]
+....
+# bhyvectl --usage
+....
+
+and check if the output lists a `--suspend` flag.
+If the flag is missing, the feature did not activate correctly.
+
+Then, you can snapshot and suspend a running virtual machine of your choice:
+
+[source,shell]
+....
+# bhyvectl --vm=vmname --suspend=/path/to/snapshot/filename
+....
+
+[NOTE]
+====
+Provide an absolute path and filename to `--suspend`.
+Otherwise, bhyve will write the snapshot data to whichever directory it was started from.
+
+Make sure to write the snapshot data to a secure directory.
+The generated output contains a full memory dump of the guest and may thus contain sensitive data (e.g. passwords)!
+====
+
+This creates three files:
+
+* memory snapshot - named like the input to `--suspend`
+* kernel file - named like the input to `--suspend` with the suffix [.filename]#.kern#
+* metadata - contains metadata about the system state, named with the suffix [.filename]#.meta#
+
+To restore a guest from a snapshot, use the `-r` flag with `bhyve`:
+
+[source,shell]
+....
+# bhyve -r /path/to/snapshot/filename
+....
+
+Restoring a guest snapshot on a different CPU architecture will not work.
+Generally, attempting to restore on a system not identical to the snapshot creator will likely fail.
+
+[[virtualization-bhyve-jailed]]
+=== Jailing bhyve
+
+For improved security and separation of virtual machines from the host operating system, it is possible to run bhyve in a jail.
+See crossref:jails[,Jails] for an in-depth description of jails and their security benefits.
+
+[[virtualization-bhyve-jailed-creation]]
+==== Creating a Jail for bhyve
+
+First, create a jail environment. If using a UFS file system, simply run:
+
+[source,shell]
+....
+# mkdir -p /jails/bhyve
+....
+
+If using a crossref:zfs[,ZFS filesystem], use the following commands:
+
+[source,shell]
+....
+# zfs create zroot/jails
+# zfs create zroot/jails/bhyve
+....
+
+Then create a ZFS zvol for the virtual machine `bhyvevm0`:
+
+[source,shell]
+....
+# zfs create zroot/vms
+# zfs create -V 20G zroot/vms/bhyvevm0
+....
+
+If not using ZFS, use the following commands to create a disk image file directly in the jail directory structure:
+
+[source,shell]
+....
+# mkdir /jails/bhyve/vms
+# truncate -s 20G /jails/bhyve/vms/bhyvevm0
+....
+
+Download a FreeBSD image, preferably a version equal to or older than the host, and extract it into the jail directory:
+
+[source,shell]
+....
+# cd /jails
+# fetch -o base.txz http://ftp.freebsd.org/pub/FreeBSD/releases/amd64/13.2-RELEASE/base.txz
+# tar -C /jails/bhyve -xvf base.txz
+....
+
+[NOTE]
+====
+Running a higher FreeBSD version in a jail than the host is unsupported (e.g. running 14.0-RELEASE in a jail on a 13.2-RELEASE host).
+====
+
+Next, add a devfs ruleset to [.filename]#/etc/devfs.rules#:
+
+[.programlisting]
+....
+[devfsrules_jail_bhyve=100]
+add include $devfsrules_hide_all
+add include $devfsrules_unhide_login
+add path 'urandom' unhide
+add path 'random' unhide
+add path 'crypto' unhide
+add path 'shm' unhide
+add path 'zero' unhide
+add path 'null' unhide
+add path 'mem' unhide
+add path 'vmm' unhide
+add path 'vmm/*' unhide
+add path 'vmm.io' unhide
+add path 'vmm.io/*' unhide
+add path 'nmdmbhyve*' unhide
+add path 'zvol' unhide
+add path 'zvol/zroot' unhide
+add path 'zvol/zroot/vms' unhide
+add path 'zvol/zroot/vms/bhyvevm0' unhide
+add path 'zvol/zroot/vms/bhyvevm1' unhide
+add path 'tap10*' unhide
+....
+
+[NOTE]
+====
+If there is another devfs rule with the numeric ID 100 in your [.filename]#/etc/devfs.rules# file, replace the ID in the listing above with another, currently unused ID number.
+====
+
+[NOTE]
+====
+If not using a ZFS filesystem, skip the related zvol rules in [.filename]#/etc/devfs.rules#:
+
+[.programlisting]
+....
+add path 'zvol' unhide
+add path 'zvol/zroot' unhide
+add path 'zvol/zroot/vms' unhide
+add path 'zvol/zroot/vms/bhyvevm0' unhide
+add path 'zvol/zroot/vms/bhyvevm1' unhide
+....
+====
+
+These rules will allow bhyve to
+
+* create a virtual machine with disk volumes called `bhyvevm0` and `bhyvevm1`,
+* use [.filename]#tap# network interfaces with the name prefix `tap10`.
+That means valid interface names will be `tap10`, `tap100`, `tap101`, ... `tap109`, `tap1000`, and so on.
++
+Limiting the access to a subset of possible [.filename]#tap# interface names will prevent the jail (and thus bhyve) from seeing [.filename]#tap# interfaces of the host and other jails.
+* use [.filename]#nmdm# devices prefixed with "bhyve", i.e. [.filename]#/dev/nmdmbhyve0#.
+
+Those rules can be expanded and varied with different guest and interface names as desired.
+
+[NOTE]
+====
+If you intend to use bhyve on the host as well as in one or more jails, remember that [.filename]#tap# and [.filename]#nmdm# device names operate in a shared environment.
+For example, you can use [.filename]#/dev/nmdmbhyve0# either for bhyve on the host or in a jail, but not in both.
+====
+
+Restart devfs for the changes to be loaded:
+
+[source,shell]
+....
+# service devfs restart
+....
+
+Then add a definition for your new jail into [.filename]#/etc/jail.conf# or [.filename]#/etc/jail.conf.d#.
+Replace the interface number `$if` and the IP address with your own values.
+
+.Using NAT or routed traffic with a firewall
+[example]
+====
+[.programlisting]
+....
+bhyve {
+ $if = 0;
+ exec.prestart = "/sbin/ifconfig epair${if} create up";
+ exec.prestart += "/sbin/ifconfig epair${if}a up";
+ exec.prestart += "/sbin/ifconfig epair${if}a name ${name}0";
+ exec.prestart += "/sbin/ifconfig epair${if}b name jail${if}";
+ exec.prestart += "/sbin/ifconfig ${name}0 inet 192.168.168.1/27";
+ exec.prestart += "/sbin/sysctl net.inet.ip.forwarding=1";
+
+ exec.clean;
+
+ host.hostname = "your-hostname-here";
+ vnet;
+ vnet.interface = "em${if}";
+ path = "/jails/${name}";
+ persist;
+ securelevel = 3;
+ devfs_ruleset = 100;
+ mount.devfs;
+
+ allow.vmm;
+
+ exec.start += "/bin/sh /etc/rc";
+ exec.stop = "/bin/sh /etc/rc.shutdown";
+
+ exec.poststop += "/sbin/ifconfig ${name}0 destroy";
+}
+....
+
+This example assumes use of a firewall like `pf` or `ipfw` to NAT your jail traffic.
+See the crossref:firewalls[,Firewalls] chapter for more details on the available options to implement this.
+====
+
+.Using a bridged network connection
+[example]
+====
+[.programlisting]
+....
+bhyve {
+ $if = 0;
+ exec.prestart = "/sbin/ifconfig epair${if} create up";
+ exec.prestart += "/sbin/ifconfig epair${if}a up";
+ exec.prestart += "/sbin/ifconfig epair${if}a name ${name}0";
+ exec.prestart += "/sbin/ifconfig epair${if}b name jail${if}";
+ exec.prestart += "/sbin/ifconfig bridge0 addm ${name}0";
+ exec.prestart += "/sbin/sysctl net.inet.ip.forwarding=1";
+
+ exec.clean;
+
+ host.hostname = "your-hostname-here";
+ vnet;
+ vnet.interface = "em${if}";
+ path = "/jails/${name}";
+ persist;
+ securelevel = 3;
+ devfs_ruleset = 100;
+ mount.devfs;
+
+ allow.vmm;
+
+ exec.start += "/bin/sh /etc/rc";
+ exec.stop = "/bin/sh /etc/rc.shutdown";
+
+ exec.poststop += "/sbin/ifconfig ${name}0 destroy";
+}
+....
+====
+
+[NOTE]
+====
+If you previously replaced the devfs ruleset ID 100 in [.filename]#/etc/devfs.rules# with your own unique number, remember to replace the numeric ID in your [.filename]#jail.conf# as well.
+====
+
+[[virtualization-bhyve-jailed-config]]
+==== Configuring the Jail
+
+To start the jail for the first time and do some additional configuration work, enter:
+
+[source,shell]
+....
+# cp /etc/resolv.conf /jails/bhyve/etc
+# service jail onestart bhyve
+# jexec bhyve
+# sysrc ifconfig_jail0="inet 192.168.168.2/27"
+# sysrc defaultrouter="192.168.168.1"
+# sysrc sendmail_enable=NONE
+# sysrc cloned_interfaces="tap100"
+# exit
+....
+
+Enable and restart the jail:
+
+[source,shell]
+....
+# sysrc jail_enable=YES
+# service jail restart bhyve
+....
+
+Afterwards, you can create a virtual machine within the jail.
+For a FreeBSD guest, download an installation ISO first:
+
+[source,shell]
+....
+# jexec bhyve
+# cd /vms
+# fetch -o freebsd.iso https://download.freebsd.org/releases/ISO-IMAGES/14.0/FreeBSD-14.0-RELEASE-amd64-bootonly.iso
+....
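Optionally, the downloaded ISO can be checked against the published checksums before use. A sketch, assuming the checksum file name and URL follow the usual layout of the release directory on download.freebsd.org:

```shell
# jexec bhyve
# cd /vms
# fetch https://download.freebsd.org/releases/ISO-IMAGES/14.0/CHECKSUM.SHA512-FreeBSD-14.0-RELEASE-amd64
# sha512 freebsd.iso
# grep bootonly CHECKSUM.SHA512-FreeBSD-14.0-RELEASE-amd64
```

Because the image was saved under the local name [.filename]#freebsd.iso#, compare the digest printed by `sha512` with the `bootonly` entry from the checksum file by eye rather than relying on an automatic filename match.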
+
+[[virtualization-bhyve-jailed-createvm]]
+==== Creating a Virtual Machine Inside the Jail
+
+To create a virtual machine, use `bhyvectl` to initialize it first:
+
+[source,shell]
+....
+# jexec bhyve
+# bhyvectl --create --vm=bhyvevm0
+....
+
+[NOTE]
+====
+Creating the guest with `bhyvectl` may be required when initiating the virtual machine from a jail.
+Skipping this step may cause the following error message when starting `bhyve`:
+
+`vm_open: vm-name could not be opened. No such file or directory`
+====
+
+Finally, use your preferred way of starting the guest.
+
+.Starting with `vmrun.sh` and ZFS
+[example]
+====
+Using `vmrun.sh` on a ZFS filesystem:
+
+[source,shell]
+....
+# jexec bhyve
+# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M \
+ -t tap100 -d /dev/zvol/zroot/vms/bhyvevm0 -i -I /vms/FreeBSD-14.0-RELEASE-amd64-bootonly.iso bhyvevm0
+....
+====
+
+.Starting with `vmrun.sh` and UFS
+[example]
+====
+Using `vmrun.sh` on a UFS filesystem:
+
+[source,shell]
+....
+# jexec bhyve
+# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M \
+ -t tap100 -d /vms/bhyvevm0 -i -I /vms/FreeBSD-14.0-RELEASE-amd64-bootonly.iso bhyvevm0
+....
+====
+
+.Starting bhyve for a UEFI guest with ZFS
+[example]
+====
+If instead you want to use a UEFI guest, remember to first install the required firmware package package:sysutils/bhyve-firmware[] in the jail:
+
+[source,shell]
+....
+# pkg -j bhyve install bhyve-firmware
+....
+
+Then use `bhyve` directly:
+
+[source,shell]
+....
+# bhyve -A -c 4 -D -H -m 2G \
+ -s 0,hostbridge \
+ -s 1,lpc \
+ -s 2,virtio-net,tap100 \
+ -s 3,virtio-blk,/dev/zvol/zroot/vms/bhyvevm0 \
+ -s 4,ahci-cd,/vms/FreeBSD-14.0-RELEASE-amd64-bootonly.iso \
+ -s 30,xhci,tablet \
+ -s 31,fbuf,tcp=127.0.0.1:5900,w=1024,h=800 \
+ -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
+ -l com1,/dev/nmdmbhyve0A \
+ bhyvevm0
+....
+
+This will allow you to connect to your virtual machine `bhyvevm0` through VNC as well as a serial console at [.filename]#/dev/nmdmbhyve0B#.
+====
+
[[virtualization-bhyve-nmdm]]
=== Virtual Machine Consoles
@@ -816,7 +1313,7 @@
handbook login:
....
-To disconnect from a console, enter a newline (i.e. press `RETURN`) follwed by tilde (`~`), and finally dot (`.`).
+To disconnect from a console, enter a newline (i.e. press `RETURN`) followed by tilde (`~`), and finally dot (`.`).
Keep in mind that only the connection is dropped while the login session remains active.
Another user connecting to the same console could therefore make use of any active sessions without having to first authenticate.
For security reasons, it's therefore recommended to logout before disconnecting.
@@ -851,7 +1348,7 @@
....
Destroying a virtual machine this way means killing it immediately.
Any unsaved data will be lost, open files and filesystems may get corrupted.
-To gracefully shut down a virtual machine, you can instead send a `TERM` signal to its bhyve process. This triggers an ACPI shutdown event for the guest:
+To gracefully shut down a virtual machine, send a `TERM` signal to its bhyve process instead. This triggers an ACPI shutdown event for the guest:
[source,shell]
....
@@ -905,11 +1402,13 @@
[[virtualization-bhyve-onboot]]
=== Persistent Configuration
-In order to configure the system to start bhyve guests at boot time, the following configurations must be made in the specified files:
+To configure the system to start bhyve guests at boot time, some configuration file changes are required.
[.procedure]
. [.filename]#/etc/sysctl.conf#
+
+When using [.filename]#tap# interfaces as the network backend, you either need to manually set each [.filename]#tap# interface in use to UP, or simply set the following sysctl:
++
[.programlisting]
....
net.link.tap.up_on_open=1
....
@@ -917,13 +1416,39 @@
.
[.filename]#/etc/rc.conf#
+
-[.programlisting]
+To connect your virtual machine's [.filename]#tap# device to the network via a [.filename]#bridge#, you need to persist the device settings in [.filename]#/etc/rc.conf#.
+Additionally, you can load the necessary kernel modules `vmm` for bhyve and `nmdm` for [.filename]#nmdm# devices through the `kld_list` configuration variable.
+When configuring `ifconfig_bridge0`, make sure to replace `<ipaddr>/<netmask>` with the actual IP address of your physical interface ([.filename]#igb0# in this example) and remove IP settings from your physical device.
++
+[source,shell]
....
-cloned_interfaces="bridge0 tap0"
-ifconfig_bridge0="addm igb0 addm tap0"
-kld_list="nmdm vmm"
+# sysrc cloned_interfaces+="bridge0 tap0"
+# sysrc ifconfig_bridge0="inet <ipaddr>/<netmask> addm igb0 addm tap0"
+# sysrc kld_list+="nmdm vmm"
+# sysrc ifconfig_igb0="up"
....
+[[virtualization-bhyve-onboot-bridgenet]]
+.Setting the IP for a bridge device
+[example]
+====
+For a host with an _igb0_ interface connected to the network with IP `10.10.10.1` and netmask `255.255.255.0`, you would use the following commands:
+
+[source,shell]
+....
+# sysrc ifconfig_igb0="up"
+# sysrc ifconfig_bridge0="inet 10.10.10.1/24 addm igb0 addm tap0"
+# sysrc kld_list+="nmdm vmm"
+# sysrc cloned_interfaces+="bridge0 tap0"
+....
+====
+
+[WARNING]
+====
+Modifying the IP address configuration of a system may lock you out if you are executing these commands while you are connected remotely (e.g. via SSH)!
+Take precautions to maintain system access or make those modifications while logged in on a local terminal session.
+====
+
[[virtualization-host-xen]]
== FreeBSD as a Xen(TM)-Host
@@ -993,7 +1518,7 @@
Xen(TM) also requires resources like CPU and memory from the host machine for itself and other DomU domains.
How much CPU and memory depends on the individual requirements and hardware capabilities.
In this example, 8 GB of memory and 4 virtual CPUs are made available for the Dom0.
-The serial console is also activated and logging options are defined.
+The serial console is also activated, and logging options are defined.
The following command is used for Xen 4.7 packages:
@@ -1165,9 +1690,9 @@
==== Host Boot Troubleshooting
Please note that the following troubleshooting tips are intended for Xen(TM) 4.11 or newer.
-If you are still using Xen(TM) 4.7 and having issues consider migrating to a newer version of Xen(TM).
+If you are still using Xen(TM) 4.7 and having issues, consider migrating to a newer version of Xen(TM).
-In order to troubleshoot host boot issues you will likely need a serial cable, or a debug USB cable.
+In order to troubleshoot host boot issues, you will likely need a serial cable or a debug USB cable.
Verbose Xen(TM) boot output can be obtained by adding options to the `xen_cmdline` option found in [.filename]#loader.conf#.
A couple of relevant debug options are:
@@ -1210,7 +1735,7 @@
...
....
-If the verbose output does not help diagnose the issue there are also QEMU and Xen(TM) toolstack logs in [.filename]#/var/log/xen#.
+If the verbose output does not help diagnose the issue, there are also QEMU and Xen(TM) toolstack logs in [.filename]#/var/log/xen#.
Note that the name of the domain is appended to the log name, so if the domain is named `freebsd` you should find a [.filename]#/var/log/xen/xl-freebsd.log# and likely a [.filename]#/var/log/xen/qemu-dm-freebsd.log#.
Both log files can contain useful information for debugging.
If none of this helps solve the issue, please send the description of the issue you are facing and as much information as possible to mailto:freebsd-xen@FreeBSD.org[freebsd-xen@FreeBSD.org] and mailto:xen-devel@lists.xenproject.org[xen-devel@lists.xenproject.org] in order to get help.