Index: /home/bcr/dochead/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
===================================================================
--- /home/bcr/dochead/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
+++ /home/bcr/dochead/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
@@ -30,6 +30,16 @@
bhyve section by
+
+
+
+
+ Benedict
+ Reuschling
+
+ Xen section by
+
+
@@ -1354,17 +1364,349 @@
-
+ &xen; can migrate VMs between different &xen; servers. When
+ the two &xen; hosts share the same underlying storage, the
+ migration can be done without having to shut the VM down first.
+ Instead, the migration is performed live while the DomU is
+ running, without the need to restart it or plan any downtime.
+ This is useful during maintenance or upgrade windows to
+ ensure that the services provided by the DomU remain
+ available. Many more features of &xen; are listed on the Xen
+ Wiki Overview page. Note that not all features are
+ supported on &os; yet.
+
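+ As a rough sketch of what such a live migration looks
+ like in practice, the xl command can move a
+ running DomU to another Dom0. The host name
+ backupserver and the domain name
+ freebsd used here are placeholders; both
+ hosts must already be configured for migration and share
+ the DomU's storage:
+
+ &prompt.root; xl migrate freebsd backupserver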
+
+ Hardware Requirements for &xen; Dom0
+
+ To run the &xen; hypervisor on a host, certain hardware
+ functionality is required. Hardware virtualized domains
+ require Extended Page Table (EPT)
+ and Input/Output Memory Management Unit (IOMMU)
+ support in the host processor.
--->
+
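+ Whether the host CPU advertises these features can be
+ checked before going further. As a quick sketch (output and
+ wording differ between systems), the &os; boot messages list
+ the VT-x capabilities of Intel CPUs, including
+ EPT, and the presence of an ACPI
+ DMAR table usually indicates that the
+ IOMMU (VT-d) is available:
+
+ &prompt.root; grep VT-x /var/run/dmesg.boot
+&prompt.root; acpidump -t | grep DMAR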
+
+ Xen Dom0 Control Domain Setup
+
+ The emulators/xen metapackage, which includes
+ the emulators/xen-kernel and
+ emulators/xen-tools packages, is supported
+ by &os; 11 amd64 binary snapshots and equivalent systems
+ built from source. This example assumes VNC output for
+ unprivileged domains, which will be accessed from another
+ system using a tool such as
+ net/tightvnc.
+
+ The emulators/xen metapackage must be
+ installed:
+
+ &prompt.root; pkg install xen
+
+ Once the package has been installed successfully, a few
+ configuration files need to be edited to prepare the host
+ for the Dom0 integration. An entry must be added to
+ /etc/sysctl.conf to disable
+ the limit on how many pages of memory are allowed to be wired
+ at the same time:
+
+ &prompt.root; sysrc -f /etc/sysctl.conf vm.max_wired=-1
+
+ Another memory-related setting involves changing
+ /etc/login.conf and setting the
+ memorylocked option to
+ unlimited. Otherwise, creating DomU
+ domains may fail with Cannot allocate
+ memory errors. After making the change to
+ /etc/login.conf, make sure to run
+ cap_mkdb to update the capability
+ database. See for
+ details.
+
+ &prompt.root; sed -i '' -e 's/memorylocked=64K/memorylocked=unlimited/' /etc/login.conf
+&prompt.root; cap_mkdb /etc/login.conf
+
+ An entry for the Xen console needs to be added to
+ /etc/ttys:
+
+ &prompt.root; echo 'xc0 "/usr/libexec/getty Pc" xterm on secure' >> /etc/ttys
+
+ In /boot/loader.conf, the &xen; kernel to
+ boot for the Dom0 is specified. &xen; also
+ requires some resources like CPU and memory from the host
+ machine, for itself and for the DomU domains. How much CPU
+ and memory depends on the individual requirements and
+ hardware capabilities. In this example, 8 GB of memory
+ and 4 virtual CPUs are made available for the Dom0. The
+ serial console is also activated and logging options are
+ defined.
+
+ &prompt.root; sysrc -f /boot/loader.conf hw.pci.mcfg=0
+&prompt.root; sysrc -f /boot/loader.conf xen_kernel="/boot/xen"
+&prompt.root; sysrc -f /boot/loader.conf xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0pvh=1 console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"
+
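+ Assuming the values above, the relevant entries in
+ /boot/loader.conf should afterwards look
+ roughly like this:
+
+ hw.pci.mcfg=0
+xen_kernel="/boot/xen"
+xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0pvh=1 console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"
+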
+ Log files that &xen; creates for the Dom0 and DomU VMs are
+ stored in /var/log/xen. This
+ directory does not exist by default and must be
+ created.
+
+ &prompt.root; mkdir -p /var/log/xen
+
+ &xen; provides its own boot menu to activate and
+ de-activate the hypervisor on demand. It is enabled by
+ adding an entry to /boot/menu.rc.local:
+
+ &prompt.root; echo "try-include /boot/xen.4th" >> /boot/menu.rc.local
+
+ The last step involves activating the
+ xencommons service during system
+ startup:
+
+ &prompt.root; sysrc xencommons_enable=yes
+
+ The above settings are enough to start a Dom0-enabled
+ system. However, it lacks network functionality for the
+ DomU machines. To fix that, define a bridged interface with
+ the main NIC of the system that the DomU VMs can use to
+ connect to the network. Replace
+ igb0 with the network interface
+ name.
+
+ &prompt.root; sysrc autobridge_interfaces=bridge0
+&prompt.root; sysrc autobridge_bridge0=igb0
+&prompt.root; sysrc ifconfig_bridge0=SYNCDHCP
+
+ Now that these changes have been made, it is time to
+ reboot the machine to load the &xen; kernel and start the
+ Dom0.
+
+ &prompt.root; reboot
+
+ After successfully booting the &xen; kernel and logging
+ into the system again, the Xen management tool
+ xl is used to print information about the
+ domains.
+
+ &prompt.root; xl list
+Name ID Mem VCPUs State Time(s)
+Domain-0 0 8192 4 r----- 962.0
+
+ The output confirms that the Dom0 (called
+ Domain-0) has the ID 0
+ and is in the running state. It also has the memory and
+ virtual CPUs available that were defined in
+ /boot/loader.conf earlier. More
+ information can be found in the Xen
+ Documentation. Now it is time to create the first
+ DomU guest VM.
+
+
+
+ Xen DomU Unprivileged Domain Configuration
+
+ Unprivileged Domains consist of a configuration file and
+ logical or physical hard disks. Hard disks providing the
+ storage to the DomU can be files created by &man.truncate.1;
+ or ZFS volumes, as described in . In this example, the latter
+ is used to create a 20 GB volume which, together with a
+ &os; ISO image, backs a VM with 1 GB of RAM and one
+ virtual CPU.
+ First, the ISO installation file is retrieved using
+ &man.fetch.1; and saved locally in a file called
+ freebsd.iso.
+
+ &prompt.root; fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.3/FreeBSD-10.3-RELEASE-amd64-bootonly.iso -o freebsd.iso
+
+ A ZFS volume of 20 GB called
+ xendisk0 is created to serve as the disk
+ space for the VM.
+
+ &prompt.root; zfs create -V20G -o volmode=dev zroot/xendisk0
+
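+ Alternatively, a plain file created with &man.truncate.1;
+ could serve as the backing store instead of the ZFS volume.
+ A minimal sketch (the file name is arbitrary; the
+ disk line in the configuration below would
+ then need to point at this file):
+
+ &prompt.root; truncate -s 20G /root/xendisk0.img
+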
+ A new file will hold the definition of the new DomU,
+ according to the virtual hardware defined above. Some
+ specific definitions like name, keymap, and VNC connection
+ details are also defined. The following
+ freebsd.cfg contains a minimal DomU
+ configuration for our example:
+
+ &prompt.root; cat freebsd.cfg
+builder = "hvm"
+name = "freebsd"
+memory = 1024
+vcpus = 1
+vif = [ 'mac=00:16:3E:74:34:32,bridge=bridge0' ]
+disk = [
+'/dev/zvol/zroot/xendisk0,raw,hda,rw',
+'/root/freebsd.iso,raw,hdc:cdrom,r'
+ ]
+vnc = 1
+vnclisten = "0.0.0.0"
+serial="pty"
+usbdevice="tablet"
+
+ The following lines are explained in more detail:
+
+
+
+ This defines what kind of virtualization to use. In
+ this case, hvm refers to
+ hardware-assisted virtualization or hardware virtual
+ machine. For CPUs that have virtualization extensions,
+ this means that guest operating systems can run unmodified
+ and very close to physical hardware performance.
+
+
+
+ An arbitrary name can be provided for this DomU to
+ distinguish it from others running on the same Dom0. The
+ name is mandatory.
+
+
+
+ The available main memory (in MB) for the VM to use.
+ This amount is subtracted from the hypervisor's total
+ available memory, not the memory of the Dom0.
+
+
+
+ The number of virtual CPUs that the guest machine can
+ use. For performance reasons, it is not recommended to
+ create guests with a number of virtual CPUs greater than
+ the total number of physical CPUs available on the
+ system.
+
+
+
+ The virtual network adapter to use for this virtual
+ machine. This is the bridge defined earlier, connected to
+ the host's main NIC. The mac parameter
+ contains the MAC address used by the virtual network
+ interface. This parameter is optional; if no MAC is
+ provided, &xen; will generate a random one.
+
+
+
+ The full path to the disk serving as the VM's storage
+ space. In this case, it is the path to the ZFS volume
+ defined earlier. Options are separated by commas and
+ multiple disk definitions are also separated that
+ way.
+
+
+
+ Defines the boot medium from which the initial
+ operating system is installed. In this example, it is the
+ ISO image downloaded earlier. Consult the Xen
+ documentation for other kinds of devices and options to
+ set.
+
+
+
+ Defines various options for VNC connectivity to the
+ serial console of the DomU. These are (in order):
+ activating VNC support, defining which IP address to
+ listen on, the device node for the serial console, and the
+ input method for precise positioning of the mouse and other
+ input devices. Additionally, the option
+ keymap defines which keymap to use
+ (English by default).
+
+
+
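+ Before the domain is actually created, the configuration
+ file can optionally be checked with a dry run. This is only
+ a sketch and assumes that the installed xl
+ supports the -n (dry run) option, which
+ prints the resulting configuration without creating the
+ domain:
+
+ &prompt.root; xl create -n freebsd.cfg
+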
+ After the file has been created with all the necessary
+ options, the DomU can be created by passing it to xl
+ create as a parameter.
+
+ &prompt.root; xl create freebsd.cfg
+
+
+ Each time the Dom0 is restarted, the configuration file
+ needs to be passed to xl create again to
+ re-create the DomU. By default, only the Dom0 is started
+ after a reboot, not the individual VMs. The VMs can
+ continue where they left off, as the operating system is
+ stored on the virtual disk. The virtual machine
+ configuration can change over time (for example, when adding
+ more memory). The virtual machine configuration files must
+ be properly backed up and kept available to be able to
+ re-create the DomU when needed.
+
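+ One possible way to handle this, sketched here with a
+ hypothetical directory name, is to keep all DomU
+ configuration files in one place and re-create the domains
+ from there after each reboot of the Dom0 (the loop assumes
+ a Bourne-compatible shell):
+
+ &prompt.root; mkdir -p /usr/local/etc/xen
+&prompt.root; cp freebsd.cfg /usr/local/etc/xen/
+&prompt.root; for cfg in /usr/local/etc/xen/*.cfg; do xl create "$cfg"; done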
+
+ The output of xl list confirms that the
+ DomU has been created.
+
+ &prompt.root; xl list
+Name ID Mem VCPUs State Time(s)
+Domain-0 0 8192 4 r----- 1653.4
+freebsd 1 1024 1 -b---- 663.9
+
+ To begin the installation of the base operating system,
+ start the VNC viewer and direct it to the host's main network
+ interface address (or the one defined in the
+ vnclisten line in
+ freebsd.cfg). After the operating system
+ has been installed, shut down the DomU and disconnect the VNC
+ viewer. Edit freebsd.cfg and remove (or
+ comment using the # character at the
+ beginning) the line with the cdrom
+ definition. To load this new configuration, it is necessary
+ to remove the old DomU with xl destroy,
+ passing either the name or the id as the parameter.
+ Afterwards, recreate it using the modified
+ freebsd.cfg.
+
+ &prompt.root; xl destroy freebsd
+&prompt.root; xl create freebsd.cfg
+
+ The machine can then be accessed again using the VNC
+ viewer. This time, it will boot from the virtual disk where
+ the operating system has been installed and can be used as a
+ virtual machine.
+
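+ As a concrete example of connecting from the remote
+ system, and assuming the Dom0 is reachable at the
+ placeholder address 192.168.1.100 and
+ this is the first VNC-enabled DomU (display
+ :0, TCP port 5900), the viewer from
+ net/tightvnc could be started like
+ this:
+
+ &prompt.user; vncviewer 192.168.1.100:0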