Index: en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
===================================================================
--- en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
+++ en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
@@ -1367,4 +1367,348 @@
 -->

&os; as a &xen;-Host

Xen is a GPLv2-licensed type 1 hypervisor for &intel; and &arm; architectures. &os; has included &i386; and &amd; 64-Bit DomU and Amazon EC2 unprivileged domain (virtual machine) support since &os; 8.0 and includes Dom0 control domain (host) support in &os; 11.0. Support for para-virtualized (PV) domains has been removed from &os; 11 in favor of hardware virtualized (HVM) domains.

Xen is a bare-metal hypervisor, which means that it is the first program loaded after the BIOS. A special privileged guest called the Domain-0 (Dom0 for short) is then started. The Dom0 uses its special privileges to directly access the underlying physical hardware, which makes it a high-performance solution. It is able to use the device drivers to access the disk controllers and network adapters directly. The &xen; management tools used to manage and control the &xen; hypervisor are also run from the Dom0. The Dom0 provides virtual disks and networking for the unprivileged domains, often called domU. The &xen; Dom0 can be compared to the service console of other hypervisor solutions, while the domUs perform the role of the individual guest VMs.

Features of &xen; include GPU passthrough from the host running the Dom0 into a DomU guest machine. This requires VT-d capable hardware (in the CPU, chipset, and BIOS) and may not work with all graphics cards, or may require extra patches to work. A list of adapters can be found in the Xen Wiki. Note that not all GPUs listed there are supported on &os;. The &xen; hypervisor also supports PCI passthrough to assign a PCI device (NIC, disk controller, sound card, etc.) to a domU guest VM with full and direct access to it.

Xen can migrate VMs between different &xen; servers. When the two &xen; hosts share the same underlying storage, the migration can be done without having to shut the VM down first. Instead, the migration is performed live while the domU is running, so there is no need to restart it or plan downtime. This is useful in maintenance scenarios or upgrade windows to ensure that the services provided by the domU remain available. Many more features of Xen are listed on the Xen Wiki Overview page. Note that not all features are supported on &os; yet.

Hardware Requirements for &xen; Dom0

To run the &xen; hypervisor on a host, certain hardware functionality is required. Hardware virtualized domains require Extended Page Table (EPT) and Input/Output Memory Management Unit (IOMMU) support in the host processor.

Xen Dom0 Control Domain Setup

The emulators/xen metapackage, which includes the emulators/xen-kernel and emulators/xen-tools packages, is supported on &os; 11 amd64 binary snapshots and equivalent systems built from source. This example assumes VNC output for the unprivileged domains, which is accessed from another system using a tool such as net/tightvnc.

The emulators/xen metapackage must be installed:

&prompt.root; pkg install xen

Once the package has been installed successfully, a couple of configuration files need to be edited to prepare the host for the Dom0 integration.
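Before editing those files, it can be worthwhile to verify that the processor actually provides the features described under the hardware requirements above. On an &intel; system, a rough check (assuming the boot messages are still present in /var/run/dmesg.boot; the feature names differ on &amd; hardware) is to look for EPT among the VT-x CPU features and for a DMAR table among the ACPI tables, which indicates VT-d/IOMMU support:

&prompt.root; grep VT-x /var/run/dmesg.boot
&prompt.root; acpidump -t | grep DMAR

If EPT does not appear in the output of the first command, or the second command prints nothing, the host does not meet the requirements described above.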
An entry to /etc/sysctl.conf must be made to disable the limit on how many pages of memory are allowed to be wired at the same time:

&prompt.root; sysrc -f /etc/sysctl.conf vm.max_wired=-1

Another memory-related setting involves changing /etc/login.conf and setting the memorylocked option to unlimited. Otherwise, creating domU domains may fail with Cannot allocate memory errors. After making the change to /etc/login.conf, make sure to run cap_mkdb to update the capability database. See the Resource Limits section of this Handbook for details.

&prompt.root; sed -i '' -e 's/memorylocked=64K/memorylocked=unlimited/' /etc/login.conf
&prompt.root; cap_mkdb /etc/login.conf

An entry for the Xen console needs to be added to /etc/ttys:

&prompt.root; echo 'xc0 "/usr/libexec/getty Pc" xterm on secure' >> /etc/ttys

In /boot/loader.conf, select the &xen; kernel to boot for the Dom0. &xen; also requires some resources like CPU and memory from the host machine for itself and for other domU domains. How much CPU and memory depends on the individual requirements and hardware capabilities. In this example, 8 GB of memory and 4 virtual CPUs are made available to the Dom0. The serial console is also activated, and logging options are defined.

&prompt.root; sysrc -f /boot/loader.conf hw.pci.mcfg=0
&prompt.root; sysrc -f /boot/loader.conf xen_kernel="/boot/xen"
&prompt.root; sysrc -f /boot/loader.conf xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0pvh=1 console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"

Log files that Xen creates for the Dom0 and DomU VMs are stored in /var/log/xen. This directory does not exist by default and must be created:

&prompt.root; mkdir -p /var/log/xen

Xen provides its own boot menu to activate and de-activate the hypervisor on demand, hooked into /boot/menu.rc.local:

&prompt.root; echo "try-include /boot/xen.4th" >> /boot/menu.rc.local

The last step involves activating the xendriverdomain and xencommons services during system startup:

&prompt.root; sysrc xendriverdomain_enable=yes
&prompt.root; sysrc xencommons_enable=yes

The above settings are enough to start a Dom0-enabled system. However, it lacks network functionality for the domU machines. To fix that, define a bridged interface with the main NIC of the system which the domU VMs can use to connect to the network. Replace igb0 with the host network interface name.

&prompt.root; sysrc autobridge_interfaces=bridge0
&prompt.root; sysrc autobridge_bridge0=igb0
&prompt.root; echo 'ifconfig_bridge0="addm igb0 SYNCDHCP"' >> /etc/rc.conf

Now that these changes have been made, it is time to reboot the machine to load the &xen; kernel and start the Dom0.

&prompt.root; reboot

After successfully booting the &xen; kernel and logging into the system again, the Xen management tool xl is used to print information about the domains.

&prompt.root; xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  8192     4     r-----     962.0

The output confirms that the Dom0 (called Domain-0) has the ID 0 and is in the running state. It also has the memory and virtual CPUs that were made available in /boot/loader.conf earlier.
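In addition to xl list, the xl tool provides other read-only subcommands that are handy for checking on the Dom0. For example, and purely as an informational aside, xl info prints hypervisor and host details, including fields such as xen_version and free_memory (the memory still available for new guests), and xl dmesg displays the hypervisor's own boot log:

&prompt.root; xl info | grep -E 'xen_version|free_memory'
&prompt.root; xl dmesg | tail

Neither command changes anything; both only read state from the running hypervisor.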
More information can be found in the Xen Documentation. Now it is time to create the first domU guest VM.

Xen DomU Unprivileged Domain Configuration

Unprivileged domains consist of a configuration file and logical or physical hard disks. Hard disks providing the storage to the domU can be files created by &man.truncate.1; or ZFS volumes, as described in the ZFS chapter of this Handbook. In this example, the latter is used: a 20 GB ZFS volume serves as the disk of a VM with 1 GB of RAM and one virtual CPU, which is installed from a &os; ISO image. First, the ISO installation file is retrieved using &man.fetch.1; and saved locally in a file called freebsd.iso.

&prompt.root; fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.3/FreeBSD-10.3-RELEASE-amd64-bootonly.iso -o freebsd.iso

A ZFS volume of 20 GB called xendisk0 is created to serve as the disk space for the VM.

&prompt.root; zfs create -V20G -o volmode=dev zroot/xendisk0

A new file holds the definition of the new domU, according to the virtual hardware described above. Some specific definitions like the name, keymap, and VNC connection details are also set there. The following freebsd.cfg contains a minimum domU configuration for this example:

&prompt.root; cat freebsd.cfg
builder = "hvm"
name = "freebsd"
memory = 1024
vcpus = 1
vif = [ 'bridge=bridge0' ]
disk = [
    '/dev/zvol/zroot/xendisk0,raw,hda,rw',
    '/root/freebsd.iso,raw,hdc:cdrom,r'
]
vnc = 1
vnclisten = "0.0.0.0"
serial = "pty"
usbdevice = "tablet"

The individual lines are explained in more detail:

builder: Defines what kind of virtualization to use. In this case, hvm refers to hardware-assisted virtualization, or hardware virtual machine. On CPUs that support virtualization extensions, guest operating systems can run unmodified and at close to physical hardware performance.

name: An arbitrary name for this domU to distinguish it from others running on the same Dom0. If no name is provided, the ID is used to identify the VM.

memory: The main memory (in MB) available to the VM. This amount is subtracted from the hypervisor's total available memory, not from the memory of the Dom0.

vcpus: The number of virtual CPUs available to the guest machine. For best performance, the guests should not be given more virtual CPUs in total than the number of physical CPUs available to the Dom0.

vif: The virtual network adapter to use for this virtual machine. This is the bridge defined earlier, connected to the host's main NIC.

disk: The full path to the disk serving as the VM's storage space, in this case the ZFS volume defined earlier. Options are separated by commas, and multiple disk definitions are separated the same way.

disk (cdrom line): Defines the boot medium from which the initial operating system is installed. In this example, it is the ISO image downloaded earlier. Consult the Xen documentation for other kinds of devices and options to set.

vnc, vnclisten, serial, usbdevice: Various options for accessing the domU. These are (in order): activating VNC support for the graphical console, the IP address to listen on for VNC connections, the device node to use for the serial console, and a USB tablet device for precise mouse positioning. Additionally, the option keymap defines what keymap to use (English by default).

After the file has been created with all the necessary options, the domU is created by passing the file to xl create as a parameter.

&prompt.root; xl create freebsd.cfg
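Because the configuration sets serial = "pty", the new guest's serial console can also be reached directly from the Dom0. This is optional, and it only shows output if the guest actually writes to the serial port, but it can be a convenient way to watch a guest without a VNC viewer:

&prompt.root; xl console freebsd

The console is detached again with the default escape sequence, Ctrl+].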
Each time the Dom0 is restarted, the configuration file needs to be passed to xl create again to re-create the domU. By default, only the Dom0 is started after a reboot, not the individual VMs. The VMs can continue where they left off, as they stored the operating system on the virtual disk. The virtual machine configuration can change over time (for example, when adding more memory). The virtual machine configuration files must be properly backed up and kept available in order to re-create the domU when needed.

The output of xl list confirms that the domU has been created.

&prompt.root; xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  8192     4     r-----    1653.4
freebsd                                      1  1024     1     -b----     663.9

To begin the installation of the base operating system, start the VNC viewer and direct it to the host's main network interface address (or to the address defined in the vnclisten line of freebsd.cfg). After the operating system has been installed, shut down the domU and disconnect the VNC viewer. Edit freebsd.cfg and remove (or comment out with a # character at the beginning of the line) the line with the cdrom definition. To load this new configuration, it is necessary to remove the old domU with xl destroy, passing either the name or the ID as the parameter, and then re-create it using the modified freebsd.cfg:

&prompt.root; xl destroy freebsd
&prompt.root; xl create freebsd.cfg

The machine can then be accessed again using the VNC viewer. This time, it will boot from the virtual disk where the operating system has been installed and can be used as a virtual machine.
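When the guest is no longer needed, it can also be powered off cleanly from the Dom0. Unlike xl destroy, which stops the domain immediately without informing the guest, xl shutdown sends a shutdown request that the guest operating system handles itself; a default &os; installation should treat it like a normal power button event and shut down gracefully:

&prompt.root; xl shutdown freebsd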