diff --git a/en_US.ISO8859-1/articles/remote-install/article.sgml b/en_US.ISO8859-1/articles/remote-install/article.sgml index 6d0a8406b8..277290942e 100644 --- a/en_US.ISO8859-1/articles/remote-install/article.sgml +++ b/en_US.ISO8859-1/articles/remote-install/article.sgml @@ -1,561 +1,561 @@ %articles.ent; ]>
Remote Installation of the &os; Operating System without a Remote Console

Daniel Gerzo
danger@FreeBSD.org
$FreeBSD$

&tm-attrib.freebsd; &tm-attrib.general;

2008 The &os; Documentation Project

This article documents the remote installation of the &os; operating system when the console of the remote system is unavailable. The main idea behind this article is the result of a collaboration with &a.mm;, with valuable input provided by &a.pjd;.
Background

There are many server hosting providers in the world, but very few of them officially support &os;. They usually provide support for a &linux; distribution to be installed on the servers they offer. In some cases, these companies will install your preferred &linux; distribution if you request it. Using this option, we will attempt to install &os;. In other cases, they may offer a rescue system which would be used in an emergency. It is possible to use this for our purposes as well.

This article covers the basic installation and configuration steps required to bootstrap a remote installation of &os; with RAID-1 and ZFS capabilities.

Introduction

This section summarizes the purpose of this article and better explains what is covered herein. The instructions included in this article will benefit those using services provided by colocation facilities not supporting &os;.

As we have mentioned in the Background section, many of the reputable server hosting companies provide some kind of rescue system, which is booted from their LAN and accessible over SSH. They usually provide this support in order to help their customers fix broken operating systems. As this article will explain, it is possible to install &os; with the help of these rescue systems.

The next section of this article will describe how to configure and build a minimal &os; image on the local machine. That version will eventually be running on the remote machine from a ramdisk, which will allow us to install a complete &os; operating system from an FTP mirror using the sysinstall utility. The rest of this article will describe the installation procedure itself, as well as the configuration of the ZFS file system.

Requirements

To continue successfully, you must:

  have a network-accessible operating system with SSH access
  understand the &os; installation process
  be familiar with the &man.sysinstall.8; utility
  have the &os; installation ISO image or CD handy

Preparation - <application>mfsBSD</application>

Before &os; may be installed on the target system, it is necessary to build a minimal &os; operating system image which will boot from the hard drive. This way the new system can be accessed from the network, and the rest of the installation can be done without remote access to the system console.

The mfsBSD tool-set can be used to build a tiny &os; image. As the name of mfsBSD suggests (mfs means memory file system), the resulting image runs entirely from a ramdisk. Thanks to this feature, the manipulation of hard drives is not limited in any way, so it is possible to install a complete &os; operating system. The home page of mfsBSD includes pointers to the latest release of the toolset.

Please note that the internals of mfsBSD and how it all fits together are beyond the scope of this article. The interested reader should consult the original mfsBSD documentation for more details.

Download and extract the latest mfsBSD release and change your working directory to the directory where the mfsBSD scripts will reside:

&prompt.root; fetch http://people.freebsd.org/~mm/mfsbsd/mfsbsd-1.0-beta1.tar.gz
&prompt.root; tar xvzf mfsbsd-1.0-beta1.tar.gz
&prompt.root; cd mfsbsd-1.0-beta1/

Configuration of <application>mfsBSD</application>

Before booting mfsBSD, a few important configuration options have to be set. The most important one to get right is, naturally, the network setup.
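Since there will be no console to fall back on once the machine reboots into the ramdisk, it is worth noting the remote machine's current network parameters before building the image, as they will be reused in the mfsBSD configuration below. Assuming an ordinary &linux; rescue environment with the iproute2 tools available (an assumption; the exact tools vary by provider), the interface name, MAC address, assigned address and default gateway can be read with:

&prompt.root; ip addr show   # interface name, MAC address and IP address
&prompt.root; ip route show  # default gateway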
The most suitable method to configure networking options depends on whether we know beforehand the type of the network interface we will use, and which network interface driver has to be loaded for our hardware. We will see how mfsBSD can be configured in either case.

Another important thing to set is the root password. This can be done by editing the conf/rootpw.conf file. Please keep in mind that this file will contain your password in plain text, so we do not recommend using a real password here. Nevertheless, this is just a temporary one-time password which can later be changed on the live system.

The <filename>conf/interfaces.conf</filename> method

When the installed network interface card is unknown, we can use the auto-detection features of mfsBSD. The startup scripts of mfsBSD can detect the correct driver to use, based on the MAC address of the interface, if we set the following options in conf/interfaces.conf:

initconf_interfaces="ext1"
initconf_mac_ext1="00:00:00:00:00:00"
initconf_ip_ext1="192.168.0.2"
initconf_netmask_ext1="255.255.255.0"

Do not forget to add the defaultrouter information to the conf/rc.conf file:

defaultrouter="192.168.0.1"

The <filename>conf/rc.conf</filename> method

When the network interface driver is known, it is more convenient to use the conf/rc.conf file for networking options. The syntax of this file is the same as the one used in the standard &man.rc.conf.5; file of &os;. For example, if you know that a &man.re.4; network interface is going to be available, you can set the following options in conf/rc.conf:

defaultrouter="192.168.0.1"
ifconfig_re0="inet 192.168.0.2 netmask 255.255.255.0"

Building an <application>mfsBSD</application> image

The process of building an mfsBSD image is pretty straightforward. The first step is to mount the &os; installation CD, or the installation ISO image, on /cdrom. For the sake of example, in this article we will assume that you have downloaded the &os; 7.0-RELEASE ISO. Mounting this ISO image on the /cdrom directory is easy with the &man.mdconfig.8; utility:

&prompt.root; mdconfig -a -t vnode -u 10 -f 7.0-RELEASE-amd64-disc1.iso
&prompt.root; mount_cd9660 /dev/md10 /cdrom

Next, build the bootable mfsBSD image:

&prompt.root; make BASE=/cdrom/7.0-RELEASE

The above make command has to be run from the top level of the mfsBSD directory tree, i.e. ~/mfsbsd-1.0-beta1/.

Booting <application>mfsBSD</application>

Now that the mfsBSD image is ready, it must be uploaded to the remote system running a live rescue system or a pre-installed &linux; distribution. The most suitable tool for this task is scp:

&prompt.root; scp disk.img root@192.168.0.2:.

To boot the mfsBSD image properly, it must be placed on the first (bootable) device of the given machine. This may be accomplished as in the following example, provided that sda is the first bootable disk device:

&prompt.root; dd if=/root/disk.img of=/dev/sda bs=1M

If all went well, the image should now be in the MBR of the first device and the machine can be rebooted. Watch for the machine to boot up properly with the &man.ping.8; tool. Once it has come back on-line, it should be possible to access it over &man.ssh.1; as user root with the configured password.

Installation of the &os; Operating System

mfsBSD has been successfully booted and it should be possible to log in through &man.ssh.1;. This section will describe how to create and label slices, set up gmirror for RAID-1, and how to use sysinstall to install a minimal distribution of the &os; operating system.
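Before touching the disks, it is worth confirming which disk devices the booted mfsBSD system actually sees. The ad0 and ad1 device names used in the examples that follow are assumptions and should be checked against the real hardware, for instance with:

&prompt.root; dmesg | grep '^ad'
&prompt.root; ls /dev/ad*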
Preparation of Hard Drives

The first task is to allocate disk space for &os;, i.e., to create slices and partitions. Since the currently running system is loaded entirely into system memory, the hard drives can be manipulated without any restrictions. To complete this task, it is possible to use either sysinstall or &man.fdisk.8; in conjunction with &man.bsdlabel.8;.

At the start, mark all system disks as empty. Repeat the following command for each hard drive:

&prompt.root; dd if=/dev/zero of=/dev/ad0 count=2

Next, create slices and label them with your preferred tool. While it is considered easier to use sysinstall, a more powerful and probably less error-prone method is to use the standard text-based &unix; tools, such as &man.fdisk.8; and &man.bsdlabel.8;, which will also be covered in this section. The former option is well documented in the Installing &os; chapter of the &os; Handbook.

As mentioned in the introduction, this article will present how to set up a system with RAID-1 and ZFS capabilities. Our setup will consist of small &man.gmirror.8;-mirrored / (root), /usr and /var file systems, and the rest of the disk space will be allocated for a &man.zpool.8;-mirrored ZFS file system. Please note that the ZFS file system will be configured after the &os; operating system is successfully installed and booted.

The following example describes how to create slices and labels, initialize &man.gmirror.8; on each partition, and create a UFS2 file system on each mirrored partition:

&prompt.root; fdisk -BI /dev/ad0
&prompt.root; fdisk -BI /dev/ad1
&prompt.root; bsdlabel -wB /dev/ad0s1
&prompt.root; bsdlabel -wB /dev/ad1s1
&prompt.root; bsdlabel -e /dev/ad0s1
-&prompt.root; bsdlabel /dev/ad0s1 > /tmp/bsdlabel.txt && bsdlabel -R /tmp/bsdlabel.txt
+&prompt.root; bsdlabel /dev/ad0s1 > /tmp/bsdlabel.txt && bsdlabel -R /dev/ad1s1 /tmp/bsdlabel.txt
&prompt.root; gmirror label root /dev/ad[01]s1a
&prompt.root; gmirror label var /dev/ad[01]s1d
&prompt.root; gmirror label usr /dev/ad[01]s1e
&prompt.root; gmirror label -F swap /dev/ad[01]s1b
&prompt.root; newfs /dev/mirror/root
&prompt.root; newfs /dev/mirror/var
&prompt.root; newfs /dev/mirror/usr

Create a slice covering the entire disk and initialize the boot code contained in sector 0 of the given disk. Repeat this command for all hard drives in the system.

Write a standard label for each disk, including the bootstrap code.

Now, manually edit the label of the given disk. Refer to the &man.bsdlabel.8; manual page in order to find out how to create partitions. Create partitions a for the / (root) file system, b for swap, d for /var, e for /usr, and finally f, which will later be used for ZFS.

Import the recently created label for the second hard drive, so both hard drives are labeled in the same way.

Initialize &man.gmirror.8; on each partition. Note the -F option used for the swap partition. It instructs &man.gmirror.8; to assume that the device is in a consistent state after a power or system failure.

Create a UFS2 file system on each mirrored partition.

System Installation

This is the most important part. This section describes how to actually install the minimal distribution of &os; on the hard drives that we have prepared in the previous section.
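Before mounting anything, it is a good idea to confirm that the mirrors created in the previous section are present and complete. A quick check, with its output omitted here, is:

&prompt.root; gmirror status
&prompt.root; ls /dev/mirror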
To accomplish this goal, all file systems need to be mounted so sysinstall may write the contents of &os; to the hard drives:

&prompt.root; mount /dev/mirror/root /mnt
&prompt.root; mkdir /mnt/var /mnt/usr
&prompt.root; mount /dev/mirror/var /mnt/var
&prompt.root; mount /dev/mirror/usr /mnt/usr

When you are done, start &man.sysinstall.8;. Select the Custom installation from the main menu. Select Options and press Enter. Using the arrow keys, move the cursor to the Install Root item, press Space and change it to /mnt. Press Enter to submit your changes and exit the Options menu by pressing q. Note that this step is very important; if it is skipped, sysinstall will be unable to install &os;.

Go to the Distributions menu, move the cursor with the arrow keys to the Minimal option, and check it by pressing Space. This article uses the Minimal distribution in order to save network traffic, because the system itself will be installed over FTP. Exit this menu by choosing the Exit option.

The Partition and Label menus will be skipped, as they are not needed now.

In the Media menu, select FTP. Select the nearest mirror and let sysinstall assume that the network is already configured. You will be returned to the Custom menu.

Finally, perform the system installation by selecting the last available option, Commit. Exit sysinstall when it finishes the installation.

Post Installation Steps

The &os; operating system should be installed now; however, the process is not finished yet. It is necessary to perform some post-installation steps in order to allow &os; to boot in the future and to be able to log in to the system.

You must now &man.chroot.8; into the freshly installed system in order to finish the installation. Use the following command:

&prompt.root; chroot /mnt

To complete our goal, perform these steps:

Copy the GENERIC kernel to the /boot/kernel directory:

&prompt.root; cp -Rp /boot/GENERIC/* /boot/kernel

Create the /etc/rc.conf, /etc/resolv.conf and /etc/fstab files. Do not forget to properly set the network information and to enable sshd in the /etc/rc.conf file. The contents of the /etc/fstab file will be similar to the following:

# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/mirror/swap        none            swap    sw              0       0
/dev/mirror/root        /               ufs     rw              1       1
/dev/mirror/usr         /usr            ufs     rw              2       2
/dev/mirror/var         /var            ufs     rw              2       2
/dev/cd0                /cdrom          cd9660  ro,noauto       0       0

Create the /boot/loader.conf file with the following contents:

geom_mirror_load="YES"
zfs_load="YES"

Execute the following command, which will make ZFS available on the next boot:

&prompt.root; echo 'zfs_enable="YES"' >> /etc/rc.conf

Add additional users to the system using the &man.adduser.8; tool. Do not forget to add a user to the wheel group so you may obtain root access after the reboot.

Double-check all your settings. The system should now be ready for the next boot. Use the &man.reboot.8; command to reboot your system.

ZFS

If your system survived the reboot, it should now be possible to log in. Welcome to the fresh &os; installation, performed remotely without the use of a remote console!

The only remaining step is to configure &man.zpool.8; and create some &man.zfs.8; file systems. Creating and administering ZFS is very straightforward.
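Since zfs_load="YES" was added to /boot/loader.conf earlier, the zfs kernel module should already be loaded after the reboot. If in doubt, the first command below verifies this; the second is only needed if the first prints nothing:

&prompt.root; kldstat | grep zfs
&prompt.root; kldload zfs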
First, create a mirrored pool:

&prompt.root; zpool create tank mirror /dev/ad[01]s1f

Next, create some file systems:

&prompt.root; zfs create tank/ports
&prompt.root; zfs create tank/src
&prompt.root; zfs set compression=gzip tank/ports
&prompt.root; zfs set compression=on tank/src
&prompt.root; zfs set mountpoint=/usr/ports tank/ports
&prompt.root; zfs set mountpoint=/usr/src tank/src

That's all. If you are interested in more details about ZFS on &os;, please refer to the ZFS section of the &os; Wiki.
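As a final, purely illustrative check, the health of the newly created pool and the mountpoints of the new file systems can be reviewed with:

&prompt.root; zpool status tank
&prompt.root; zfs list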