Index: head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 53261) +++ head/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml (revision 53262) @@ -1,4719 +1,4719 @@ Network Servers Synopsis This chapter covers some of the more frequently used network services on &unix; systems. This includes installing, configuring, testing, and maintaining many different types of network services. Example configuration files are included throughout this chapter for reference. By the end of this chapter, readers will know: How to manage the inetd daemon. How to set up the Network File System (NFS). How to set up the Network Information Server (NIS) for centralizing and sharing user accounts. How to set &os; up to act as an LDAP server or client How to set up automatic network settings using DHCP. How to set up a Domain Name Server (DNS). How to set up the Apache HTTP Server. How to set up a File Transfer Protocol (FTP) server. How to set up a file and print server for &windows; clients using Samba. How to synchronize the time and date, and set up a time server using the Network Time Protocol (NTP). How to set up iSCSI. This chapter assumes a basic knowledge of: /etc/rc scripts. Network terminology. Installation of additional third-party software (). The <application>inetd</application> Super-Server The &man.inetd.8; daemon is sometimes referred to as a Super-Server because it manages connections for many services. Instead of starting multiple applications, only the inetd service needs to be started. When a connection is received for a service that is managed by inetd, it determines which program the connection is destined for, spawns a process for that program, and delegates the program a socket. Using inetd for services that are not heavily used can reduce system load, when compared to running each daemon individually in stand-alone mode. Primarily, inetd is used to spawn other daemons, but several trivial protocols are handled internally, such as chargen, auth, time, echo, discard, and daytime. This section covers the basics of configuring inetd. Configuration File Configuration of inetd is done by editing /etc/inetd.conf. Each line of this configuration file represents an application which can be started by inetd. By default, every line starts with a comment (#), meaning that inetd is not listening for any applications. To configure inetd to listen for an application's connections, remove the # at the beginning of the line for that application. After saving your edits, configure inetd to start at system boot by editing /etc/rc.conf: inetd_enable="YES" To start inetd now, so that it listens for the service you configured, type: &prompt.root; service inetd start Once inetd is started, it needs to be notified whenever a modification is made to /etc/inetd.conf: Reloading the <application>inetd</application> Configuration File &prompt.root; service inetd reload Typically, the default entry for an application does not need to be edited beyond removing the #. In some situations, it may be appropriate to edit the default entry. 
As an example, this is the default entry for &man.ftpd.8; over IPv4: ftp stream tcp nowait root /usr/libexec/ftpd ftpd -l The seven columns in an entry are as follows: service-name socket-type protocol {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] user[:group][/login-class] server-program server-program-arguments where: service-name The service name of the daemon to start. It must correspond to a service listed in /etc/services. This determines which port inetd listens on for incoming connections to that service. When using a custom service, it must first be added to /etc/services. socket-type Either stream, dgram, raw, or seqpacket. Use stream for TCP connections and dgram for UDP services. protocol Use one of the following protocol names: Protocol Name Explanation tcp or tcp4 TCP IPv4 udp or udp4 UDP IPv4 tcp6 TCP IPv6 udp6 UDP IPv6 tcp46 Both TCP IPv4 and IPv6 udp46 Both UDP IPv4 and IPv6 {wait|nowait}[/max-child[/max-connections-per-ip-per-minute[/max-child-per-ip]]] In this field, wait or nowait must be specified. max-child, max-connections-per-ip-per-minute, and max-child-per-ip are optional. wait|nowait indicates whether or not the service is able to handle its own socket. dgram socket types must use wait while stream daemons, which are usually multi-threaded, should use nowait. wait usually hands off multiple sockets to a single daemon, while nowait spawns a child daemon for each new socket. The maximum number of child daemons inetd may spawn is set by max-child. For example, to limit ten instances of the daemon, place a /10 after nowait. Specifying /0 allows an unlimited number of children. max-connections-per-ip-per-minute limits the number of connections from any particular IP address per minute. Once the limit is reached, further connections from this IP address will be dropped until the end of the minute. For example, a value of /10 would limit any particular IP address to ten connection attempts per minute. max-child-per-ip limits the number of child processes that can be started on behalf of any single IP address at any moment. These options can limit excessive resource consumption and help to prevent Denial of Service attacks. An example can be seen in the default settings for &man.fingerd.8;: finger stream tcp nowait/3/10 nobody /usr/libexec/fingerd fingerd -k -s user The username the daemon will run as. Daemons typically run as root, daemon, or nobody. server-program The full path to the daemon. If the daemon is a service provided by inetd internally, use internal. server-program-arguments Used to specify any command arguments to be passed to the daemon on invocation. If the daemon is an internal service, use internal. Command-Line Options Like most server daemons, inetd has a number of options that can be used to modify its behavior. By default, inetd is started with -wW -C 60. These options enable TCP wrappers for all services, including internal services, and prevent any IP address from requesting any service more than 60 times per minute. To change the default options which are passed to inetd, add an entry for inetd_flags in /etc/rc.conf. If inetd is already running, restart it with service inetd restart. The available rate limiting options are: -c maximum Specify the default maximum number of simultaneous invocations of each service, where the default is unlimited. May be overridden on a per-service basis by using max-child in /etc/inetd.conf. -C rate Specify the default maximum number of times a service can be invoked from a single IP address per minute. May be overridden on a per-service basis by using max-connections-per-ip-per-minute in /etc/inetd.conf. -R rate Specify the maximum number of times a service can be invoked in one minute, where the default is 256. 
A rate of 0 allows an unlimited number. -s maximum Specify the maximum number of times a service can be invoked from a single IP address at any one time, where the default is unlimited. May be overridden on a per-service basis by using in /etc/inetd.conf. Additional options are available. Refer to &man.inetd.8; for the full list of options. Security Considerations Many of the daemons which can be managed by inetd are not security-conscious. Some daemons, such as fingerd, can provide information that may be useful to an attacker. Only enable the services which are needed and monitor the system for excessive connection attempts. max-connections-per-ip-per-minute, max-child and max-child-per-ip can be used to limit such attacks. By default, TCP wrappers is enabled. Consult &man.hosts.access.5; for more information on placing TCP restrictions on various inetd invoked daemons. Network File System (NFS) Tom Rhodes Reorganized and enhanced by Bill Swingle Written by NFS &os; supports the Network File System (NFS), which allows a server to share directories and files with clients over a network. With NFS, users and programs can access files on remote systems as if they were stored locally. NFS has many practical uses. Some of the more common uses include: Data that would otherwise be duplicated on each client can be kept in a single location and accessed by clients on the network. Several clients may need access to the /usr/ports/distfiles directory. Sharing that directory allows for quick access to the source files without having to download them to each client. On large networks, it is often more convenient to configure a central NFS server on which all user home directories are stored. Users can log into a client anywhere on the network and have access to their home directories. Administration of NFS exports is simplified. For example, there is only one file system where security or backup policies must be set. Removable media storage devices can be used by other machines on the network. This reduces the number of devices throughout the network and provides a centralized location to manage their security. It is often more convenient to install software on multiple machines from a centralized installation media. NFS consists of a server and one or more clients. The client remotely accesses the data that is stored on the server machine. In order for this to function properly, a few processes have to be configured and running. These daemons must be running on the server: NFS server file server UNIX clients rpcbind mountd nfsd Daemon Description nfsd The NFS daemon which services requests from NFS clients. mountd The NFS mount daemon which carries out requests received from nfsd. rpcbind This daemon allows NFS clients to discover which port the NFS server is using. Running &man.nfsiod.8; on the client can improve performance, but is not required. Configuring the Server NFS configuration The file systems which the NFS server will share are specified in /etc/exports. Each line in this file specifies a file system to be exported, which clients have access to that file system, and any access options. When adding entries to this file, each exported file system, its properties, and allowed hosts must occur on a single line. If no clients are listed in the entry, then any client on the network can mount that file system. NFS export examples The following /etc/exports entries demonstrate how to export file systems. 
The examples can be modified to match the file systems and client names on the reader's network. There are many options that can be used in this file, but only a few will be mentioned here. See &man.exports.5; for the full list of options. This example shows how to export /cdrom to three hosts named alpha, bravo, and charlie: /cdrom -ro alpha bravo charlie The -ro flag makes the file system read-only, preventing clients from making any changes to the exported file system. This example assumes that the host names are either in DNS or in /etc/hosts. Refer to &man.hosts.5; if the network does not have a DNS server. The next example exports /home to three clients by IP address. This can be useful for networks without DNS or /etc/hosts entries. The -alldirs flag allows subdirectories to be mount points. In other words, it will not automatically mount the subdirectories, but will permit the client to mount the directories that are required as needed. /usr/home -alldirs 10.0.0.2 10.0.0.3 10.0.0.4 This next example exports /a so that two clients from different domains may access that file system. The allows root on the remote system to write data on the exported file system as root. If -maproot=root is not specified, the client's root user will be mapped to the server's nobody account and will be subject to the access limitations defined for nobody. /a -maproot=root host.example.com box.example.org A client can only be specified once per file system. For example, if /usr is a single file system, these entries would be invalid as both entries specify the same host: # Invalid when /usr is one file system /usr/src client /usr/ports client The correct format for this situation is to use one entry: /usr/src /usr/ports client The following is an example of a valid export list, where /usr and /exports are local file systems: # Export src and ports to client01 and client02, but only # client01 has root privileges on it /usr/src /usr/ports -maproot=root client01 /usr/src /usr/ports client02 # The client machines have root and can mount anywhere # on /exports. Anyone in the world can mount /exports/obj read-only /exports -alldirs -maproot=root client01 client02 /exports/obj -ro To enable the processes required by the NFS server at boot time, add these options to /etc/rc.conf: rpcbind_enable="YES" nfs_server_enable="YES" mountd_flags="-r" The server can be started now by running this command: &prompt.root; service nfsd start Whenever the NFS server is started, mountd also starts automatically. However, mountd only reads /etc/exports when it is started. To make subsequent /etc/exports edits take effect immediately, force mountd to reread it: &prompt.root; service mountd reload Configuring the Client To enable NFS clients, set this option in each client's /etc/rc.conf: nfs_client_enable="YES" Then, run this command on each NFS client: &prompt.root; service nfsclient start The client now has everything it needs to mount a remote file system. In these examples, the server's name is server and the client's name is client. To mount /home on server to the /mnt mount point on client: NFS mounting &prompt.root; mount server:/home /mnt The files and directories in /home will now be available on client, in the /mnt directory. To mount a remote file system each time the client boots, add it to /etc/fstab: server:/home /mnt nfs rw 0 0 Refer to &man.fstab.5; for a description of all available options. Locking Some applications require file locking to operate correctly. 
To enable locking, add these lines to /etc/rc.conf on both the client and server: rpc_lockd_enable="YES" rpc_statd_enable="YES" Then start the applications: &prompt.root; service lockd start &prompt.root; service statd start If locking is not required on the server, the NFS client can be configured to lock locally by including when running mount. Refer to &man.mount.nfs.8; for further details. Automating Mounts with &man.amd.8; Wylie Stilwell Contributed by Chern Lee Rewritten by amd automatic mounter daemon The automatic mounter daemon, amd, automatically mounts a remote file system whenever a file or directory within that file system is accessed. File systems that are inactive for a period of time will be automatically unmounted by amd. This daemon provides an alternative to modifying /etc/fstab to list every client. It operates by attaching itself as an NFS server to the /host and /net directories. When a file is accessed within one of these directories, amd looks up the corresponding remote mount and automatically mounts it. /net is used to mount an exported file system from an IP address while /host is used to mount an export from a remote hostname. For instance, an attempt to access a file within /host/foobar/usr would tell amd to mount the /usr export on the host foobar. Mounting an Export with <application>amd</application> In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /host/foobar/usr The output from showmount shows /usr as an export. When changing directories to /host/foobar/usr, amd intercepts the request and attempts to resolve the hostname foobar. If successful, amd automatically mounts the desired export. To enable amd at boot time, add this line to /etc/rc.conf: amd_enable="YES" To start amd now: &prompt.root; service amd start Custom flags can be passed to amd from the amd_flags environment variable. By default, amd_flags is set to: amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map /net /etc/amd.map" The default options with which exports are mounted are defined in /etc/amd.map. Some of the more advanced features of amd are defined in /etc/amd.conf. Consult &man.amd.8; and &man.amd.conf.5; for more information. Automating Mounts with &man.autofs.5; The &man.autofs.5; automount facility is supported starting with &os; 10.1-RELEASE. To use the automounter functionality in older versions of &os;, use &man.amd.8; instead. This chapter only describes the &man.autofs.5; automounter. autofs automounter subsystem The &man.autofs.5; facility is a common name for several components that, together, allow for automatic mounting of remote and local filesystems whenever a file or directory within that file system is accessed. It consists of the kernel component, &man.autofs.5;, and several userspace applications: &man.automount.8;, &man.automountd.8; and &man.autounmountd.8;. It serves as an alternative for &man.amd.8; from previous &os; releases. Amd is still provided for backward compatibility purposes, as the two use different map format; the one used by autofs is the same as with other SVR4 automounters, such as the ones in Solaris, MacOS X, and Linux. The &man.autofs.5; virtual filesystem is mounted on specified mountpoints by &man.automount.8;, usually invoked during boot. 
Whenever a process attempts to access a file within the &man.autofs.5; mountpoint, the kernel will notify the &man.automountd.8; daemon and pause the triggering process. The &man.automountd.8; daemon will handle kernel requests by finding the proper map and mounting the filesystem according to it, then signal the kernel to release the blocked process. The &man.autounmountd.8; daemon automatically unmounts automounted filesystems after some time, unless they are still being used. The primary autofs configuration file is /etc/auto_master. It assigns individual maps to top-level mounts. For an explanation of auto_master and the map syntax, refer to &man.auto.master.5;. There is a special automounter map mounted on /net. When a file is accessed within this directory, &man.autofs.5; looks up the corresponding remote mount and automatically mounts it. For instance, an attempt to access a file within /net/foobar/usr would tell &man.automountd.8; to mount the /usr export from the host foobar. Mounting an Export with &man.autofs.5; In this example, showmount -e shows the exported file systems that can be mounted from the NFS server, foobar: &prompt.user; showmount -e foobar Exports list on foobar: /usr 10.10.10.0 /a 10.10.10.0 &prompt.user; cd /net/foobar/usr The output from showmount shows /usr as an export. When changing directories to /net/foobar/usr, &man.automountd.8; intercepts the request and attempts to resolve the hostname foobar. If successful, &man.automountd.8; automatically mounts the desired export. To enable &man.autofs.5; at boot time, add this line to /etc/rc.conf: autofs_enable="YES" Then &man.autofs.5; can be started by running: &prompt.root; service automount start &prompt.root; service automountd start &prompt.root; service autounmountd start The &man.autofs.5; map format is the same as in other operating systems. Information about this format from other sources can be useful, like the Mac OS X document. Consult the &man.automount.8;, &man.automountd.8;, &man.autounmountd.8;, and &man.auto.master.5; manual pages for more information. Network Information System (<acronym>NIS</acronym>) NIS Solaris HP-UX AIX Linux NetBSD OpenBSD yellow pages NIS Network Information System (NIS) is designed to centralize administration of &unix;-like systems such as &solaris;, HP-UX, &aix;, Linux, NetBSD, OpenBSD, and &os;. NIS was originally known as Yellow Pages but the name was changed due to trademark issues. This is the reason why NIS commands begin with yp. NIS domains NIS is a Remote Procedure Call (RPC)-based client/server system that allows a group of machines within an NIS domain to share a common set of configuration files. This permits a system administrator to set up NIS client systems with only minimal configuration data and to add, remove, or modify configuration data from a single location. &os; uses version 2 of the NIS protocol. <acronym>NIS</acronym> Terms and Processes Table 28.1 summarizes the terms and important processes used by NIS: rpcbind portmap <acronym>NIS</acronym> Terminology Term Description NIS domain name NIS servers and clients share an NIS domain name. Typically, this name does not have anything to do with DNS. &man.rpcbind.8; This service enables RPC and must be running in order to run an NIS server or act as an NIS client. &man.ypbind.8; This service binds an NIS client to its NIS server. It will take the NIS domain name and use RPC to connect to the server. It is the core of client/server communication in an NIS environment. 
If this service is not running on a client machine, it will not be able to access the NIS server. &man.ypserv.8; This is the process for the NIS server. If this service stops running, the server will no longer be able to respond to NIS requests so hopefully, there is a slave server to take over. Some non-&os; clients will not try to reconnect using a slave server and the ypbind process may need to be restarted on these clients. &man.rpc.yppasswdd.8; This process only runs on NIS master servers. This daemon allows NIS clients to change their NIS passwords. If this daemon is not running, users will have to login to the NIS master server and change their passwords there.
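Once a domain is configured, these daemons can be checked from the command line. The following is only an illustrative sketch: rpcinfo -p lists the RPC programs registered with &man.rpcbind.8; on a host, so ypserv should appear in its output on a working server, while ypwhich run on a client prints the name of the NIS server that &man.ypbind.8; is currently bound to. The hostname ellington matches the example master server used later in this section:

&prompt.user; rpcinfo -p ellington
&prompt.user; ypwhich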
Machine Types NIS master server NIS slave server NIS client There are three types of hosts in an NIS environment: NIS master server This server acts as a central repository for host configuration information and maintains the authoritative copy of the files used by all of the NIS clients. The passwd, group, and other various files used by NIS clients are stored on the master server. While it is possible for one machine to be an NIS master server for more than one NIS domain, this type of configuration will not be covered in this chapter as it assumes a relatively small-scale NIS environment. NIS slave servers NIS slave servers maintain copies of the NIS master's data files in order to provide redundancy. Slave servers also help to balance the load of the master server as NIS clients always attach to the NIS server which responds first. NIS clients NIS clients authenticate against the NIS server during log on. Information in many files can be shared using NIS. The master.passwd, group, and hosts files are commonly shared via NIS. Whenever a process on a client needs information that would normally be found in these files locally, it makes a query to the NIS server that it is bound to instead. Planning Considerations This section describes a sample NIS environment which consists of 15 &os; machines with no centralized point of administration. Each machine has its own /etc/passwd and /etc/master.passwd. These files are kept in sync with each other only through manual intervention. Currently, when a user is added to the lab, the process must be repeated on all 15 machines. The configuration of the lab will be as follows: Machine name IP address Machine role ellington 10.0.0.2 NIS master coltrane 10.0.0.3 NIS slave basie 10.0.0.4 Faculty workstation bird 10.0.0.5 Client machine cli[1-11] 10.0.0.[6-17] Other client machines If this is the first time an NIS scheme is being developed, it should be thoroughly planned ahead of time. Regardless of network size, several decisions need to be made as part of the planning process. Choosing a <acronym>NIS</acronym> Domain Name NIS domain name When a client broadcasts its requests for info, it includes the name of the NIS domain that it is part of. This is how multiple servers on one network can tell which server should answer which request. Think of the NIS domain name as the name for a group of hosts. Some organizations choose to use their Internet domain name for their NIS domain name. This is not recommended as it can cause confusion when trying to debug network problems. The NIS domain name should be unique within the network and it is helpful if it describes the group of machines it represents. For example, the Art department at Acme Inc. might be in the acme-art NIS domain. This example will use the domain name test-domain. However, some non-&os; operating systems require the NIS domain name to be the same as the Internet domain name. If one or more machines on the network have this restriction, the Internet domain name must be used as the NIS domain name. Physical Server Requirements There are several things to keep in mind when choosing a machine to use as a NIS server. Since NIS clients depend upon the availability of the server, choose a machine that is not rebooted frequently. The NIS server should ideally be a stand alone machine whose sole purpose is to be an NIS server. If the network is not heavily used, it is acceptable to put the NIS server on a machine running other services. 
However, if the NIS server becomes unavailable, it will adversely affect all NIS clients. Configuring the <acronym>NIS</acronym> Master Server The canonical copies of all NIS files are stored on the master server. The databases used to store the information are called NIS maps. In &os;, these maps are stored in /var/yp/[domainname] where [domainname] is the name of the NIS domain. Since multiple domains are supported, it is possible to have several directories, one for each domain. Each domain will have its own independent set of maps. NIS master and slave servers handle all NIS requests through &man.ypserv.8;. This daemon is responsible for receiving incoming requests from NIS clients, translating the requested domain and map name to a path to the corresponding database file, and transmitting data from the database back to the client. NIS server configuration Setting up a master NIS server can be relatively straight forward, depending on environmental needs. Since &os; provides built-in NIS support, it only needs to be enabled by adding the following lines to /etc/rc.conf: nisdomainname="test-domain" nis_server_enable="YES" nis_yppasswdd_enable="YES" This line sets the NIS domain name to test-domain. This automates the start up of the NIS server processes when the system boots. This enables the &man.rpc.yppasswdd.8; daemon so that users can change their NIS password from a client machine. Care must be taken in a multi-server domain where the server machines are also NIS clients. It is generally a good idea to force the servers to bind to themselves rather than allowing them to broadcast bind requests and possibly become bound to each other. Strange failure modes can result if one server goes down and others are dependent upon it. Eventually, all the clients will time out and attempt to bind to other servers, but the delay involved can be considerable and the failure mode is still present since the servers might bind to each other all over again. A server that is also a client can be forced to bind to a particular server by adding these additional lines to /etc/rc.conf: nis_client_enable="YES" # run client stuff as well nis_client_flags="-S NIS domain,server" After saving the edits, type /etc/netstart to restart the network and apply the values defined in /etc/rc.conf. Before initializing the NIS maps, start &man.ypserv.8;: &prompt.root; service ypserv start Initializing the <acronym>NIS</acronym> Maps NIS maps NIS maps are generated from the configuration files in /etc on the NIS master, with one exception: /etc/master.passwd. This is to prevent the propagation of passwords to all the servers in the NIS domain. Therefore, before the NIS maps are initialized, configure the primary password files: &prompt.root; cp /etc/master.passwd /var/yp/master.passwd &prompt.root; cd /var/yp &prompt.root; vi master.passwd It is advisable to remove all entries for system accounts as well as any user accounts that do not need to be propagated to the NIS clients, such as the root and any other administrative accounts. Ensure that the /var/yp/master.passwd is neither group or world readable by setting its permissions to 600. After completing this task, initialize the NIS maps. &os; includes the &man.ypinit.8; script to do this. When generating maps for the master server, include and specify the NIS domain name: ellington&prompt.root; ypinit -m test-domain Server Type: MASTER Domain: test-domain Creating an YP server will require that you answer a few questions. 
Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. At this point, we have to construct a list of this domains YP servers. rod.darktech.org is already known as master server. Please continue to add any slave servers, one per line. When you are done with the list, type a <control D>. master server : ellington next host to add: coltrane next host to add: ^D The current list of NIS servers looks like this: ellington coltrane Is this correct? [y/n: y] y [..output from map generation..] NIS Map update completed. ellington has been setup as an YP master server without any errors. This will create /var/yp/Makefile from /var/yp/Makefile.dist. By default, this file assumes that the environment has a single NIS server with only &os; clients. Since test-domain has a slave server, edit this line in /var/yp/Makefile so that it begins with a comment (#): NOPUSH = "True" Adding New Users Every time a new user is created, the user account must be added to the master NIS server and the NIS maps rebuilt. Until this occurs, the new user will not be able to login anywhere except on the NIS master. For example, to add the new user jsmith to the test-domain domain, run these commands on the master server: &prompt.root; pw useradd jsmith &prompt.root; cd /var/yp &prompt.root; make test-domain The user could also be added using adduser jsmith instead of pw useradd smith. Setting up a <acronym>NIS</acronym> Slave Server NIS slave server To set up an NIS slave server, log on to the slave server and edit /etc/rc.conf as for the master server. Do not generate any NIS maps, as these already exist on the master server. When running ypinit on the slave server, use (for slave) instead of (for master). This option requires the name of the NIS master in addition to the domain name, as seen in this example: coltrane&prompt.root; ypinit -s ellington test-domain Server Type: SLAVE Domain: test-domain Master: ellington Creating an YP server will require that you answer a few questions. Questions will all be asked at the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n] n Ok, please remember to go back and redo manually whatever fails. If not, something might not work. There will be no further questions. The remainder of the procedure should take a few minutes, to copy the databases from ellington. Transferring netgroup... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byuser... ypxfr: Exiting: Map successfully transferred Transferring netgroup.byhost... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byuid... ypxfr: Exiting: Map successfully transferred Transferring passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring group.bygid... ypxfr: Exiting: Map successfully transferred Transferring group.byname... ypxfr: Exiting: Map successfully transferred Transferring services.byname... ypxfr: Exiting: Map successfully transferred Transferring rpc.bynumber... ypxfr: Exiting: Map successfully transferred Transferring rpc.byname... ypxfr: Exiting: Map successfully transferred Transferring protocols.byname... ypxfr: Exiting: Map successfully transferred Transferring master.passwd.byname... ypxfr: Exiting: Map successfully transferred Transferring networks.byname... 
ypxfr: Exiting: Map successfully transferred Transferring networks.byaddr... ypxfr: Exiting: Map successfully transferred Transferring netid.byname... ypxfr: Exiting: Map successfully transferred Transferring hosts.byaddr... ypxfr: Exiting: Map successfully transferred Transferring protocols.bynumber... ypxfr: Exiting: Map successfully transferred Transferring ypservers... ypxfr: Exiting: Map successfully transferred Transferring hosts.byname... ypxfr: Exiting: Map successfully transferred coltrane has been setup as an YP slave server without any errors. Remember to update map ypservers on ellington. This will generate a directory on the slave server called /var/yp/test-domain which contains copies of the NIS master server's maps. Adding these /etc/crontab entries on each slave server will force the slaves to sync their maps with the maps on the master server: 20 * * * * root /usr/libexec/ypxfr passwd.byname 21 * * * * root /usr/libexec/ypxfr passwd.byuid These entries are not mandatory because the master server automatically attempts to push any map changes to its slaves. However, since clients may depend upon the slave server to provide correct password information, it is recommended to force frequent password map updates. This is especially important on busy networks where map updates might not always complete. To finish the configuration, run /etc/netstart on the slave server in order to start the NIS services. Setting Up an <acronym>NIS</acronym> Client An NIS client binds to an NIS server using &man.ypbind.8;. This daemon broadcasts RPC requests on the local network. These requests specify the domain name configured on the client. If an NIS server in the same domain receives one of the broadcasts, it will respond to ypbind, which will record the server's address. If there are several servers available, the client will use the address of the first server to respond and will direct all of its NIS requests to that server. The client will automatically ping the server on a regular basis to make sure it is still available. If it fails to receive a reply within a reasonable amount of time, ypbind will mark the domain as unbound and begin broadcasting again in the hopes of locating another server. NIS client configuration To configure a &os; machine to be an NIS client: Edit /etc/rc.conf and add the following lines in order to set the NIS domain name and start &man.ypbind.8; during network startup: nisdomainname="test-domain" nis_client_enable="YES" To import all possible password entries from the NIS server, use vipw to remove all user accounts except one from /etc/master.passwd. When removing the accounts, keep in mind that at least one local account should remain and this account should be a member of wheel. If there is a problem with NIS, this local account can be used to log in remotely, become the superuser, and fix the problem. Before saving the edits, add the following line to the end of the file: +::::::::: This line configures the client to provide anyone with a valid account in the NIS server's password maps an account on the client. There are many ways to configure the NIS client by modifying this line. One method is described in . For more detailed reading, refer to the book Managing NFS and NIS, published by O'Reilly Media. 
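At this point, the end of /etc/master.passwd on the client consists of the one remaining local account followed by the + line. The sketch below is only an illustration: the account name localadmin is hypothetical, and [password] stands for the account's real password hash, as in the other examples in this chapter:

localadmin:[password]:1001:0::0:0:Local administrator:/home/localadmin:/bin/csh
+:::::::::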
To import all possible group entries from the NIS server, add this line to /etc/group: +:*:: To start the NIS client immediately, execute the following commands as the superuser: &prompt.root; /etc/netstart &prompt.root; service ypbind start After completing these steps, running ypcat passwd on the client should show the server's passwd map. <acronym>NIS</acronym> Security Since RPC is a broadcast-based service, any system running ypbind within the same domain can retrieve the contents of the NIS maps. To prevent unauthorized transactions, &man.ypserv.8; supports a feature called securenets which can be used to restrict access to a given set of hosts. By default, this information is stored in /var/yp/securenets, unless &man.ypserv.8; is started with and an alternate path. This file contains entries that consist of a network specification and a network mask separated by white space. Lines starting with # are considered to be comments. A sample securenets might look like this: # allow connections from local host -- mandatory 127.0.0.1 255.255.255.255 # allow connections from any host # on the 192.168.128.0 network 192.168.128.0 255.255.255.0 # allow connections from any host # between 10.0.0.0 to 10.0.15.255 # this includes the machines in the testlab 10.0.0.0 255.255.240.0 If &man.ypserv.8; receives a request from an address that matches one of these rules, it will process the request normally. If the address fails to match a rule, the request will be ignored and a warning message will be logged. If the securenets does not exist, ypserv will allow connections from any host. is an alternate mechanism for providing access control instead of securenets. While either access control mechanism adds some security, they are both vulnerable to IP spoofing attacks. All NIS-related traffic should be blocked at the firewall. Servers using securenets may fail to serve legitimate NIS clients with archaic TCP/IP implementations. Some of these implementations set all host bits to zero when doing broadcasts or fail to observe the subnet mask when calculating the broadcast address. While some of these problems can be fixed by changing the client configuration, other problems may force the retirement of these client systems or the abandonment of securenets. TCP Wrapper The use of TCP Wrapper increases the latency of the NIS server. The additional delay may be long enough to cause timeouts in client programs, especially in busy networks with slow NIS servers. If one or more clients suffer from latency, convert those clients into NIS slave servers and force them to bind to themselves. Barring Some Users In this example, the basie system is a faculty workstation within the NIS domain. The passwd map on the master NIS server contains accounts for both faculty and students. This section demonstrates how to allow faculty logins on this system while refusing student logins. To prevent specified users from logging on to a system, even if they are present in the NIS database, use vipw to add -username with the correct number of colons towards the end of /etc/master.passwd on the client, where username is the username of a user to bar from logging in. The line with the blocked user must be before the + line that allows NIS users. 
In this example, bill is barred from logging on to basie: basie&prompt.root; cat /etc/master.passwd root:[password]:0:0::0:0:The super-user:/root:/bin/csh toor:[password]:0:0::0:0:The other super-user:/root:/bin/sh -daemon:*:1:1::0:0:Owner of many system processes:/root:/sbin/nologin -operator:*:2:5::0:0:System &:/:/sbin/nologin -bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/sbin/nologin -tty:*:4:65533::0:0:Tty Sandbox:/:/sbin/nologin -kmem:*:5:65533::0:0:KMem Sandbox:/:/sbin/nologin -games:*:7:13::0:0:Games pseudo-user:/usr/games:/sbin/nologin -news:*:8:8::0:0:News Subsystem:/:/sbin/nologin -man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/sbin/nologin -bind:*:53:53::0:0:Bind Sandbox:/:/sbin/nologin +daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin +operator:*:2:5::0:0:System &:/:/usr/sbin/nologin +bin:*:3:7::0:0:Binaries Commands and Source,,,:/:/usr/sbin/nologin +tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin +kmem:*:5:65533::0:0:KMem Sandbox:/:/usr/sbin/nologin +games:*:7:13::0:0:Games pseudo-user:/usr/games:/usr/sbin/nologin +news:*:8:8::0:0:News Subsystem:/:/usr/sbin/nologin +man:*:9:9::0:0:Mister Man Pages:/usr/share/man:/usr/sbin/nologin +bind:*:53:53::0:0:Bind Sandbox:/:/usr/sbin/nologin uucp:*:66:66::0:0:UUCP pseudo-user:/var/spool/uucppublic:/usr/libexec/uucp/uucico -xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/sbin/nologin -pop:*:68:6::0:0:Post Office Owner:/nonexistent:/sbin/nologin -nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/sbin/nologin +xten:*:67:67::0:0:X-10 daemon:/usr/local/xten:/usr/sbin/nologin +pop:*:68:6::0:0:Post Office Owner:/nonexistent:/usr/sbin/nologin +nobody:*:65534:65534::0:0:Unprivileged user:/nonexistent:/usr/sbin/nologin -bill::::::::: +::::::::: basie&prompt.root; Using Netgroups netgroups Barring specified users from logging on to individual systems becomes unscaleable on larger networks and quickly loses the main benefit of NIS: centralized administration. Netgroups were developed to handle large, complex networks with hundreds of users and machines. Their use is comparable to &unix; groups, where the main difference is the lack of a numeric ID and the ability to define a netgroup by including both user accounts and other netgroups. To expand on the example used in this chapter, the NIS domain will be extended to add the users and systems shown in Tables 28.2 and 28.3: Additional Users User Name(s) Description alpha, beta IT department employees charlie, delta IT department apprentices echo, foxtrott, golf, ... employees able, baker, ... interns
Additional Systems Machine Name(s) Description war, death, famine, pollution Only IT employees are allowed to log onto these servers. pride, greed, envy, wrath, lust, sloth All members of the IT department are allowed to login onto these servers. one, two, three, four, ... Ordinary workstations used by employees. trashcan A very old machine without any critical data. Even interns are allowed to use this system.
When using netgroups to configure this scenario, each user is assigned to one or more netgroups and logins are then allowed or forbidden for all members of the netgroup. When adding a new machine, login restrictions must be defined for all netgroups. When a new user is added, the account must be added to one or more netgroups. If the NIS setup is planned carefully, only one central configuration file needs modification to grant or deny access to machines. The first step is the initialization of the NIS netgroup map. In &os;, this map is not created by default. On the NIS master server, use an editor to create a map named /var/yp/netgroup. This example creates four netgroups to represent IT employees, IT apprentices, employees, and interns: IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) USERS (,echo,test-domain) (,foxtrott,test-domain) \ (,golf,test-domain) INTERNS (,able,test-domain) (,baker,test-domain) Each entry configures a netgroup. The first column in an entry is the name of the netgroup. Each set of brackets represents either a group of one or more users or the name of another netgroup. When specifying a user, the three comma-delimited fields inside each group represent: The name of the host(s) where the other fields representing the user are valid. If a hostname is not specified, the entry is valid on all hosts. The name of the account that belongs to this netgroup. The NIS domain for the account. Accounts may be imported from other NIS domains into a netgroup. If a group contains multiple users, separate each user with whitespace. Additionally, each field may contain wildcards. See &man.netgroup.5; for details. netgroups Netgroup names longer than 8 characters should not be used. The names are case sensitive and using capital letters for netgroup names is an easy way to distinguish between user, machine and netgroup names. Some non-&os; NIS clients cannot handle netgroups containing more than 15 entries. This limit may be circumvented by creating several sub-netgroups with 15 users or fewer and a real netgroup consisting of the sub-netgroups, as seen in this example: BIGGRP1 (,joe1,domain) (,joe2,domain) (,joe3,domain) [...] BIGGRP2 (,joe16,domain) (,joe17,domain) [...] BIGGRP3 (,joe31,domain) (,joe32,domain) BIGGROUP BIGGRP1 BIGGRP2 BIGGRP3 Repeat this process if more than 225 (15 times 15) users exist within a single netgroup. To activate and distribute the new NIS map: ellington&prompt.root; cd /var/yp ellington&prompt.root; make This will generate the three NIS maps netgroup, netgroup.byhost and netgroup.byuser. Use the map key option of &man.ypcat.1; to check if the new NIS maps are available: ellington&prompt.user; ypcat -k netgroup ellington&prompt.user; ypcat -k netgroup.byhost ellington&prompt.user; ypcat -k netgroup.byuser The output of the first command should resemble the contents of /var/yp/netgroup. The second command only produces output if host-specific netgroups were created. The third command is used to get the list of netgroups for a user. To configure a client, use &man.vipw.8; to specify the name of the netgroup. For example, on the server named war, replace this line: +::::::::: with +@IT_EMP::::::::: This specifies that only the users defined in the netgroup IT_EMP will be imported into this system's password database and only those users are allowed to login to this system. 
This configuration also applies to the ~ function of the shell and all routines which convert between user names and numerical user IDs. In other words, cd ~user will not work, ls -l will show the numerical ID instead of the username, and find . -user joe -print will fail with the message No such user. To fix this, import all user entries without allowing them to login into the servers. This can be achieved by adding an extra line: - +:::::::::/sbin/nologin + +:::::::::/usr/sbin/nologin This line configures the client to import all entries but to replace the shell in those entries with - /sbin/nologin. + /usr/sbin/nologin. Make sure that extra line is placed after +@IT_EMP:::::::::. Otherwise, all user accounts imported from NIS will have - /sbin/nologin as their login + /usr/sbin/nologin as their login shell and no one will be able to login to the system. To configure the less important servers, replace the old +::::::::: on the servers with these lines: +@IT_EMP::::::::: +@IT_APP::::::::: -+:::::::::/sbin/nologin ++:::::::::/usr/sbin/nologin The corresponding lines for the workstations would be: +@IT_EMP::::::::: +@USERS::::::::: -+:::::::::/sbin/nologin ++:::::::::/usr/sbin/nologin NIS supports the creation of netgroups from other netgroups which can be useful if the policy regarding user access changes. One possibility is the creation of role-based netgroups. For example, one might create a netgroup called BIGSRV to define the login restrictions for the important servers, another netgroup called SMALLSRV for the less important servers, and a third netgroup called USERBOX for the workstations. Each of these netgroups contains the netgroups that are allowed to login onto these machines. The new entries for the NIS netgroup map would look like this: BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS This method of defining login restrictions works reasonably well when it is possible to define groups of machines with identical restrictions. Unfortunately, this is the exception and not the rule. Most of the time, the ability to define login restrictions on a per-machine basis is required. Machine-specific netgroup definitions are another possibility to deal with the policy changes. In this scenario, the /etc/master.passwd of each system contains two lines starting with +. The first line adds a netgroup with the accounts allowed to login onto this machine and the second line adds all other - accounts with /sbin/nologin as shell. It - is recommended to use the ALL-CAPS version of - the hostname as the name of the netgroup: + accounts with /usr/sbin/nologin as shell. + It is recommended to use the ALL-CAPS version + of the hostname as the name of the netgroup: +@BOXNAME::::::::: -+:::::::::/sbin/nologin ++:::::::::/usr/sbin/nologin Once this task is completed on all the machines, there is no longer a need to modify the local versions of /etc/master.passwd ever again. All further changes can be handled by modifying the NIS map. 
Here is an example of a possible netgroup map for this scenario: # Define groups of users first IT_EMP (,alpha,test-domain) (,beta,test-domain) IT_APP (,charlie,test-domain) (,delta,test-domain) DEPT1 (,echo,test-domain) (,foxtrott,test-domain) DEPT2 (,golf,test-domain) (,hotel,test-domain) DEPT3 (,india,test-domain) (,juliet,test-domain) ITINTERN (,kilo,test-domain) (,lima,test-domain) D_INTERNS (,able,test-domain) (,baker,test-domain) # # Now, define some groups based on roles USERS DEPT1 DEPT2 DEPT3 BIGSRV IT_EMP IT_APP SMALLSRV IT_EMP IT_APP ITINTERN USERBOX IT_EMP ITINTERN USERS # # And a groups for a special tasks # Allow echo and golf to access our anti-virus-machine SECURITY IT_EMP (,echo,test-domain) (,golf,test-domain) # # machine-based netgroups # Our main servers WAR BIGSRV FAMINE BIGSRV # User india needs access to this server POLLUTION BIGSRV (,india,test-domain) # # This one is really important and needs more access restrictions DEATH IT_EMP # # The anti-virus-machine mentioned above ONE SECURITY # # Restrict a machine to a single user TWO (,hotel,test-domain) # [...more groups to follow] It may not always be advisable to use machine-based netgroups. When deploying a couple of dozen or hundreds of systems, role-based netgroups instead of machine-based netgroups may be used to keep the size of the NIS map within reasonable limits.
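Whenever /var/yp/netgroup is modified on the master, remember to rebuild and push the maps with make in /var/yp as shown earlier. A single entry can then be checked with &man.ypmatch.1;; this is only an illustrative check, using the IT_EMP netgroup from the example map:

ellington&prompt.user; ypmatch IT_EMP netgroup

The output should list the members defined for that netgroup in the map, in this case the alpha and beta entries.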
Password Formats NIS password formats NIS requires that all hosts within an NIS domain use the same format for encrypting passwords. If users have trouble authenticating on an NIS client, it may be due to a differing password format. In a heterogeneous network, the format must be supported by all operating systems, where DES is the lowest common standard. To check which format a server or client is using, look at this section of /etc/login.conf: default:\ :passwd_format=des:\ :copyright=/etc/COPYRIGHT:\ [Further entries elided] In this example, the system is using the DES format. Other possible values are blf for Blowfish and md5 for MD5 encrypted passwords. If the format on a host needs to be edited to match the one being used in the NIS domain, the login capability database must be rebuilt after saving the change: &prompt.root; cap_mkdb /etc/login.conf The format of passwords for existing user accounts will not be updated until each user changes their password after the login capability database is rebuilt.
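If it is unclear which format produced an existing hash, the prefix of the password field is a strong hint: MD5 hashes begin with $1$, Blowfish hashes begin with $2a$ or $2b$, and a bare 13-character string with no prefix is traditional DES. As an illustrative check on the NIS master, assuming the jsmith account created earlier in this chapter, the hash can be displayed with:

ellington&prompt.root; grep '^jsmith:' /var/yp/master.passwd | cut -d: -f2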
Lightweight Directory Access Protocol (<acronym>LDAP</acronym>) Tom Rhodes Originally contributed by Rocky Hotas Updates by LDAP The Lightweight Directory Access Protocol (LDAP) is an application layer protocol used to access, modify, and authenticate objects using a distributed directory information service. Think of it as a phone or record book which stores several levels of hierarchical, homogeneous information. It is used in Active Directory and OpenLDAP networks and allows users to access to several levels of internal information utilizing a single account. For example, email authentication, pulling employee contact information, and internal website authentication might all make use of a single user account in the LDAP server's record base. This section provides a quick start guide for configuring an LDAP server on a &os; system. It assumes that the administrator already has a design plan which includes the type of information to store, what that information will be used for, which users should have access to that information, and how to secure this information from unauthorized access. <acronym>LDAP</acronym> Terminology and Structure LDAP uses several terms which should be understood before starting the configuration. All directory entries consist of a group of attributes. Each of these attribute sets contains a unique identifier known as a Distinguished Name (DN) which is normally built from several other attributes such as the common or Relative Distinguished Name (RDN). Similar to how directories have absolute and relative paths, consider a DN as an absolute path and the RDN as the relative path. An example LDAP entry looks like the following. This example searches for the entry for the specified user account (uid), organizational unit (ou), and organization (o): &prompt.user; ldapsearch -xb "uid=trhodes,ou=users,o=example.com" # extended LDIF # # LDAPv3 # base <uid=trhodes,ou=users,o=example.com> with scope subtree # filter: (objectclass=*) # requesting: ALL # # trhodes, users, example.com dn: uid=trhodes,ou=users,o=example.com mail: trhodes@example.com cn: Tom Rhodes uid: trhodes telephoneNumber: (123) 456-7890 # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 This example entry shows the values for the dn, mail, cn, uid, and telephoneNumber attributes. The cn attribute is the RDN. More information about LDAP and its terminology can be found at http://www.openldap.org/doc/admin24/intro.html. Configuring an <acronym>LDAP</acronym> Server LDAP Server &os; does not provide a built-in LDAP server. Begin the configuration by installing net/openldap-server package or port: &prompt.root; pkg install openldap-server There is a large set of default options enabled in the package. Review them by running pkg info openldap-server. If they are not sufficient (for example if SQL support is needed), please consider recompiling the port using the appropriate framework. The installation creates the directory /var/db/openldap-data to hold the data. The directory to store the certificates must be created: &prompt.root; mkdir /usr/local/etc/openldap/private The next phase is to configure the Certificate Authority. The following commands must be executed from /usr/local/etc/openldap/private. This is important as the file permissions need to be restrictive and users should not have access to these files. More detailed information about certificates and their parameters can be found in . 
To create the Certificate Authority, start with this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -x509 -keyout ca.key -out ../ca.crt The entries for the prompts may be generic except for the Common Name. This entry must be different than the system hostname. If this will be a self signed certificate, prefix the hostname with CA for Certificate Authority. The next task is to create a certificate signing request and a private key. Input this command and follow the prompts: &prompt.root; openssl req -days 365 -nodes -new -keyout server.key -out server.csr During the certificate generation process, be sure to correctly set the Common Name attribute. The Certificate Signing Request must be signed with the Certificate Authority in order to be used as a valid certificate: &prompt.root; openssl x509 -req -days 365 -in server.csr -out ../server.crt -CA ../ca.crt -CAkey ca.key -CAcreateserial The final part of the certificate generation process is to generate and sign the client certificates: &prompt.root; openssl req -days 365 -nodes -new -keyout client.key -out client.csr &prompt.root; openssl x509 -req -days 3650 -in client.csr -out ../client.crt -CA ../ca.crt -CAkey ca.key Remember to use the same Common Name attribute when prompted. When finished, ensure that a total of eight (8) new files have been generated through the proceeding commands. The daemon running the OpenLDAP server is slapd. Its configuration is performed through slapd.ldif: the old slapd.conf has been deprecated by OpenLDAP. Configuration examples for slapd.ldif are available and can also be found in /usr/local/etc/openldap/slapd.ldif.sample. Options are documented in slapd-config(5). Each section of slapd.ldif, like all the other LDAP attribute sets, is uniquely identified through a DN. Be sure that no blank lines are left between the dn: statement and the desired end of the section. In the following example, TLS will be used to implement a secure channel. The first section represents the global configuration: # # See slapd-config(5) for details on configuration options. # This file should NOT be world readable. # dn: cn=config objectClass: olcGlobal cn: config # # # Define global ACLs to disable default read access. # olcArgsFile: /var/run/openldap/slapd.args olcPidFile: /var/run/openldap/slapd.pid olcTLSCertificateFile: /usr/local/etc/openldap/server.crt olcTLSCertificateKeyFile: /usr/local/etc/openldap/private/server.key olcTLSCACertificateFile: /usr/local/etc/openldap/ca.crt #olcTLSCipherSuite: HIGH olcTLSProtocolMin: 3.1 olcTLSVerifyClient: never The Certificate Authority, server certificate and server private key files must be specified here. It is recommended to let the clients choose the security cipher and omit option olcTLSCipherSuite (incompatible with TLS clients other than openssl). Option olcTLSProtocolMin lets the server require a minimum security level: it is recommended. While verification is mandatory for the server, it is not for the client: olcTLSVerifyClient: never. The second section is about the backend modules and can be configured as follows: # # Load dynamic backend modules: # dn: cn=module,cn=config objectClass: olcModuleList cn: module olcModulepath: /usr/local/libexec/openldap olcModuleload: back_mdb.la #olcModuleload: back_bdb.la #olcModuleload: back_hdb.la #olcModuleload: back_ldap.la #olcModuleload: back_passwd.la #olcModuleload: back_shell.la The third section is devoted to load the needed ldif schemas to be used by the databases: they are essential. 
dn: cn=schema,cn=config objectClass: olcSchemaConfig cn: schema include: file:///usr/local/etc/openldap/schema/core.ldif include: file:///usr/local/etc/openldap/schema/cosine.ldif include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif include: file:///usr/local/etc/openldap/schema/nis.ldif Next, the frontend configuration section: # Frontend settings # dn: olcDatabase={-1}frontend,cn=config objectClass: olcDatabaseConfig objectClass: olcFrontendConfig olcDatabase: {-1}frontend olcAccess: to * by * read # # Sample global access control policy: # Root DSE: allow anyone to read it # Subschema (sub)entry DSE: allow anyone to read it # Other DSEs: # Allow self write access # Allow authenticated users read access # Allow anonymous users to authenticate # #olcAccess: to dn.base="" by * read #olcAccess: to dn.base="cn=Subschema" by * read #olcAccess: to * # by self write # by users read # by anonymous auth # # if no access controls are present, the default policy # allows anyone and everyone to read anything but restricts # updates to rootdn. (e.g., "access to * by * read") # # rootdn can always read and write EVERYTHING! # olcPasswordHash: {SSHA} # {SSHA} is already the default for olcPasswordHash Another section is devoted to the configuration backend; the only way to later access the OpenLDAP server configuration is as a global super-user. dn: olcDatabase={0}config,cn=config objectClass: olcDatabaseConfig olcDatabase: {0}config olcAccess: to * by * none olcRootPW: {SSHA}iae+lrQZILpiUdf16Z9KmDmSwT77Dj4U The default administrator username is cn=config. Type slappasswd in a shell, choose a password, and use its hash in olcRootPW. If this option is not specified now, before slapd.ldif is imported, no one will later be able to modify the global configuration section. The last section is about the database backend: ####################################################################### # LMDB database definitions ####################################################################### # dn: olcDatabase=mdb,cn=config objectClass: olcDatabaseConfig objectClass: olcMdbConfig olcDatabase: mdb olcDbMaxSize: 1073741824 olcSuffix: dc=domain,dc=example olcRootDN: cn=mdbadmin,dc=domain,dc=example # Cleartext passwords, especially for the rootdn, should # be avoided. See slappasswd(8) and slapd-config(5) for details. # Use of strong authentication encouraged. olcRootPW: {SSHA}X2wHvIWDk6G76CQyCMS1vDCvtICWgn0+ # The database directory MUST exist prior to running slapd AND # should only be accessible by the slapd and slap tools. # Mode 700 recommended. olcDbDirectory: /var/db/openldap-data # Indices to maintain olcDbIndex: objectClass eq This database hosts the actual contents of the LDAP directory. Types other than mdb are available. Its super-user, not to be confused with the global one, is configured here: a (possibly custom) username in olcRootDN and the password hash in olcRootPW; slappasswd can be used as before. This repository contains four examples of slapd.ldif. To convert an existing slapd.conf into slapd.ldif, refer to this page (note that this may introduce some unnecessary options). When the configuration is completed, slapd.ldif must be placed in an empty directory.
It is recommended to create it as: &prompt.root; mkdir /usr/local/etc/openldap/slapd.d/ Import the configuration database: &prompt.root; /usr/local/sbin/slapadd -n0 -F /usr/local/etc/openldap/slapd.d/ -l /usr/local/etc/openldap/slapd.ldif Start the slapd daemon: &prompt.root; /usr/local/libexec/slapd -F /usr/local/etc/openldap/slapd.d/ Option -d can be used for debugging, as specified in slapd(8). To verify that the server is running and working: &prompt.root; ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts # extended LDIF # # LDAPv3 # base <> with scope baseObject # filter: (objectclass=*) # requesting: namingContexts # # dn: namingContexts: dc=domain,dc=example # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 The server must still be trusted. If that has never been done before, follow these instructions. Install the OpenSSL package or port: &prompt.root; pkg install openssl From the directory where ca.crt is stored (in this example, /usr/local/etc/openldap), run: &prompt.root; c_rehash . Both the CA and the server certificate are now correctly recognized in their respective roles. To verify this, run this command from the server.crt directory: &prompt.root; openssl verify -verbose -CApath . server.crt If slapd was running, restart it. As stated in /usr/local/etc/rc.d/slapd, to properly run slapd at boot the following lines must be added to /etc/rc.conf: slapd_enable="YES" slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://0.0.0.0/"' slapd_sockets="/var/run/openldap/ldapi" slapd_cn_config="YES" slapd does not provide debugging at boot. Check /var/log/debug.log, dmesg -a, and /var/log/messages for this purpose. The following example adds the group team and the user john to the domain.example LDAP database, which is still empty. First, create the file domain.ldif: &prompt.root; cat domain.ldif dn: dc=domain,dc=example objectClass: dcObject objectClass: organization o: domain.example dc: domain dn: ou=groups,dc=domain,dc=example objectClass: top objectClass: organizationalunit ou: groups dn: ou=users,dc=domain,dc=example objectClass: top objectClass: organizationalunit ou: users dn: cn=team,ou=groups,dc=domain,dc=example objectClass: top objectClass: posixGroup cn: team gidNumber: 10001 dn: uid=john,ou=users,dc=domain,dc=example objectClass: top objectClass: account objectClass: posixAccount objectClass: shadowAccount cn: John McUser uid: john uidNumber: 10001 gidNumber: 10001 homeDirectory: /home/john/ loginShell: /usr/bin/bash userPassword: secret See the OpenLDAP documentation for more details. Use slappasswd to replace the plain text password secret with a hash in userPassword. The path specified as loginShell must exist in all the systems where john is allowed to log in. Finally, use the mdb administrator to modify the database: &prompt.root; ldapadd -W -D "cn=mdbadmin,dc=domain,dc=example" -f domain.ldif Modifications to the global configuration section can only be performed by the global super-user. For example, assume that the option olcTLSCipherSuite: HIGH:MEDIUM:SSLv3 was initially specified and must now be deleted. First, create a file that contains the following: &prompt.root; cat global_mod dn: cn=config changetype: modify delete: olcTLSCipherSuite Then, apply the modifications: &prompt.root; ldapmodify -f global_mod -x -D "cn=config" -W When asked, provide the password chosen in the configuration backend section. The username is not required: here, cn=config represents the DN of the database section to be modified.
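To double-check that the attribute was actually removed, the global section can be queried again. This is a minimal sketch which assumes the cn=config password chosen earlier; if the deletion succeeded, olcTLSCipherSuite should no longer appear in the output: &prompt.root; ldapsearch -x -D "cn=config" -W -b "cn=config" "(objectClass=olcGlobal)" olcTLSCipherSuite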
Alternatively, use ldapmodify to delete a single line of the database, ldapdelete to delete a whole entry. If something goes wrong, or if the global super-user cannot access the configuration backend, it is possible to delete and re-write the whole configuration: &prompt.root; rm -rf /usr/local/etc/openldap/slapd.d/ slapd.ldif can then be edited and imported again. Please, follow this procedure only when no other solution is available. This is the configuration of the server only. The same machine can also host an LDAP client, with its own separate configuration. Dynamic Host Configuration Protocol (<acronym>DHCP</acronym>) Dynamic Host Configuration Protocol DHCP Internet Systems Consortium (ISC) The Dynamic Host Configuration Protocol (DHCP) allows a system to connect to a network in order to be assigned the necessary addressing information for communication on that network. &os; includes the OpenBSD version of dhclient which is used by the client to obtain the addressing information. &os; does not install a DHCP server, but several servers are available in the &os; Ports Collection. The DHCP protocol is fully described in RFC 2131. Informational resources are also available at isc.org/downloads/dhcp/. This section describes how to use the built-in DHCP client. It then describes how to install and configure a DHCP server. In &os;, the &man.bpf.4; device is needed by both the DHCP server and DHCP client. This device is included in the GENERIC kernel that is installed with &os;. Users who prefer to create a custom kernel need to keep this device if DHCP is used. It should be noted that bpf also allows privileged users to run network packet sniffers on that system. Configuring a <acronym>DHCP</acronym> Client DHCP client support is included in the &os; installer, making it easy to configure a newly installed system to automatically receive its networking addressing information from an existing DHCP server. Refer to for examples of network configuration. UDP When dhclient is executed on the client machine, it begins broadcasting requests for configuration information. By default, these requests use UDP port 68. The server replies on UDP port 67, giving the client an IP address and other relevant network information such as a subnet mask, default gateway, and DNS server addresses. This information is in the form of a DHCP lease and is valid for a configurable time. This allows stale IP addresses for clients no longer connected to the network to automatically be reused. DHCP clients can obtain a great deal of information from the server. An exhaustive list may be found in &man.dhcp-options.5;. By default, when a &os; system boots, its DHCP client runs in the background, or asynchronously. Other startup scripts continue to run while the DHCP process completes, which speeds up system startup. Background DHCP works well when the DHCP server responds quickly to the client's requests. However, DHCP may take a long time to complete on some systems. If network services attempt to run before DHCP has assigned the network addressing information, they will fail. Using DHCP in synchronous mode prevents this problem as it pauses startup until the DHCP configuration has completed. This line in /etc/rc.conf is used to configure background or asynchronous mode: ifconfig_fxp0="DHCP" This line may already exist if the system was configured to use DHCP during installation. Replace the fxp0 shown in these examples with the name of the interface to be dynamically configured, as described in . 
To instead configure the system to use synchronous mode, and to pause during startup while DHCP completes, use SYNCDHCP: ifconfig_fxp0="SYNCDHCP" Additional client options are available. Search for dhclient in &man.rc.conf.5; for details. DHCP configuration files The DHCP client uses the following files: /etc/dhclient.conf The configuration file used by dhclient. Typically, this file contains only comments as the defaults are suitable for most clients. This configuration file is described in &man.dhclient.conf.5;. /sbin/dhclient More information about the command itself can be found in &man.dhclient.8;. /sbin/dhclient-script The &os;-specific DHCP client configuration script. It is described in &man.dhclient-script.8;, but should not need any user modification to function properly. /var/db/dhclient.leases.interface The DHCP client keeps a database of valid leases in this file, which is written as a log and is described in &man.dhclient.leases.5;. Installing and Configuring a <acronym>DHCP</acronym> Server This section demonstrates how to configure a &os; system to act as a DHCP server using the Internet Systems Consortium (ISC) implementation of the DHCP server. This implementation and its documentation can be installed using the net/isc-dhcp43-server package or port. DHCP server DHCP installation The installation of net/isc-dhcp43-server installs a sample configuration file. Copy /usr/local/etc/dhcpd.conf.example to /usr/local/etc/dhcpd.conf and make any edits to this new file. DHCP dhcpd.conf The configuration file is comprised of declarations for subnets and hosts which define the information that is provided to DHCP clients. For example, these lines configure the following: option domain-name "example.org"; option domain-name-servers ns1.example.org; option subnet-mask 255.255.255.0; default-lease-time 600; max-lease-time 72400; ddns-update-style none; subnet 10.254.239.0 netmask 255.255.255.224 { range 10.254.239.10 10.254.239.20; option routers rtr-239-0-1.example.org, rtr-239-0-2.example.org; } host fantasia { hardware ethernet 08:00:07:26:c0:a5; fixed-address fantasia.fugue.com; } This option specifies the default search domain that will be provided to clients. Refer to &man.resolv.conf.5; for more information. This option specifies a comma separated list of DNS servers that the client should use. They can be listed by their Fully Qualified Domain Names (FQDN), as seen in the example, or by their IP addresses. The subnet mask that will be provided to clients. The default lease expiry time in seconds. A client can be configured to override this value. The maximum allowed length of time, in seconds, for a lease. Should a client request a longer lease, a lease will still be issued, but it will only be valid for max-lease-time. The default of disables dynamic DNS updates. Changing this to configures the DHCP server to update a DNS server whenever it hands out a lease so that the DNS server knows which IP addresses are associated with which computers in the network. Do not change the default setting unless the DNS server has been configured to support dynamic DNS. This line creates a pool of available IP addresses which are reserved for allocation to DHCP clients. The range of addresses must be valid for the network or subnet specified in the previous line. Declares the default gateway that is valid for the network or subnet specified before the opening { bracket. Specifies the hardware MAC address of a client so that the DHCP server can recognize the client when it makes a request. 
Specifies that this host should always be given the same IP address. Using the hostname is correct, since the DHCP server will resolve the hostname before returning the lease information. This configuration file supports many more options. Refer to dhcpd.conf(5), installed with the server, for details and examples. Once the configuration of dhcpd.conf is complete, enable the DHCP server in /etc/rc.conf: dhcpd_enable="YES" dhcpd_ifaces="dc0" Replace the dc0 with the interface (or interfaces, separated by whitespace) that the DHCP server should listen on for DHCP client requests. Start the server by issuing the following command: &prompt.root; service isc-dhcpd start Any future changes to the configuration of the server will require the dhcpd service to be stopped and then started using &man.service.8;. The DHCP server uses the following files. Note that the manual pages are installed with the server software. DHCP configuration files /usr/local/sbin/dhcpd More information about the dhcpd server can be found in dhcpd(8). /usr/local/etc/dhcpd.conf The server configuration file needs to contain all the information that should be provided to clients, along with information regarding the operation of the server. This configuration file is described in dhcpd.conf(5). /var/db/dhcpd.leases The DHCP server keeps a database of leases it has issued in this file, which is written as a log. Refer to dhcpd.leases(5), which gives a slightly longer description. /usr/local/sbin/dhcrelay This daemon is used in advanced environments where one DHCP server forwards a request from a client to another DHCP server on a separate network. If this functionality is required, install the net/isc-dhcp43-relay package or port. The installation includes dhcrelay(8) which provides more detail. Domain Name System (<acronym>DNS</acronym>) DNS Domain Name System (DNS) is the protocol through which domain names are mapped to IP addresses, and vice versa. DNS is coordinated across the Internet through a somewhat complex system of authoritative root, Top Level Domain (TLD), and other smaller-scale name servers, which host and cache individual domain information. It is not necessary to run a name server to perform DNS lookups on a system. resolver reverse DNS root zone The following table describes some of the terms associated with DNS: <acronym>DNS</acronym> Terminology Term Definition Forward DNS Mapping of hostnames to IP addresses. Origin Refers to the domain covered in a particular zone file. Resolver A system process through which a machine queries a name server for zone information. Reverse DNS Mapping of IP addresses to hostnames. Root zone The beginning of the Internet zone hierarchy. All zones fall under the root zone, similar to how all files in a file system fall under the root directory. Zone An individual domain, subdomain, or portion of the DNS administered by the same authority.
zones examples Examples of zones: . is how the root zone is usually referred to in documentation. org. is a Top Level Domain (TLD) under the root zone. example.org. is a zone under the org. TLD. 1.168.192.in-addr.arpa is a zone referencing all IP addresses which fall under the 192.168.1.* IP address space. As one can see, the more specific part of a hostname appears to its left. For example, example.org. is more specific than org., as org. is more specific than the root zone. The layout of each part of a hostname is much like a file system: the /dev directory falls within the root, and so on. Reasons to Run a Name Server Name servers generally come in two forms: authoritative name servers, and caching (also known as resolving) name servers. An authoritative name server is needed when: One wants to serve DNS information to the world, replying authoritatively to queries. A domain, such as example.org, is registered and IP addresses need to be assigned to hostnames under it. An IP address block requires reverse DNS entries (IP to hostname). A backup or second name server, called a slave, will reply to queries. A caching name server is needed when: A local DNS server may cache and respond more quickly than querying an outside name server. When one queries for www.FreeBSD.org, the resolver usually queries the uplink ISP's name server, and retrieves the reply. With a local, caching DNS server, the query only has to be made once to the outside world by the caching DNS server. Additional queries will not have to go outside the local network, since the information is cached locally. <acronym>DNS</acronym> Server Configuration Unbound is provided in the &os; base system. By default, it will provide DNS resolution to the local machine only. While the base system package can be configured to provide resolution services beyond the local machine, it is recommended that such requirements be addressed by installing Unbound from the &os; Ports Collection. To enable Unbound, add the following to /etc/rc.conf: local_unbound_enable="YES" Any existing nameservers in /etc/resolv.conf will be configured as forwarders in the new Unbound configuration. If any of the listed nameservers do not support DNSSEC, local DNS resolution will fail. Be sure to test each nameserver and remove any that fail the test. The following command will show the trust tree or a failure for a nameserver running on 192.168.1.1: &prompt.user; drill -S FreeBSD.org @192.168.1.1 Once each nameserver is confirmed to support DNSSEC, start Unbound: &prompt.root; service local_unbound onestart This will take care of updating /etc/resolv.conf so that queries for DNSSEC secured domains will now work. For example, run the following to validate the FreeBSD.org DNSSEC trust tree: &prompt.user; drill -S FreeBSD.org ;; Number of trusted keys: 1 ;; Chasing: freebsd.org. A DNSSEC Trust tree: freebsd.org. (A) |---freebsd.org. (DNSKEY keytag: 36786 alg: 8 flags: 256) |---freebsd.org. (DNSKEY keytag: 32659 alg: 8 flags: 257) |---freebsd.org. (DS keytag: 32659 digest type: 2) |---org. (DNSKEY keytag: 49587 alg: 7 flags: 256) |---org. (DNSKEY keytag: 9795 alg: 7 flags: 257) |---org. (DNSKEY keytag: 21366 alg: 7 flags: 257) |---org. (DS keytag: 21366 digest type: 1) | |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) | |---. (DNSKEY keytag: 19036 alg: 8 flags: 257) |---org. (DS keytag: 21366 digest type: 2) |---. (DNSKEY keytag: 40926 alg: 8 flags: 256) |---. (DNSKEY keytag: 19036 alg: 8 flags: 257) ;; Chase successful
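As an additional quick check that queries are now answered by the local resolver, drill can be pointed at the loopback address (this assumes the default setup, where local_unbound listens on 127.0.0.1): &prompt.user; drill www.FreeBSD.org @127.0.0.1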
Apache HTTP Server Murray Stokely Contributed by web servers setting up Apache The open source Apache HTTP Server is the most widely used web server. &os; does not install this web server by default, but it can be installed from the www/apache24 package or port. This section summarizes how to configure and start version 2.x of the Apache HTTP Server on &os;. For more detailed information about Apache 2.X and its configuration directives, refer to httpd.apache.org. Configuring and Starting Apache Apache configuration file In &os;, the main Apache HTTP Server configuration file is installed as /usr/local/etc/apache2x/httpd.conf, where x represents the version number. In this ASCII text file, comment lines begin with a #. The most frequently modified directives are: ServerRoot "/usr/local" Specifies the default directory hierarchy for the Apache installation. Binaries are stored in the bin and sbin subdirectories of the server root and configuration files are stored in the etc/apache2x subdirectory. ServerAdmin you@example.com Change this to the email address where problems with the server should be reported. This address also appears on some server-generated pages, such as error documents. ServerName www.example.com:80 Allows an administrator to set a hostname which is sent back to clients for the server. For example, www can be used instead of the actual hostname. If the system does not have a registered DNS name, enter its IP address instead. If the server will listen on an alternate port, change 80 to the alternate port number. DocumentRoot "/usr/local/www/apache2x/data" The directory where documents will be served from. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations. It is always a good idea to make a backup copy of the default Apache configuration file before making changes. When the configuration of Apache is complete, save the file and verify the configuration using apachectl. Running apachectl configtest should return Syntax OK. Apache starting or stopping To launch Apache at system startup, add the following line to /etc/rc.conf: apache24_enable="YES" If Apache should be started with non-default options, the following line may be added to /etc/rc.conf to specify the needed flags: apache24_flags="" If apachectl does not report configuration errors, start httpd now: &prompt.root; service apache24 start The httpd service can be tested by entering http://localhost in a web browser, replacing localhost with the fully-qualified domain name of the machine running httpd. The default web page that is displayed is /usr/local/www/apache24/data/index.html. After making subsequent configuration changes while httpd is running, the Apache configuration can be tested for errors using the following command: &prompt.root; service apache24 configtest It is important to note that configtest is not an &man.rc.8; standard, and should not be expected to work for all startup scripts. Virtual Hosting Virtual hosting allows multiple websites to run on one Apache server. The virtual hosts can be IP-based or name-based. IP-based virtual hosting uses a different IP address for each website. Name-based virtual hosting uses the client's HTTP/1.1 headers to figure out the hostname, which allows the websites to share the same IP address. To set up Apache to use name-based virtual hosting, add a VirtualHost block for each website.
For example, for the web server named www.domain.tld with a virtual domain of www.someotherdomain.tld, add the following entries to httpd.conf: <VirtualHost *> ServerName www.domain.tld DocumentRoot /www/domain.tld </VirtualHost> <VirtualHost *> ServerName www.someotherdomain.tld DocumentRoot /www/someotherdomain.tld </VirtualHost> For each virtual host, replace the values for ServerName and DocumentRoot with the values to be used. For more information about setting up virtual hosts, consult the official Apache documentation at: http://httpd.apache.org/docs/vhosts/. Apache Modules Apache modules Apache uses modules to augment the functionality provided by the basic server. Refer to http://httpd.apache.org/docs/current/mod/ for a complete listing of the available modules and their configuration details. In &os;, some modules can be compiled with the www/apache24 port. Type make config within /usr/ports/www/apache24 to see which modules are available and which are enabled by default. If the module is not compiled with the port, the &os; Ports Collection provides an easy way to install many modules. This section describes three of the most commonly used modules. <filename>mod_ssl</filename> web servers secure SSL cryptography The mod_ssl module uses the OpenSSL library to provide strong cryptography via the Secure Sockets Layer (SSLv3) and Transport Layer Security (TLSv1) protocols. This module provides everything necessary to request a signed certificate from a trusted certificate signing authority to run a secure web server on &os;. In &os;, the mod_ssl module is enabled by default in both the package and the port. The available configuration directives are explained at http://httpd.apache.org/docs/current/mod/mod_ssl.html. <filename>mod_perl</filename> mod_perl Perl The mod_perl module makes it possible to write Apache modules in Perl. In addition, the persistent interpreter embedded in the server avoids the overhead of starting an external interpreter and the penalty of Perl start-up time. mod_perl can be installed using the www/mod_perl2 package or port. Documentation for using this module can be found at http://perl.apache.org/docs/2.0/index.html. <filename>mod_php</filename> Tom Rhodes Written by mod_php PHP PHP: Hypertext Preprocessor (PHP) is a general-purpose scripting language that is especially suited for web development. Capable of being embedded into HTML, its syntax draws upon C, &java;, and Perl with the intention of allowing web developers to write dynamically generated webpages quickly. To gain support for PHP5 for the Apache web server, install the www/mod_php56 package or port. This will install and configure the modules required to support dynamic PHP applications. The installation will automatically add this line to /usr/local/etc/apache24/httpd.conf: LoadModule php5_module libexec/apache24/libphp5.so Then, perform a graceful restart to load the PHP module: &prompt.root; apachectl graceful The PHP support provided by www/mod_php56 is limited. Additional support can be installed using the lang/php56-extensions port which provides a menu-driven interface to the available PHP extensions. Alternatively, individual extensions can be installed using the appropriate port. For instance, to add PHP support for the MySQL database server, install databases/php56-mysql.
After installing an extension, the Apache server must be reloaded to pick up the new configuration changes: &prompt.root; apachectl graceful Dynamic Websites web servers dynamic In addition to mod_perl and mod_php, other languages and frameworks are available for creating dynamic web content. These include Django and Ruby on Rails. Django Python Django Django is a BSD-licensed framework designed to allow developers to write high-performance, elegant web applications quickly. It provides an object-relational mapper so that data types are developed as Python objects. A rich dynamic database-access API is provided for those objects without the developer ever having to write SQL. It also provides an extensible template system so that the logic of the application is separated from the HTML presentation. Django depends on mod_python and an SQL database engine. In &os;, the www/py-django port automatically installs mod_python and supports the PostgreSQL, MySQL, or SQLite databases, with the default being SQLite. To change the database engine, type make config within /usr/ports/www/py-django, then install the port. Once Django is installed, the application will need a project directory along with the Apache configuration in order to use the embedded Python interpreter. This interpreter is used to call the application for specific URLs on the site. To configure Apache to pass requests for certain URLs to the web application, add the following to httpd.conf, specifying the full path to the project directory: <Location "/"> SetHandler python-program PythonPath "['/dir/to/the/django/packages/'] + sys.path" PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings PythonAutoReload On PythonDebug On </Location> Refer to https://docs.djangoproject.com for more information on how to use Django. Ruby on Rails Ruby on Rails Ruby on Rails is another open source web framework that provides a full development stack. It is optimized to make web developers more productive and capable of writing powerful applications quickly. On &os;, it can be installed using the www/rubygem-rails package or port. Refer to http://guides.rubyonrails.org for more information on how to use Ruby on Rails. File Transfer Protocol (<acronym>FTP</acronym>) FTP servers The File Transfer Protocol (FTP) provides users with a simple way to transfer files to and from an FTP server. &os; includes FTP server software, ftpd, in the base system. &os; provides several configuration files for controlling access to the FTP server. This section summarizes these files. Refer to &man.ftpd.8; for more details about the built-in FTP server. Configuration The most important configuration step is deciding which accounts will be allowed access to the FTP server. A &os; system has a number of system accounts which should not be allowed FTP access. The list of users disallowed any FTP access can be found in /etc/ftpusers. By default, it includes system accounts. Additional users that should not be allowed access to FTP can be added. In some cases, it may be desirable to restrict the access of some users without preventing them completely from using FTP. This can be accomplished by creating /etc/ftpchroot as described in &man.ftpchroot.5;. This file lists users and groups subject to FTP access restrictions. FTP anonymous To enable anonymous FTP access to the server, create a user named ftp on the &os; system. Users will then be able to log on to the FTP server with a username of ftp or anonymous.
When prompted for the password, any input will be accepted, but by convention, an email address should be used as the password. The FTP server will call &man.chroot.2; when an anonymous user logs in, to restrict access to only the home directory of the ftp user. There are two text files that can be created to specify welcome messages to be displayed to FTP clients. The contents of /etc/ftpwelcome will be displayed to users before they reach the login prompt. After a successful login, the contents of /etc/ftpmotd will be displayed. Note that the path to this file is relative to the login environment, so the contents of ~ftp/etc/ftpmotd would be displayed for anonymous users. Once the FTP server has been configured, set the appropriate variable in /etc/rc.conf to start the service during boot: ftpd_enable="YES" To start the service now: &prompt.root; service ftpd start Test the connection to the FTP server by typing: &prompt.user; ftp localhost syslog log files FTP The ftpd daemon uses &man.syslog.3; to log messages. By default, the system log daemon will write messages related to FTP in /var/log/xferlog. The location of the FTP log can be modified by changing the following line in /etc/syslog.conf: ftp.info /var/log/xferlog FTP anonymous Be aware of the potential problems involved with running an anonymous FTP server. In particular, think twice about allowing anonymous users to upload files. It may turn out that the FTP site becomes a forum for the trade of unlicensed commercial software or worse. If anonymous FTP uploads are required, then verify the permissions so that these files cannot be read by other anonymous users until they have been reviewed by an administrator. File and Print Services for µsoft.windows; Clients (Samba) Samba server Microsoft Windows file server Windows clients print server Windows clients Samba is a popular open source software package that provides file and print services using the SMB/CIFS protocol. This protocol is built into µsoft.windows; systems. It can be added to non-µsoft.windows; systems by installing the Samba client libraries. The protocol allows clients to access shared data and printers. These shares can be mapped as a local disk drive and shared printers can be used as if they were local printers. On &os;, the Samba client libraries can be installed using the net/samba48 port or package. The client provides the ability for a &os; system to access SMB/CIFS shares in a µsoft.windows; network. A &os; system can also be configured to act as a Samba server by installing the same net/samba48 port or package. This allows the administrator to create SMB/CIFS shares on the &os; system which can be accessed by clients running µsoft.windows; or the Samba client libraries. Server Configuration Samba is configured in /usr/local/etc/smb4.conf. This file must be created before Samba can be used. A simple smb4.conf to share directories and printers with &windows; clients in a workgroup is shown here. For more complex setups involving LDAP or Active Directory, it is easier to use &man.samba-tool.8; to create the initial smb4.conf. 
[global] workgroup = WORKGROUP server string = Samba Server Version %v netbios name = ExampleMachine wins support = Yes security = user passdb backend = tdbsam # Example: share /usr/src accessible only to 'developer' user [src] path = /usr/src valid users = developer writable = yes browsable = yes read only = no guest ok = no public = no create mask = 0666 directory mask = 0755 Global Settings Settings that describe the network are added in /usr/local/etc/smb4.conf: workgroup The name of the workgroup to be served. netbios name The NetBIOS name by which a Samba server is known. By default, it is the same as the first component of the host's DNS name. server string The string that will be displayed in the output of net view and some other networking tools that seek to display descriptive text about the server. wins support Whether Samba will act as a WINS server. Do not enable support for WINS on more than one server on the network. Security Settings The most important settings in /usr/local/etc/smb4.conf are the security model and the backend password format. These directives control the options: security The most common settings are security = share and security = user. If the clients use usernames that are the same as their usernames on the &os; machine, user level security should be used. This is the default security policy and it requires clients to first log on before they can access shared resources. In share level security, clients do not need to log onto the server with a valid username and password before attempting to connect to a shared resource. This was the default security model for older versions of Samba. passdb backend NIS+ LDAP SQL database Samba has several different backend authentication models. Clients may be authenticated with LDAP, NIS+, an SQL database, or a modified password file. The recommended authentication method, tdbsam, is ideal for simple networks and is covered here. For larger or more complex networks, ldapsam is recommended. smbpasswd was the former default and is now obsolete. <application>Samba</application> Users &os; user accounts must be mapped to the SambaSAMAccount database for &windows; clients to access the share. Map existing &os; user accounts using &man.pdbedit.8;: &prompt.root; pdbedit -a username This section has only mentioned the most commonly used settings. Refer to the Official Samba HOWTO for additional information about the available configuration options. Starting <application>Samba</application> To enable Samba at boot time, add the following line to /etc/rc.conf: samba_server_enable="YES" To start Samba now: &prompt.root; service samba_server start Performing sanity check on Samba configuration: OK Starting nmbd. Starting smbd. Samba consists of three separate daemons. Both the nmbd and smbd daemons are started by samba_enable. If winbind name resolution is also required, set: winbindd_enable="YES" Samba can be stopped at any time by typing: &prompt.root; service samba_server stop Samba is a complex software suite with functionality that allows broad integration with µsoft.windows; networks. For more information about functionality beyond the basic configuration described here, refer to http://www.samba.org. Clock Synchronization with NTP NTP ntpd Over time, a computer's clock is prone to drift. This is problematic as many network services require the computers on a network to share the same accurate time. Accurate time is also needed to ensure that file timestamps stay consistent. 
The Network Time Protocol (NTP) is one way to provide clock accuracy in a network. &os; includes &man.ntpd.8; which can be configured to query other NTP servers in order to synchronize the clock on that machine or to provide time services to other computers in the network. The servers which are queried can be local to the network or provided by an ISP. In addition, an online list of publicly accessible NTP servers is available. When choosing a public NTP server, select one that is geographically close and review its usage policy. Choosing several NTP servers is recommended in case one of the servers becomes unreachable or its clock proves unreliable. As ntpd receives responses, it favors reliable servers over the less reliable ones. This section describes how to configure ntpd on &os;. Further documentation can be found in /usr/share/doc/ntp/ in HTML format. <acronym>NTP</acronym> Configuration NTP ntp.conf On &os;, the built-in ntpd can be used to synchronize a system's clock. To enable ntpd at boot time, add ntpd_enable="YES" to /etc/rc.conf. Additional variables can be specified in /etc/rc.conf. Refer to &man.rc.conf.5; and &man.ntpd.8; for details. This application reads /etc/ntp.conf to determine which NTP servers to query. Here is a simple example of an /etc/ntp.conf: Sample <filename>/etc/ntp.conf</filename> server ntplocal.example.com prefer server timeserver.example.org server ntp2a.example.net driftfile /var/db/ntp.drift The format of this file is described in &man.ntp.conf.5;. The server option specifies which servers to query, with one server listed on each line. If a server entry includes prefer, that server is preferred over other servers. A response from a preferred server will be discarded if it differs significantly from other servers' responses; otherwise it will be used. The prefer argument should only be used for NTP servers that are known to be highly accurate, such as those with special time monitoring hardware. The driftfile entry specifies which file is used to store the system clock's frequency offset. ntpd uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time. This file also stores information about previous responses from NTP servers. Since this file contains internal information for NTP, it should not be modified. By default, an NTP server is accessible to any network host. The restrict option in /etc/ntp.conf can be used to control which systems can access the server. For example, to deny all machines from accessing the NTP server, add the following line to /etc/ntp.conf: restrict default ignore This will also prevent access from other NTP servers. If there is a need to synchronize with an external NTP server, allow only that specific server. Refer to &man.ntp.conf.5; for more information. To allow machines within the network to synchronize their clocks with the server, but ensure they are not allowed to configure the server or be used as peers to synchronize against, instead use: restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap where 192.168.1.0 is the local network address and 255.255.255.0 is the network's subnet mask. Multiple restrict entries are supported. For more details, refer to the Access Control Support subsection of &man.ntp.conf.5;. 
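As a sketch of the earlier advice to allow only a specific external server, the following hypothetical entries deny everything by default but still accept time packets from a single upstream server, here the placeholder address 203.0.113.5: restrict default ignore restrict 203.0.113.5 nomodify notrap The same address must also appear in a server entry for synchronization to occur.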
Once ntpd_enable="YES" has been added to /etc/rc.conf, ntpd can be started now without rebooting the system by typing: &prompt.root; service ntpd start Using <acronym>NTP</acronym> with a <acronym>PPP</acronym> Connection ntpd does not need a permanent connection to the Internet to function properly. However, if a PPP connection is configured to dial out on demand, NTP traffic should be prevented from triggering a dial out or keeping the connection alive. This can be configured with filter directives in /etc/ppp/ppp.conf. For example: set filter dial 0 deny udp src eq 123 # Prevent NTP traffic from initiating dial out set filter dial 1 permit 0 0 set filter alive 0 deny udp src eq 123 # Prevent incoming NTP traffic from keeping the connection open set filter alive 1 deny udp dst eq 123 # Prevent outgoing NTP traffic from keeping the connection open set filter alive 2 permit 0/0 0/0 For more details, refer to the PACKET FILTERING section in &man.ppp.8; and the examples in /usr/share/examples/ppp/. Some Internet access providers block low-numbered ports, preventing NTP from functioning since replies never reach the machine. <acronym>iSCSI</acronym> Initiator and Target Configuration iSCSI is a way to share storage over a network. Unlike NFS, which works at the file system level, iSCSI works at the block device level. In iSCSI terminology, the system that shares the storage is known as the target. The storage can be a physical disk, or an area representing multiple disks or a portion of a physical disk. For example, if the disk(s) are formatted with ZFS, a zvol can be created to use as the iSCSI storage. The clients which access the iSCSI storage are called initiators. To initiators, the storage available through iSCSI appears as a raw, unformatted disk known as a LUN. Device nodes for the disk appear in /dev/ and the device must be separately formatted and mounted. &os; provides a native, kernel-based iSCSI target and initiator. This section describes how to configure a &os; system as a target or an initiator. Configuring an <acronym>iSCSI</acronym> Target To configure an iSCSI target, create the /etc/ctl.conf configuration file, add a line to /etc/rc.conf to make sure the &man.ctld.8; daemon is automatically started at boot, and then start the daemon. The following is an example of a simple /etc/ctl.conf configuration file. Refer to &man.ctl.conf.5; for a more complete description of this file's available options. portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group no-authentication portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The first entry defines the pg0 portal group. Portal groups define which network addresses the &man.ctld.8; daemon will listen on. The discovery-auth-group no-authentication entry indicates that any initiator is allowed to perform iSCSI target discovery without authentication. Lines three and four configure &man.ctld.8; to listen on all IPv4 (listen 0.0.0.0) and IPv6 (listen [::]) addresses on the default port of 3260. It is not necessary to define a portal group as there is a built-in portal group called default. In this case, the difference between default and pg0 is that with default, target discovery is always denied, while with pg0, it is always allowed. The second entry defines a single target. Target has two possible meanings: a machine serving iSCSI or a named group of LUNs. 
This example uses the latter meaning, where iqn.2012-06.com.example:target0 is the target name. This target name is suitable for testing purposes. For actual use, change com.example to the real domain name, reversed. The 2012-06 represents the year and month of acquiring control of that domain name, and target0 can be any value. Any number of targets can be defined in this configuration file. The auth-group no-authentication line allows all initiators to connect to the specified target and portal-group pg0 makes the target reachable through the pg0 portal group. The next section defines the LUN. To the initiator, each LUN will be visible as a separate disk device. Multiple LUNs can be defined for each target. Each LUN is identified by a number, where LUN 0 is mandatory. The path /data/target0-0 line defines the full path to a file or zvol backing the LUN. That path must exist before starting &man.ctld.8;. The second line is optional and specifies the size of the LUN. Next, to make sure the &man.ctld.8; daemon is started at boot, add this line to /etc/rc.conf: ctld_enable="YES" To start &man.ctld.8; now, run this command: &prompt.root; service ctld start As the &man.ctld.8; daemon is started, it reads /etc/ctl.conf. If this file is edited after the daemon starts, use this command so that the changes take effect immediately: &prompt.root; service ctld reload Authentication The previous example is inherently insecure as it uses no authentication, granting anyone full access to all targets. To require a username and password to access targets, modify the configuration as follows: auth-group ag0 { chap username1 secretsecret chap username2 anothersecret } portal-group pg0 { discovery-auth-group no-authentication listen 0.0.0.0 listen [::] } target iqn.2012-06.com.example:target0 { auth-group ag0 portal-group pg0 lun 0 { path /data/target0-0 size 4G } } The auth-group section defines username and password pairs. An initiator trying to connect to iqn.2012-06.com.example:target0 must first specify a defined username and secret. However, target discovery is still permitted without authentication. To require target discovery authentication, set discovery-auth-group to a defined auth-group name instead of no-authentication. It is common to define a single exported target for every initiator. As a shorthand for the syntax above, the username and password can be specified directly in the target entry: target iqn.2012-06.com.example:target0 { portal-group pg0 chap username1 secretsecret lun 0 { path /data/target0-0 size 4G } } Configuring an <acronym>iSCSI</acronym> Initiator The iSCSI initiator described in this section is supported starting with &os; 10.0-RELEASE. To use the iSCSI initiator available in older versions, refer to &man.iscontrol.8;. The iSCSI initiator requires that the &man.iscsid.8; daemon is running. This daemon does not use a configuration file. To start it automatically at boot, add this line to /etc/rc.conf: iscsid_enable="YES" To start &man.iscsid.8; now, run this command: &prompt.root; service iscsid start Connecting to a target can be done with or without an /etc/iscsi.conf configuration file. This section demonstrates both types of connections. Connecting to a Target Without a Configuration File To connect an initiator to a single target, specify the IP address of the portal and the name of the target: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 To verify if the connection succeeded, run iscsictl without any arguments. 
The output should look similar to this: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Connected: da0 In this example, the iSCSI session was successfully established, with /dev/da0 representing the attached LUN. If the iqn.2012-06.com.example:target0 target exports more than one LUN, multiple device nodes will be shown in that section of the output: Connected: da0 da1 da2. Any errors will be reported in the output, as well as the system logs. For example, this message usually means that the &man.iscsid.8; daemon is not running: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Waiting for iscsid(8) The following message suggests a networking problem, such as a wrong IP address or port: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.11 Connection refused This message means that the specified target name is wrong: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Not found This message means that the target requires authentication: Target name Target portal State iqn.2012-06.com.example:target0 10.10.10.10 Authentication failed To specify a CHAP username and secret, use this syntax: &prompt.root; iscsictl -A -p 10.10.10.10 -t iqn.2012-06.com.example:target0 -u user -s secretsecret Connecting to a Target with a Configuration File To connect using a configuration file, create /etc/iscsi.conf with contents like this: t0 { TargetAddress = 10.10.10.10 TargetName = iqn.2012-06.com.example:target0 AuthMethod = CHAP chapIName = user chapSecret = secretsecret } The t0 specifies a nickname for the configuration file section. It will be used by the initiator to specify which configuration to use. The other lines specify the parameters to use during connection. The TargetAddress and TargetName are mandatory, whereas the other options are optional. In this example, the CHAP username and secret are shown. To connect to the defined target, specify the nickname: &prompt.root; iscsictl -An t0 Alternately, to connect to all targets defined in the configuration file, use: &prompt.root; iscsictl -Aa To make the initiator automatically connect to all targets in /etc/iscsi.conf, add the following to /etc/rc.conf: iscsictl_enable="YES" iscsictl_flags="-Aa"
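Once the connection is established, the LUN behaves like any other disk. As a minimal sketch, assuming the LUN appeared as /dev/da0 as in the example above, it can be formatted with UFS and mounted: &prompt.root; newfs /dev/da0 &prompt.root; mount /dev/da0 /mnt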
Index: head/en_US.ISO8859-1/books/handbook/security/chapter.xml =================================================================== --- head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 53261) +++ head/en_US.ISO8859-1/books/handbook/security/chapter.xml (revision 53262) @@ -1,4132 +1,4132 @@ Security Tom Rhodes Rewritten by security Synopsis Security, whether physical or virtual, is a topic so broad that an entire industry has evolved around it. Hundreds of standard practices have been authored about how to secure systems and networks, and as a user of &os;, understanding how to protect against attacks and intruders is a must. In this chapter, several fundamentals and techniques will be discussed. The &os; system comes with multiple layers of security, and many more third-party utilities may be added to enhance security. After reading this chapter, you will know: Basic &os; system security concepts. The various crypt mechanisms available in &os;. How to set up one-time password authentication. How to configure TCP Wrapper for use with &man.inetd.8;. How to set up Kerberos on &os;. How to configure IPsec and create a VPN. How to configure and use OpenSSH on &os;. How to use file system ACLs. How to use pkg to audit third-party software packages installed from the Ports Collection. How to utilize &os; security advisories. What Process Accounting is and how to enable it on &os;. How to control user resources using login classes or the resource limits database. Before reading this chapter, you should: Understand basic &os; and Internet concepts. Additional security topics are covered elsewhere in this Handbook. For example, Mandatory Access Control is discussed in and Internet firewalls are discussed in . Introduction Security is everyone's responsibility. A weak entry point in any system could allow intruders to gain access to critical information and wreak havoc on an entire network. One of the core principles of information security is the CIA triad, which stands for the Confidentiality, Integrity, and Availability of information systems. The CIA triad is a bedrock concept of computer security as customers and users expect their data to be protected. For example, a customer expects that their credit card information is securely stored (confidentiality), that their orders are not changed behind the scenes (integrity), and that they have access to their order information at all times (availability). To provide CIA, security professionals apply a defense in depth strategy. The idea of defense in depth is to add several layers of security to prevent one single layer failing and the entire security system collapsing. For example, a system administrator cannot simply turn on a firewall and consider the network or system secure. One must also audit accounts, check the integrity of binaries, and ensure malicious tools are not installed. To implement an effective security strategy, one must understand threats and how to defend against them. What is a threat as it pertains to computer security? Threats are not limited to remote attackers who attempt to access a system without permission. Threats also include employees, malicious software, unauthorized network devices, natural disasters, security vulnerabilities, and even competing corporations. Systems and networks can be accessed without permission, sometimes by accident, or by remote attackers, and in some cases, via corporate espionage or former employees.
As a user, it is important to prepare for and admit when a mistake has led to a security breach and report possible issues to the security team. As an administrator, it is important to know of the threats and be prepared to mitigate them. When applying security to systems, it is recommended to start by securing the basic accounts and system configuration, and then to secure the network layer so that it adheres to the system policy and the organization's security procedures. Many organizations already have a security policy that covers the configuration of technology devices. The policy should include the security configuration of workstations, desktops, mobile devices, phones, production servers, and development servers. In many cases, standard operating procedures (SOPs) already exist. When in doubt, ask the security team. The rest of this introduction describes how some of these basic security configurations are performed on a &os; system. The rest of this chapter describes some specific tools which can be used when implementing a security policy on a &os; system. Preventing Logins In securing a system, a good starting point is an audit of accounts. Ensure that root has a strong password and that this password is not shared. Disable any accounts that do not need login access. To deny login access to accounts, two methods exist. The first is to lock the account. This example locks the toor account: &prompt.root; pw lock toor The second method is to prevent login access by changing - the shell to /sbin/nologin. Only the + the shell to /usr/sbin/nologin. Only the superuser can change the shell for other users: &prompt.root; chsh -s /usr/sbin/nologin toor The /usr/sbin/nologin shell prevents the system from assigning a shell to the user when they attempt to login. Permitted Account Escalation In some cases, system administration needs to be shared with other users. &os; has two methods to handle this. The first one, which is not recommended, is a shared root password used by members of the wheel group. With this method, a user types su and enters the password for wheel whenever superuser access is needed. The user should then type exit to leave privileged access after finishing the commands that required administrative access. To add a user to this group, edit /etc/group and add the user to the end of the wheel entry. The user must be separated by a comma character with no space. The second, and recommended, method to permit privilege escalation is to install the security/sudo package or port. This software provides additional auditing, more fine-grained user control, and can be configured to lock users into running only the specified privileged commands. After installation, use visudo to edit /usr/local/etc/sudoers. This example creates a new webadmin group, adds the trhodes account to that group, and configures that group access to restart apache24: &prompt.root; pw groupadd webadmin -M trhodes -g 6000 &prompt.root; visudo %webadmin ALL=(ALL) /usr/sbin/service apache24 * Password Hashes Passwords are a necessary evil of technology. When they must be used, they should be complex and a powerful hash mechanism should be used to encrypt the version that is stored in the password database. &os; supports the DES, MD5, SHA256, SHA512, and Blowfish hash algorithms in its crypt() library. The default of SHA512 should not be changed to a less secure hashing algorithm, but can be changed to the more secure Blowfish algorithm. 
Blowfish is not part of AES and is not considered compliant with any Federal Information Processing Standards (FIPS). Its use may not be permitted in some environments. To determine which hash algorithm is used to encrypt a user's password, the superuser can view the hash for the user in the &os; password database. Each hash starts with a symbol which indicates the type of hash mechanism used to encrypt the password. If DES is used, there is no beginning symbol. For MD5, the symbol is $1$. For SHA256, the symbol is $5$, and for SHA512 it is $6$. For Blowfish, the symbol is $2a$. In this example, the password for dru is hashed using the default SHA512 algorithm as the hash starts with $6$. Note that the encrypted hash, not the password itself, is stored in the password database: &prompt.root; grep dru /etc/master.passwd dru:$6$pzIjSvCAn.PBYQBA$PXpSeWPx3g5kscj3IMiM7tUEUSPmGexxta.8Lt9TGSi2lNQqYGKszsBPuGME0:1001:1001::0:0:dru:/usr/home/dru:/bin/csh The hash mechanism is set in the user's login class. For this example, the user is in the default login class and the hash algorithm is set with this line in /etc/login.conf: :passwd_format=sha512:\ To change the algorithm to Blowfish, modify that line to look like this: :passwd_format=blf:\ Then run cap_mkdb /etc/login.conf as described in . Note that this change will not affect any existing password hashes. This means that all passwords should be re-hashed by asking users to run passwd in order to change their password. For remote logins, two-factor authentication should be used. An example of two-factor authentication is something you have, such as a key, and something you know, such as the passphrase for that key. Since OpenSSH is part of the &os; base system, all network logins should be over an encrypted connection and use key-based authentication instead of passwords. For more information, refer to . Kerberos users may need to make additional changes to implement OpenSSH in their network. These changes are described in . Password Policy Enforcement Enforcing a strong password policy for local accounts is a fundamental aspect of system security. In &os;, password length, password strength, and password complexity can be implemented using built-in Pluggable Authentication Modules (PAM). This section demonstrates how to configure the minimum and maximum password length and the enforcement of mixed characters using the pam_passwdqc.so module. This module is enforced when a user changes their password. To configure this module, become the superuser and uncomment the line containing pam_passwdqc.so in /etc/pam.d/passwd. Then, edit that line to match the password policy: password requisite pam_passwdqc.so min=disabled,disabled,disabled,12,10 similar=deny retry=3 enforce=users This example sets several requirements for new passwords. The min setting controls the minimum password length. It has five values because this module defines five different types of passwords based on their complexity. Complexity is defined by the type of characters that must exist in a password, such as letters, numbers, symbols, and case. The types of passwords are described in &man.pam.passwdqc.8;. In this example, the first three types of passwords are disabled, meaning that passwords that meet those complexity requirements will not be accepted, regardless of their length. The 12 sets a minimum password policy of at least twelve characters, if the password also contains characters with three types of complexity.
The 10 sets the password policy to also allow passwords of at least ten characters, if the password contains characters with four types of complexity. The similar setting denies passwords that are similar to the user's previous password. The retry setting provides a user with three opportunities to enter a new password. Once this file is saved, a user changing their password will see a message similar to the following: &prompt.user; passwd Changing local password for trhodes Old Password: You can now choose the new password. A valid password should be a mix of upper and lower case letters, digits and other characters. You can use a 12 character long password with characters from at least 3 of these 4 classes, or a 10 character long password containing characters from all the classes. Characters that form a common pattern are discarded by the check. Alternatively, if no one else can see your terminal now, you can pick this as your password: "trait-useful&knob". Enter new password: If a password that does not match the policy is entered, it will be rejected with a warning and the user will have an opportunity to try again, up to the configured number of retries. Most password policies require passwords to expire after so many days. To set a password age time in &os;, set for the user's login class in /etc/login.conf. The default login class contains an example: # :passwordtime=90d:\ So, to set an expiry of 90 days for this login class, remove the comment symbol (#), save the edit, and run cap_mkdb /etc/login.conf. To set the expiration on individual users, pass an expiration date or the number of days to expiry and a username to pw: &prompt.root; pw usermod -p 30-apr-2015 -n trhodes As seen here, an expiration date is set in the form of day, month, and year. For more information, see &man.pw.8;. Detecting Rootkits A rootkit is any unauthorized software that attempts to gain root access to a system. Once installed, this malicious software will normally open up another avenue of entry for an attacker. Realistically, once a system has been compromised by a rootkit and an investigation has been performed, the system should be reinstalled from scratch. There is tremendous risk that even the most prudent security or systems engineer will miss something an attacker left behind. A rootkit does do one thing useful for administrators: once detected, it is a sign that a compromise happened at some point. But, these types of applications tend to be very well hidden. This section demonstrates a tool that can be used to detect rootkits, security/rkhunter. After installation of this package or port, the system may be checked using the following command. It will produce a lot of information and will require some manual pressing of ENTER: &prompt.root; rkhunter -c After the process completes, a status message will be printed to the screen. This message will include the amount of files checked, suspect files, possible rootkits, and more. During the check, some generic security warnings may be produced about hidden files, the OpenSSH protocol selection, and known vulnerable versions of installed software. These can be handled now or after a more detailed analysis has been performed. Every administrator should know what is running on the systems they are responsible for. Third-party tools like rkhunter and sysutils/lsof, and native commands such as netstat and ps, can show a great deal of information on the system. Take notes on what is normal, ask questions when something seems out of place, and be paranoid. 
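One way to build that baseline of what is normal is to record which processes are listening for network connections and compare the list against the services that are intentionally enabled. For example, the base system &man.sockstat.1; utility can list listening sockets; the exact flags shown here are only one possible invocation: &prompt.root; sockstat -4 -6 -l Any listener that cannot be matched to a known, deliberately enabled service deserves further investigation.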
While preventing a compromise is ideal, detecting a compromise is a must. Binary Verification Verification of system files and binaries is important because it provides the system administration and security teams information about system changes. A software application that monitors the system for changes is called an Intrusion Detection System (IDS). &os; provides native support for a basic IDS system. While the nightly security emails will notify an administrator of changes, the information is stored locally and there is a chance that a malicious user could modify this information in order to hide their changes to the system. As such, it is recommended to create a separate set of binary signatures and store them on a read-only, root-owned directory or, preferably, on a removable USB disk or remote rsync server. The built-in mtree utility can be used to generate a specification of the contents of a directory. A seed, or a numeric constant, is used to generate the specification and is required to check that the specification has not changed. This makes it possible to determine if a file or binary has been modified. Since the seed value is unknown by an attacker, faking or checking the checksum values of files will be difficult to impossible. The following example generates a set of SHA256 hashes, one for each system binary in /bin, and saves those values to a hidden file in root's home directory, /root/.bin_chksum_mtree: &prompt.root; mtree -s 3483151339707503 -c -K cksum,sha256digest -p /bin > /root/.bin_chksum_mtree &prompt.root; mtree: /bin checksum: 3427012225 The 3483151339707503 represents the seed. This value should be remembered, but not shared. Viewing /root/.bin_cksum_mtree should yield output similar to the following: # user: root # machine: dreadnaught # tree: /bin # date: Mon Feb 3 10:19:53 2014 # . /set type=file uid=0 gid=0 mode=0555 nlink=1 flags=none . type=dir mode=0755 nlink=2 size=1024 \ time=1380277977.000000000 \133 nlink=2 size=11704 time=1380277977.000000000 \ cksum=484492447 \ sha256digest=6207490fbdb5ed1904441fbfa941279055c3e24d3a4049aeb45094596400662a cat size=12096 time=1380277975.000000000 cksum=3909216944 \ sha256digest=65ea347b9418760b247ab10244f47a7ca2a569c9836d77f074e7a306900c1e69 chflags size=8168 time=1380277975.000000000 cksum=3949425175 \ sha256digest=c99eb6fc1c92cac335c08be004a0a5b4c24a0c0ef3712017b12c89a978b2dac3 chio size=18520 time=1380277975.000000000 cksum=2208263309 \ sha256digest=ddf7c8cb92a58750a675328345560d8cc7fe14fb3ccd3690c34954cbe69fc964 chmod size=8640 time=1380277975.000000000 cksum=2214429708 \ sha256digest=a435972263bf814ad8df082c0752aa2a7bdd8b74ff01431ccbd52ed1e490bbe7 The machine's hostname, the date and time the specification was created, and the name of the user who created the specification are included in this report. There is a checksum, size, time, and SHA256 digest for each binary in the directory. To verify that the binary signatures have not changed, compare the current contents of the directory to the previously generated specification, and save the results to a file. This command requires the seed that was used to generate the original specification: &prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output &prompt.root; mtree: /bin checksum: 3427012225 This should produce the same checksum for /bin that was produced when the specification was created. If no changes have occurred to the binaries in this directory, the /root/.bin_chksum_output output file will be empty. 
To simulate a change, change the date on /bin/cat using touch and run the verification command again: &prompt.root; touch /bin/cat &prompt.root; mtree -s 3483151339707503 -p /bin < /root/.bin_chksum_mtree >> /root/.bin_chksum_output &prompt.root; more /root/.bin_chksum_output cat changed modification time expected Fri Sep 27 06:32:55 2013 found Mon Feb 3 10:28:43 2014 It is recommended to create specifications for the directories which contain binaries and configuration files, as well as any directories containing sensitive data. Typically, specifications are created for /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /etc, and /usr/local/etc. More advanced IDS systems exist, such as security/aide. In most cases, mtree provides the functionality administrators need. It is important to keep the seed value and the checksum output hidden from malicious users. More information about mtree can be found in &man.mtree.8;. System Tuning for Security In &os;, many system features can be tuned using sysctl. A few of the security features which can be tuned to prevent Denial of Service (DoS) attacks will be covered in this section. More information about using sysctl, including how to temporarily change values and how to make the changes permanent after testing, can be found in . Any time a setting is changed with sysctl, the chance to cause undesired harm is increased, affecting the availability of the system. All changes should be monitored and, if possible, tried on a testing system before being used on a production system. By default, the &os; kernel boots with a security level of -1. This is called insecure mode because immutable file flags may be turned off and all devices may be read from or written to. The security level will remain at -1 unless it is altered through sysctl or by a setting in the startup scripts. The security level may be increased during system startup by setting kern_securelevel_enable to YES in /etc/rc.conf, and the value of kern_securelevel to the desired security level. See &man.security.7; and &man.init.8; for more information on these settings and the available security levels. Increasing the securelevel can break Xorg and cause other issues. Be prepared to do some debugging. The net.inet.tcp.blackhole and net.inet.udp.blackhole settings can be used to drop incoming SYN packets on closed ports without sending a return RST response. The default behavior is to return an RST to show a port is closed. Changing the default provides some level of protection against ports scans, which are used to determine which applications are running on a system. Set net.inet.tcp.blackhole to 2 and net.inet.udp.blackhole to 1. Refer to &man.blackhole.4; for more information about these settings. The net.inet.icmp.drop_redirect and net.inet.ip.redirect settings help prevent against redirect attacks. A redirect attack is a type of DoS which sends mass numbers of ICMP type 5 packets. Since these packets are not required, set net.inet.icmp.drop_redirect to 1 and set net.inet.ip.redirect to 0. Source routing is a method for detecting and accessing non-routable addresses on the internal network. This should be disabled as non-routable addresses are normally not routable on purpose. To disable this feature, set net.inet.ip.sourceroute and net.inet.ip.accept_sourceroute to 0. When a machine on the network needs to send messages to all hosts on a subnet, an ICMP echo request message is sent to the broadcast address. However, there is no reason for an external host to perform such an action. 
To reject all external broadcast requests, set net.inet.icmp.bmcastecho to 0. Some additional settings are documented in &man.security.7;. One-time Passwords one-time passwords security one-time passwords By default, &os; includes support for One-time Passwords In Everything (OPIE). OPIE is designed to prevent replay attacks, in which an attacker discovers a user's password and uses it to access a system. Since a password is only used once in OPIE, a discovered password is of little use to an attacker. OPIE uses a secure hash and a challenge/response system to manage passwords. The &os; implementation uses the MD5 hash by default. OPIE uses three different types of passwords. The first is the usual &unix; or Kerberos password. The second is the one-time password which is generated by opiekey. The third type of password is the secret password which is used to generate one-time passwords. The secret password has nothing to do with, and should be different from, the &unix; password. There are two other pieces of data that are important to OPIE. One is the seed or key, consisting of two letters and five digits. The other is the iteration count, a number between 1 and 100. OPIE creates the one-time password by concatenating the seed and the secret password, applying the MD5 hash as many times as specified by the iteration count, and turning the result into six short English words which represent the one-time password. The authentication system keeps track of the last one-time password used, and the user is authenticated if the hash of the user-provided password is equal to the previous password. Because a one-way hash is used, it is impossible to generate future one-time passwords if a successfully used password is captured. The iteration count is decremented after each successful login to keep the user and the login program in sync. When the iteration count gets down to 1, OPIE must be reinitialized. There are a few programs involved in this process. A one-time password, or a consecutive list of one-time passwords, is generated by passing an iteration count, a seed, and a secret password to &man.opiekey.1;. In addition to initializing OPIE, &man.opiepasswd.1; is used to change passwords, iteration counts, or seeds. The relevant credential files in /etc/opiekeys are examined by &man.opieinfo.1; which prints out the invoking user's current iteration count and seed. This section describes four different sorts of operations. The first is how to set up one-time-passwords for the first time over a secure connection. The second is how to use opiepasswd over an insecure connection. The third is how to log in over an insecure connection. The fourth is how to generate a number of keys which can be written down or printed out to use at insecure locations. Initializing <acronym>OPIE</acronym> To initialize OPIE for the first time, run this command from a secure location: &prompt.user; opiepasswd -c Adding unfurl: Only use this method from the console; NEVER from remote. If you are using telnet, xterm, or a dial-in, type ^C now or exit with no password. Then run opiepasswd without the -c parameter. Using MD5 to compute responses. Enter new secret pass phrase: Again new secret pass phrase: ID unfurl OTP key is 499 to4268 MOS MALL GOAT ARM AVID COED The sets console mode which assumes that the command is being run from a secure location, such as a computer under the user's control or a SSH session to a computer under the user's control. 
When prompted, enter the secret password which will be used to generate the one-time login keys. This password should be difficult to guess and should be different than the password which is associated with the user's login account. It must be between 10 and 127 characters long. Remember this password. The ID line lists the login name (unfurl), default iteration count (499), and default seed (to4268). When logging in, the system will remember these parameters and display them, meaning that they do not have to be memorized. The last line lists the generated one-time password which corresponds to those parameters and the secret password. At the next login, use this one-time password. Insecure Connection Initialization To initialize or change the secret password on an insecure system, a secure connection is needed to some place where opiekey can be run. This might be a shell prompt on a trusted machine. An iteration count is needed, where 100 is probably a good value, and the seed can either be specified or the randomly-generated one used. On the insecure connection, the machine being initialized, use &man.opiepasswd.1;: &prompt.user; opiepasswd Updating unfurl: You need the response from an OTP generator. Old secret pass phrase: otp-md5 498 to4268 ext Response: GAME GAG WELT OUT DOWN CHAT New secret pass phrase: otp-md5 499 to4269 Response: LINE PAP MILK NELL BUOY TROY ID mark OTP key is 499 gr4269 LINE PAP MILK NELL BUOY TROY To accept the default seed, press Return. Before entering an access password, move over to the secure connection and give it the same parameters: &prompt.user; opiekey 498 to4268 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: GAME GAG WELT OUT DOWN CHAT Switch back over to the insecure connection, and copy the generated one-time password over to the relevant program. Generating a Single One-time Password After initializing OPIE and logging in, a prompt like this will be displayed: &prompt.user; telnet example.com Trying 10.0.0.1... Connected to example.com Escape character is '^]'. FreeBSD/i386 (example.com) (ttypa) login: <username> otp-md5 498 gr4269 ext Password: The OPIE prompts provides a useful feature. If Return is pressed at the password prompt, the prompt will turn echo on and display what is typed. This can be useful when attempting to type in a password by hand from a printout. MS-DOS Windows MacOS At this point, generate the one-time password to answer this login prompt. This must be done on a trusted system where it is safe to run &man.opiekey.1;. There are versions of this command for &windows;, &macos; and &os;. This command needs the iteration count and the seed as command line options. Use cut-and-paste from the login prompt on the machine being logged in to. On the trusted system: &prompt.user; opiekey 498 to4268 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. Enter secret pass phrase: GAME GAG WELT OUT DOWN CHAT Once the one-time password is generated, continue to log in. Generating Multiple One-time Passwords Sometimes there is no access to a trusted machine or secure connection. In this case, it is possible to use &man.opiekey.1; to generate a number of one-time passwords beforehand. For example: &prompt.user; opiekey -n 5 30 zz99999 Using the MD5 algorithm to compute response. Reminder: Do not use opiekey from telnet or dial-in sessions. 
Enter secret pass phrase: <secret password> 26: JOAN BORE FOSS DES NAY QUIT 27: LATE BIAS SLAY FOLK MUCH TRIG 28: SALT TIN ANTI LOON NEAL USE 29: RIO ODIN GO BYE FURY TIC 30: GREW JIVE SAN GIRD BOIL PHI The requests five keys in sequence, and specifies what the last iteration number should be. Note that these are printed out in reverse order of use. The really paranoid might want to write the results down by hand; otherwise, print the list. Each line shows both the iteration count and the one-time password. Scratch off the passwords as they are used. Restricting Use of &unix; Passwords OPIE can restrict the use of &unix; passwords based on the IP address of a login session. The relevant file is /etc/opieaccess, which is present by default. Refer to &man.opieaccess.5; for more information on this file and which security considerations to be aware of when using it. Here is a sample opieaccess: permit 192.168.0.0 255.255.0.0 This line allows users whose IP source address (which is vulnerable to spoofing) matches the specified value and mask, to use &unix; passwords at any time. If no rules in opieaccess are matched, the default is to deny non-OPIE logins. TCP Wrapper TomRhodesWritten by TCP Wrapper TCP Wrapper is a host-based access control system which extends the abilities of . It can be configured to provide logging support, return messages, and connection restrictions for the server daemons under the control of inetd. Refer to &man.tcpd.8; for more information about TCP Wrapper and its features. TCP Wrapper should not be considered a replacement for a properly configured firewall. Instead, TCP Wrapper should be used in conjunction with a firewall and other security enhancements in order to provide another layer of protection in the implementation of a security policy. Initial Configuration To enable TCP Wrapper in &os;, add the following lines to /etc/rc.conf: inetd_enable="YES" inetd_flags="-Ww" Then, properly configure /etc/hosts.allow. Unlike other implementations of TCP Wrapper, the use of hosts.deny is deprecated in &os;. All configuration options should be placed in /etc/hosts.allow. In the simplest configuration, daemon connection policies are set to either permit or block, depending on the options in /etc/hosts.allow. The default configuration in &os; is to allow all connections to the daemons started with inetd. Basic configuration usually takes the form of daemon : address : action, where daemon is the daemon which inetd started, address is a valid hostname, IP address, or an IPv6 address enclosed in brackets ([ ]), and action is either allow or deny. TCP Wrapper uses a first rule match semantic, meaning that the configuration file is scanned from the beginning for a matching rule. When a match is found, the rule is applied and the search process stops. For example, to allow POP3 connections via the mail/qpopper daemon, the following lines should be appended to hosts.allow: # This line is required for POP3 connections: qpopper : ALL : allow Whenever this file is edited, restart inetd: &prompt.root; service inetd restart Advanced Configuration TCP Wrapper provides advanced options to allow more control over the way connections are handled. In some cases, it may be appropriate to return a comment to certain hosts or daemon connections. In other cases, a log entry should be recorded or an email sent to the administrator. Other situations may require the use of a service for local connections only. 
This is all possible through the use of configuration options known as wildcards, expansion characters, and external command execution. Suppose that a situation occurs where a connection should be denied yet a reason should be sent to the host who attempted to establish that connection. That action is possible with . When a connection attempt is made, executes a shell command or script. An example exists in hosts.allow: # The rest of the daemons are protected. ALL : ALL \ : severity auth.info \ : twist /bin/echo "You are not welcome to use %d from %h." In this example, the message You are not allowed to use daemon name from hostname. will be returned for any daemon not configured in hosts.allow. This is useful for sending a reply back to the connection initiator right after the established connection is dropped. Any message returned must be wrapped in quote (") characters. It may be possible to launch a denial of service attack on the server if an attacker floods these daemons with connection requests. Another possibility is to use . Like , implicitly denies the connection and may be used to run external shell commands or scripts. Unlike , will not send a reply back to the host who established the connection. For example, consider the following configuration: # We do not allow connections from example.com: ALL : .example.com \ : spawn (/bin/echo %a from %h attempted to access %d >> \ /var/log/connections.log) \ : deny This will deny all connection attempts from *.example.com and log the hostname, IP address, and the daemon to which access was attempted to /var/log/connections.log. This example uses the substitution characters %a and %h. Refer to &man.hosts.access.5; for the complete list. To match every instance of a daemon, domain, or IP address, use ALL. Another wildcard is PARANOID which may be used to match any host which provides an IP address that may be forged because the IP address differs from its resolved hostname. In this example, all connection requests to Sendmail which have an IP address that varies from its hostname will be denied: # Block possibly spoofed requests to sendmail: sendmail : PARANOID : deny Using the PARANOID wildcard will result in denied connections if the client or server has a broken DNS setup. To learn more about wildcards and their associated functionality, refer to &man.hosts.access.5;. When adding new configuration lines, make sure that any unneeded entries for that daemon are commented out in hosts.allow. <application>Kerberos</application> Tillman Hodgson Contributed by Mark Murray Based on a contribution by Kerberos is a network authentication protocol which was originally created by the Massachusetts Institute of Technology (MIT) as a way to securely provide authentication across a potentially hostile network. The Kerberos protocol uses strong cryptography so that both a client and server can prove their identity without sending any unencrypted secrets over the network. Kerberos can be described as an identity-verifying proxy system and as a trusted third-party authentication system. After a user authenticates with Kerberos, their communications can be encrypted to assure privacy and data integrity. The only function of Kerberos is to provide the secure authentication of users and servers on the network. It does not provide authorization or auditing functions. It is recommended that Kerberos be used with other security methods which provide authorization and audit services. The current version of the protocol is version 5, described in RFC 4120. 
Several free implementations of this protocol are available, covering a wide range of operating systems. MIT continues to develop their Kerberos package. It is commonly used in the US as a cryptography product, and has historically been subject to US export regulations. In &os;, MIT Kerberos is available as the security/krb5 package or port. The Heimdal Kerberos implementation was explicitly developed outside of the US to avoid export regulations. The Heimdal Kerberos distribution is included in the base &os; installation, and another distribution with more configurable options is available as security/heimdal in the Ports Collection. In Kerberos users and services are identified as principals which are contained within an administrative grouping, called a realm. A typical user principal would be of the form user@REALM (realms are traditionally uppercase). This section provides a guide on how to set up Kerberos using the Heimdal distribution included in &os;. For purposes of demonstrating a Kerberos installation, the name spaces will be as follows: The DNS domain (zone) will be example.org. The Kerberos realm will be EXAMPLE.ORG. Use real domain names when setting up Kerberos, even if it will run internally. This avoids DNS problems and assures inter-operation with other Kerberos realms. Setting up a Heimdal <acronym>KDC</acronym> Kerberos5 Key Distribution Center The Key Distribution Center (KDC) is the centralized authentication service that Kerberos provides, the trusted third party of the system. It is the computer that issues Kerberos tickets, which are used for clients to authenticate to servers. Because the KDC is considered trusted by all other computers in the Kerberos realm, it has heightened security concerns. Direct access to the KDC should be limited. While running a KDC requires few computing resources, a dedicated machine acting only as a KDC is recommended for security reasons. To begin setting up a KDC, add these lines to /etc/rc.conf: kdc_enable="YES" kadmind_enable="YES" Next, edit /etc/krb5.conf as follows: [libdefaults] default_realm = EXAMPLE.ORG [realms] EXAMPLE.ORG = { kdc = kerberos.example.org admin_server = kerberos.example.org } [domain_realm] .example.org = EXAMPLE.ORG In this example, the KDC will use the fully-qualified hostname kerberos.example.org. The hostname of the KDC must be resolvable in the DNS. Kerberos can also use the DNS to locate KDCs, instead of a [realms] section in /etc/krb5.conf. For large organizations that have their own DNS servers, the above example could be trimmed to: [libdefaults] default_realm = EXAMPLE.ORG [domain_realm] .example.org = EXAMPLE.ORG With the following lines being included in the example.org zone file: _kerberos._udp IN SRV 01 00 88 kerberos.example.org. _kerberos._tcp IN SRV 01 00 88 kerberos.example.org. _kpasswd._udp IN SRV 01 00 464 kerberos.example.org. _kerberos-adm._tcp IN SRV 01 00 749 kerberos.example.org. _kerberos IN TXT EXAMPLE.ORG In order for clients to be able to find the Kerberos services, they must have either a fully configured /etc/krb5.conf or a minimally configured /etc/krb5.conf and a properly configured DNS server. Next, create the Kerberos database which contains the keys of all principals (users and hosts) encrypted with a master password. It is not required to remember this password as it will be stored in /var/heimdal/m-key; it would be reasonable to use a 45-character random password for this purpose. 
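One possible way to produce such a random string, assuming the base system &man.openssl.1; utility is available, is to encode random bytes as text and use the result as the master password; any equivalent random password generator will do: &prompt.user; openssl rand -base64 32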
To create the master key, run kstash and enter a password: &prompt.root; kstash Master key: xxxxxxxxxxxxxxxxxxxxxxx Verifying password - Master key: xxxxxxxxxxxxxxxxxxxxxxx Once the master key has been created, the database should be initialized. The Kerberos administrative tool &man.kadmin.8; can be used on the KDC in a mode that operates directly on the database, without using the &man.kadmind.8; network service, as kadmin -l. This resolves the chicken-and-egg problem of trying to connect to the database before it is created. At the kadmin prompt, use init to create the realm's initial database: &prompt.root; kadmin -l kadmin> init EXAMPLE.ORG Realm max ticket life [unlimited]: Lastly, while still in kadmin, create the first principal using add. Stick to the default options for the principal for now, as these can be changed later with modify. Type ? at the prompt to see the available options. kadmin> add tillman Max ticket life [unlimited]: Max renewable life [unlimited]: Attributes []: Password: xxxxxxxx Verifying password - Password: xxxxxxxx Next, start the KDC services by running service kdc start and service kadmind start. While there will not be any kerberized daemons running at this point, it is possible to confirm that the KDC is functioning by obtaining a ticket for the principal that was just created: &prompt.user; kinit tillman tillman@EXAMPLE.ORG's Password: Confirm that a ticket was successfully obtained using klist: &prompt.user; klist Credentials cache: FILE:/tmp/krb5cc_1001 Principal: tillman@EXAMPLE.ORG Issued Expires Principal Aug 27 15:37:58 2013 Aug 28 01:37:58 2013 krbtgt/EXAMPLE.ORG@EXAMPLE.ORG The temporary ticket can be destroyed when the test is finished: &prompt.user; kdestroy Configuring a Server to Use <application>Kerberos</application> Kerberos5 enabling services The first step in configuring a server to use Kerberos authentication is to ensure that it has the correct configuration in /etc/krb5.conf. The version from the KDC can be used as-is, or it can be regenerated on the new system. Next, create /etc/krb5.keytab on the server. This is the main part of Kerberizing a service — it corresponds to generating a secret shared between the service and the KDC. The secret is a cryptographic key, stored in a keytab. The keytab contains the server's host key, which allows it and the KDC to verify each others' identity. It must be transmitted to the server in a secure fashion, as the security of the server can be broken if the key is made public. Typically, the keytab is generated on an administrator's trusted machine using kadmin, then securely transferred to the server, e.g., with &man.scp.1;; it can also be created directly on the server if that is consistent with the desired security policy. It is very important that the keytab is transmitted to the server in a secure fashion: if the key is known by some other party, that party can impersonate any user to the server! Using kadmin on the server directly is convenient, because the entry for the host principal in the KDC database is also created using kadmin. Of course, kadmin is a kerberized service; a Kerberos ticket is needed to authenticate to the network service, but to ensure that the user running kadmin is actually present (and their session has not been hijacked), kadmin will prompt for the password to get a fresh ticket. The principal authenticating to the kadmin service must be permitted to use the kadmin interface, as specified in kadmind.acl. 
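As a sketch only, an entry in the KDC's kadmind.acl granting one administrative principal full rights could look like the following; the principal name is an assumption and must be adjusted for the local realm, and the file normally lives alongside the Heimdal database (commonly /var/heimdal/kadmind.acl): tillman/admin@EXAMPLE.ORG all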
See the section titled Remote administration in info heimdal for details on designing access control lists. Instead of enabling remote kadmin access, the administrator could securely connect to the KDC via the local console or &man.ssh.1;, and perform administration locally using kadmin -l. After installing /etc/krb5.conf, use add --random-key in kadmin. This adds the server's host principal to the database, but does not extract a copy of the host principal key to a keytab. To generate the keytab, use ext to extract the server's host principal key to its own keytab: &prompt.root; kadmin kadmin> add --random-key host/myserver.example.org Max ticket life [unlimited]: Max renewable life [unlimited]: Principal expiration time [never]: Password expiration time [never]: Attributes []: kadmin> ext_keytab host/myserver.example.org kadmin> exit Note that ext_keytab stores the extracted key in /etc/krb5.keytab by default. This is good when being run on the server being kerberized, but the --keytab path/to/file argument should be used when the keytab is being extracted elsewhere: &prompt.root; kadmin kadmin> ext_keytab --keytab=/tmp/example.keytab host/myserver.example.org kadmin> exit The keytab can then be securely copied to the server using &man.scp.1; or a removable media. Be sure to specify a non-default keytab name to avoid inserting unneeded keys into the system's keytab. At this point, the server can read encrypted messages from the KDC using its shared key, stored in krb5.keytab. It is now ready for the Kerberos-using services to be enabled. One of the most common such services is &man.sshd.8;, which supports Kerberos via the GSS-API. In /etc/ssh/sshd_config, add the line: GSSAPIAuthentication yes After making this change, &man.sshd.8; must be restarted for the new configuration to take effect: service sshd restart. Configuring a Client to Use <application>Kerberos</application> Kerberos5 configure clients As it was for the server, the client requires configuration in /etc/krb5.conf. Copy the file in place (securely) or re-enter it as needed. Test the client by using kinit, klist, and kdestroy from the client to obtain, show, and then delete a ticket for an existing principal. Kerberos applications should also be able to connect to Kerberos enabled servers. If that does not work but obtaining a ticket does, the problem is likely with the server and not with the client or the KDC. In the case of kerberized &man.ssh.1;, GSS-API is disabled by default, so test using ssh -o GSSAPIAuthentication=yes hostname. When testing a Kerberized application, try using a packet sniffer such as tcpdump to confirm that no sensitive information is sent in the clear. Various Kerberos client applications are available. With the advent of a bridge so that applications using SASL for authentication can use GSS-API mechanisms as well, large classes of client applications can use Kerberos for authentication, from Jabber clients to IMAP clients. .k5login .k5users Users within a realm typically have their Kerberos principal mapped to a local user account. Occasionally, one needs to grant access to a local user account to someone who does not have a matching Kerberos principal. For example, tillman@EXAMPLE.ORG may need access to the local user account webdevelopers. Other principals may also need access to that local account. The .k5login and .k5users files, placed in a user's home directory, can be used to solve this problem. 
For example, if the following .k5login is placed in the home directory of webdevelopers, both principals listed will have access to that account without requiring a shared password: tillman@example.org jdoe@example.org Refer to &man.ksu.1; for more information about .k5users. <acronym>MIT</acronym> Differences The major difference between the MIT and Heimdal implementations is that kadmin has a different, but equivalent, set of commands and uses a different protocol. If the KDC is MIT, the Heimdal version of kadmin cannot be used to administer the KDC remotely, and vice versa. Client applications may also use slightly different command line options to accomplish the same tasks. Following the instructions at http://web.mit.edu/Kerberos/www/ is recommended. Be careful of path issues: the MIT port installs into /usr/local/ by default, and the &os; system applications run instead of the MIT versions if PATH lists the system directories first. When using MIT Kerberos as a KDC on &os;, the following edits should also be made to rc.conf: kerberos5_server="/usr/local/sbin/krb5kdc" kadmind5_server="/usr/local/sbin/kadmind" kerberos5_server_flags="" kerberos5_server_enable="YES" kadmind5_server_enable="YES" <application>Kerberos</application> Tips, Tricks, and Troubleshooting When configuring and troubleshooting Kerberos, keep the following points in mind: When using either Heimdal or MIT Kerberos from ports, ensure that the PATH lists the port's versions of the client applications before the system versions. If all the computers in the realm do not have synchronized time settings, authentication may fail. describes how to synchronize clocks using NTP. If the hostname is changed, the host/ principal must be changed and the keytab updated. This also applies to special keytab entries like the HTTP/ principal used for Apache's www/mod_auth_kerb. All hosts in the realm must be both forward and reverse resolvable in DNS or, at a minimum, exist in /etc/hosts. CNAMEs will work, but the A and PTR records must be correct and in place. The error message for unresolvable hosts is not intuitive: Kerberos5 refuses authentication because Read req failed: Key table entry not found. Some operating systems that act as clients to the KDC do not set the permissions for ksu to be setuid root. This means that ksu does not work. This is a permissions problem, not a KDC error. With MIT Kerberos, to allow a principal to have a ticket life longer than the default lifetime of ten hours, use modify_principal at the &man.kadmin.8; prompt to change the maxlife of both the principal in question and the krbtgt principal. The principal can then use kinit -l to request a ticket with a longer lifetime. When running a packet sniffer on the KDC to aid in troubleshooting while running kinit from a workstation, the Ticket Granting Ticket (TGT) is sent immediately, even before the password is typed. This is because the Kerberos server freely transmits a TGT to any unauthorized request. However, every TGT is encrypted in a key derived from the user's password. When a user types their password, it is not sent to the KDC, it is instead used to decrypt the TGT that kinit already obtained. If the decryption process results in a valid ticket with a valid time stamp, the user has valid Kerberos credentials. These credentials include a session key for establishing secure communications with the Kerberos server in the future, as well as the actual TGT, which is encrypted with the Kerberos server's own key. 
This second layer of encryption allows the Kerberos server to verify the authenticity of each TGT. Host principals can have a longer ticket lifetime. If the user principal has a lifetime of a week but the host being connected to has a lifetime of nine hours, the user cache will have an expired host principal and the ticket cache will not work as expected. When setting up krb5.dict to prevent specific bad passwords from being used as described in &man.kadmind.8;, remember that it only applies to principals that have a password policy assigned to them. The format used in krb5.dict is one string per line. Creating a symbolic link to /usr/share/dict/words might be useful. Mitigating <application>Kerberos</application> Limitations Kerberos5 limitations and shortcomings Since Kerberos is an all or nothing approach, every service enabled on the network must either be modified to work with Kerberos or be otherwise secured against network attacks. This is to prevent user credentials from being stolen and re-used. An example is when Kerberos is enabled on all remote shells but the non-Kerberized POP3 mail server sends passwords in plain text. The KDC is a single point of failure. By design, the KDC must be as secure as its master password database. The KDC should have absolutely no other services running on it and should be physically secure. The danger is high because Kerberos stores all passwords encrypted with the same master key which is stored as a file on the KDC. A compromised master key is not quite as bad as one might fear. The master key is only used to encrypt the Kerberos database and as a seed for the random number generator. As long as access to the KDC is secure, an attacker cannot do much with the master key. If the KDC is unavailable, network services are unusable as authentication cannot be performed. This can be alleviated with a single master KDC and one or more slaves, and with careful implementation of secondary or fall-back authentication using PAM. Kerberos allows users, hosts and services to authenticate between themselves. It does not have a mechanism to authenticate the KDC to the users, hosts, or services. This means that a trojanned kinit could record all user names and passwords. File system integrity checking tools like security/tripwire can alleviate this. Resources and Further Information Kerberos5 external resources The Kerberos FAQ Designing an Authentication System: a Dialog in Four Scenes RFC 4120, The Kerberos Network Authentication Service (V5) MIT Kerberos home page Heimdal Kerberos home page OpenSSL TomRhodesWritten by security OpenSSL OpenSSL is an open source implementation of the SSL and TLS protocols. It provides an encryption transport layer on top of the normal communications layer, allowing it to be intertwined with many network applications and services. The version of OpenSSL included in &os; supports the Secure Sockets Layer 3.0 (SSLv3) and Transport Layer Security 1.0/1.1/1.2 (TLSv1/TLSv1.1/TLSv1.2) network security protocols and can be used as a general cryptographic library. In &os; 12.0-RELEASE and above, OpenSSL also supports Transport Layer Security 1.3 (TLSv1.3). OpenSSL is often used to encrypt authentication of mail clients and to secure web based transactions such as credit card payments. Some ports, such as www/apache24 and databases/postgresql11-server, include a compile option for building with OpenSSL. If selected, the port will add support using OpenSSL from the base system. 
To instead have the port compile against OpenSSL from the security/openssl port, add the following to /etc/make.conf: DEFAULT_VERSIONS+= ssl=openssl Another common use of OpenSSL is to provide certificates for use with software applications. Certificates can be used to verify the credentials of a company or individual. If a certificate has not been signed by an external Certificate Authority (CA), such as http://www.verisign.com, the application that uses the certificate will produce a warning. There is a cost associated with obtaining a signed certificate and using a signed certificate is not mandatory as certificates can be self-signed. However, using an external authority will prevent warnings and can put users at ease. This section demonstrates how to create and use certificates on a &os; system. Refer to for an example of how to create a CA for signing one's own certificates. For more information about SSL, read the free OpenSSL Cookbook. Generating Certificates OpenSSL certificate generation To generate a certificate that will be signed by an external CA, issue the following command and input the information requested at the prompts. This input information will be written to the certificate. At the Common Name prompt, input the fully qualified name for the system that will use the certificate. If this name does not match the server, the application verifying the certificate will issue a warning to the user, rendering the verification provided by the certificate as useless. &prompt.root; openssl req -new -nodes -out req.pem -keyout cert.key -sha256 -newkey rsa:2048 Generating a 2048 bit RSA private key ..................+++ .............................................................+++ writing new private key to 'cert.key' ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:PA Locality Name (eg, city) []:Pittsburgh Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company Organizational Unit Name (eg, section) []:Systems Administrator Common Name (eg, YOUR name) []:localhost.example.org Email Address []:trhodes@FreeBSD.org Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []:Another Name Other options, such as the expire time and alternate encryption algorithms, are available when creating a certificate. A complete list of options is described in &man.openssl.1;. This command will create two files in the current directory. The certificate request, req.pem, can be sent to a CA who will validate the entered credentials, sign the request, and return the signed certificate. The second file, cert.key, is the private key for the certificate and should be stored in a secure location. If this falls in the hands of others, it can be used to impersonate the user or the server. Alternately, if a signature from a CA is not required, a self-signed certificate can be created. 
First, generate the RSA key: &prompt.root; openssl genrsa -rand -genkey -out cert.key 2048 0 semi-random bytes loaded Generating RSA private key, 2048 bit long modulus .............................................+++ .................................................................................................................+++ e is 65537 (0x10001) Use this key to create a self-signed certificate. Follow the usual prompts for creating a certificate: &prompt.root; openssl req -new -x509 -days 365 -key cert.key -out cert.crt -sha256 You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:US State or Province Name (full name) [Some-State]:PA Locality Name (eg, city) []:Pittsburgh Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company Organizational Unit Name (eg, section) []:Systems Administrator Common Name (e.g. server FQDN or YOUR name) []:localhost.example.org Email Address []:trhodes@FreeBSD.org This will create two new files in the current directory: a private key file cert.key, and the certificate itself, cert.crt. These should be placed in a directory, preferably under /etc/ssl/, which is readable only by root. Permissions of 0700 are appropriate for these files and can be set using chmod. Using Certificates One use for a certificate is to encrypt connections to the Sendmail mail server in order to prevent the use of clear text authentication. Some mail clients will display an error if the user has not installed a local copy of the certificate. Refer to the documentation included with the software for more information on certificate installation. In &os; 10.0-RELEASE and above, it is possible to create a self-signed certificate for Sendmail automatically. To enable this, add the following lines to /etc/rc.conf: sendmail_enable="YES" sendmail_cert_create="YES" sendmail_cert_cn="localhost.example.org" This will automatically create a self-signed certificate, /etc/mail/certs/host.cert, a signing key, /etc/mail/certs/host.key, and a CA certificate, /etc/mail/certs/cacert.pem. The certificate will use the Common Name specified in . After saving the edits, restart Sendmail: &prompt.root; service sendmail restart If all went well, there will be no error messages in /var/log/maillog. For a simple test, connect to the mail server's listening port using telnet: &prompt.root; telnet example.com 25 Trying 192.0.34.166... Connected to example.com. Escape character is '^]'. 220 example.com ESMTP Sendmail 8.14.7/8.14.7; Fri, 18 Apr 2014 11:50:32 -0400 (EDT) ehlo example.com 250-example.com Hello example.com [192.0.34.166], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 250-DSN 250-ETRN 250-AUTH LOGIN PLAIN 250-STARTTLS 250-DELIVERBY 250 HELP quit 221 2.0.0 example.com closing connection Connection closed by foreign host. If the STARTTLS line appears in the output, everything is working correctly. <acronym>VPN</acronym> over <acronym>IPsec</acronym> Nik Clayton
nik@FreeBSD.org
Written by
Hiten M. Pandya
hmp@FreeBSD.org
Written by
IPsec Internet Protocol Security (IPsec) is a set of protocols which sit on top of the Internet Protocol (IP) layer. It allows two or more hosts to communicate in a secure manner by authenticating and encrypting each IP packet of a communication session. The &os; IPsec network stack is based on the http://www.kame.net/ implementation and supports both IPv4 and IPv6 sessions. IPsec ESP IPsec AH IPsec is comprised of the following sub-protocols: Encapsulated Security Payload (ESP): this protocol protects the IP packet data from third party interference by encrypting the contents using symmetric cryptography algorithms such as Blowfish and 3DES. Authentication Header (AH): this protocol protects the IP packet header from third party interference and spoofing by computing a cryptographic checksum and hashing the IP packet header fields with a secure hashing function. This is then followed by an additional header that contains the hash, to allow the information in the packet to be authenticated. IP Payload Compression Protocol (IPComp): this protocol tries to increase communication performance by compressing the IP payload in order to reduce the amount of data sent. These protocols can either be used together or separately, depending on the environment. VPN virtual private network VPN IPsec supports two modes of operation. The first mode, Transport Mode, protects communications between two hosts. The second mode, Tunnel Mode, is used to build virtual tunnels, commonly known as Virtual Private Networks (VPNs). Consult &man.ipsec.4; for detailed information on the IPsec subsystem in &os;. IPsec support is enabled by default on &os; 11 and later. For previous versions of &os;, add these options to a custom kernel configuration file and rebuild the kernel using the instructions in : kernel options IPSEC options IPSEC #IP security device crypto kernel options IPSEC_DEBUG If IPsec debugging support is desired, the following kernel option should also be added: options IPSEC_DEBUG #debug for IP security This rest of this chapter demonstrates the process of setting up an IPsec VPN between a home network and a corporate network. In the example scenario: Both sites are connected to the Internet through a gateway that is running &os;. The gateway on each network has at least one external IP address. In this example, the corporate LAN's external IP address is 172.16.5.4 and the home LAN's external IP address is 192.168.1.12. The internal addresses of the two networks can be either public or private IP addresses. However, the address space must not collide. For example, both networks cannot use 192.168.1.x. In this example, the corporate LAN's internal IP address is 10.246.38.1 and the home LAN's internal IP address is 10.0.0.5. Configuring a <acronym>VPN</acronym> on &os; Tom Rhodes
trhodes@FreeBSD.org
Written by
To begin, security/ipsec-tools must be installed from the Ports Collection. This software provides a number of applications which support the configuration. The next requirement is to create two &man.gif.4; pseudo-devices which will be used to tunnel packets and allow both networks to communicate properly. As root, run the following commands, replacing internal and external with the real IP addresses of the internal and external interfaces of the two gateways: &prompt.root; ifconfig gif0 create &prompt.root; ifconfig gif0 internal1 internal2 &prompt.root; ifconfig gif0 tunnel external1 external2 Verify the setup on each gateway, using ifconfig. Here is the output from Gateway 1: gif0: flags=8051 mtu 1280 tunnel inet 172.16.5.4 --> 192.168.1.12 inet6 fe80::2e0:81ff:fe02:5881%gif0 prefixlen 64 scopeid 0x6 inet 10.246.38.1 --> 10.0.0.5 netmask 0xffffff00 Here is the output from Gateway 2: gif0: flags=8051 mtu 1280 tunnel inet 192.168.1.12 --> 172.16.5.4 inet 10.0.0.5 --> 10.246.38.1 netmask 0xffffff00 inet6 fe80::250:bfff:fe3a:c1f%gif0 prefixlen 64 scopeid 0x4 Once complete, both internal IP addresses should be reachable using &man.ping.8;: priv-net# ping 10.0.0.5 PING 10.0.0.5 (10.0.0.5): 56 data bytes 64 bytes from 10.0.0.5: icmp_seq=0 ttl=64 time=42.786 ms 64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=19.255 ms 64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=20.440 ms 64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=21.036 ms --- 10.0.0.5 ping statistics --- 4 packets transmitted, 4 packets received, 0% packet loss round-trip min/avg/max/stddev = 19.255/25.879/42.786/9.782 ms corp-net# ping 10.246.38.1 PING 10.246.38.1 (10.246.38.1): 56 data bytes 64 bytes from 10.246.38.1: icmp_seq=0 ttl=64 time=28.106 ms 64 bytes from 10.246.38.1: icmp_seq=1 ttl=64 time=42.917 ms 64 bytes from 10.246.38.1: icmp_seq=2 ttl=64 time=127.525 ms 64 bytes from 10.246.38.1: icmp_seq=3 ttl=64 time=119.896 ms 64 bytes from 10.246.38.1: icmp_seq=4 ttl=64 time=154.524 ms --- 10.246.38.1 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 28.106/94.594/154.524/49.814 ms As expected, both sides have the ability to send and receive ICMP packets from the privately configured addresses. Next, both gateways must be told how to route packets in order to correctly send traffic from either network. The following commands will achieve this goal: corp-net&prompt.root; route add 10.0.0.0 10.0.0.5 255.255.255.0 corp-net&prompt.root; route add net 10.0.0.0: gateway 10.0.0.5 priv-net&prompt.root; route add 10.246.38.0 10.246.38.1 255.255.255.0 priv-net&prompt.root; route add host 10.246.38.0: gateway 10.246.38.1 At this point, internal machines should be reachable from each gateway as well as from machines behind the gateways. 
Again, use &man.ping.8; to confirm: corp-net# ping 10.0.0.8 PING 10.0.0.8 (10.0.0.8): 56 data bytes 64 bytes from 10.0.0.8: icmp_seq=0 ttl=63 time=92.391 ms 64 bytes from 10.0.0.8: icmp_seq=1 ttl=63 time=21.870 ms 64 bytes from 10.0.0.8: icmp_seq=2 ttl=63 time=198.022 ms 64 bytes from 10.0.0.8: icmp_seq=3 ttl=63 time=22.241 ms 64 bytes from 10.0.0.8: icmp_seq=4 ttl=63 time=174.705 ms --- 10.0.0.8 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 21.870/101.846/198.022/74.001 ms priv-net# ping 10.246.38.107 PING 10.246.38.1 (10.246.38.107): 56 data bytes 64 bytes from 10.246.38.107: icmp_seq=0 ttl=64 time=53.491 ms 64 bytes from 10.246.38.107: icmp_seq=1 ttl=64 time=23.395 ms 64 bytes from 10.246.38.107: icmp_seq=2 ttl=64 time=23.865 ms 64 bytes from 10.246.38.107: icmp_seq=3 ttl=64 time=21.145 ms 64 bytes from 10.246.38.107: icmp_seq=4 ttl=64 time=36.708 ms --- 10.246.38.107 ping statistics --- 5 packets transmitted, 5 packets received, 0% packet loss round-trip min/avg/max/stddev = 21.145/31.721/53.491/12.179 ms Setting up the tunnels is the easy part. Configuring a secure link is a more in depth process. The following configuration uses pre-shared (PSK) RSA keys. Other than the IP addresses, the /usr/local/etc/racoon/racoon.conf on both gateways will be identical and look similar to: path pre_shared_key "/usr/local/etc/racoon/psk.txt"; #location of pre-shared key file log debug; #log verbosity setting: set to 'notify' when testing and debugging is complete padding # options are not to be changed { maximum_length 20; randomize off; strict_check off; exclusive_tail off; } timer # timing options. change as needed { counter 5; interval 20 sec; persend 1; # natt_keepalive 15 sec; phase1 30 sec; phase2 15 sec; } listen # address [port] that racoon will listen on { isakmp 172.16.5.4 [500]; isakmp_natt 172.16.5.4 [4500]; } remote 192.168.1.12 [500] { exchange_mode main,aggressive; doi ipsec_doi; situation identity_only; my_identifier address 172.16.5.4; peers_identifier address 192.168.1.12; lifetime time 8 hour; passive off; proposal_check obey; # nat_traversal off; generate_policy off; proposal { encryption_algorithm blowfish; hash_algorithm md5; authentication_method pre_shared_key; lifetime time 30 sec; dh_group 1; } } sainfo (address 10.246.38.0/24 any address 10.0.0.0/24 any) # address $network/$netmask $type address $network/$netmask $type ( $type being any or esp) { # $network must be the two internal networks you are joining. pfs_group 1; lifetime time 36000 sec; encryption_algorithm blowfish,3des; authentication_algorithm hmac_md5,hmac_sha1; compression_algorithm deflate; } For descriptions of each available option, refer to the manual page for racoon.conf. The Security Policy Database (SPD) needs to be configured so that &os; and racoon are able to encrypt and decrypt network traffic between the hosts. This can be achieved with a shell script, similar to the following, on the corporate gateway. This file will be used during system initialization and should be saved as /usr/local/etc/racoon/setkey.conf. 
flush; spdflush; # To the home network spdadd 10.246.38.0/24 10.0.0.0/24 any -P out ipsec esp/tunnel/172.16.5.4-192.168.1.12/use; spdadd 10.0.0.0/24 10.246.38.0/24 any -P in ipsec esp/tunnel/192.168.1.12-172.16.5.4/use; Once in place, racoon may be started on both gateways using the following command: &prompt.root; /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf -l /var/log/racoon.log The output should be similar to the following: corp-net# /usr/local/sbin/racoon -F -f /usr/local/etc/racoon/racoon.conf Foreground mode. 2006-01-30 01:35:47: INFO: begin Identity Protection mode. 2006-01-30 01:35:48: INFO: received Vendor ID: KAME/racoon 2006-01-30 01:35:55: INFO: received Vendor ID: KAME/racoon 2006-01-30 01:36:04: INFO: ISAKMP-SA established 172.16.5.4[500]-192.168.1.12[500] spi:623b9b3bd2492452:7deab82d54ff704a 2006-01-30 01:36:05: INFO: initiate new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0] 2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=28496098(0x1b2d0e2) 2006-01-30 01:36:09: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=47784998(0x2d92426) 2006-01-30 01:36:13: INFO: respond new phase 2 negotiation: 172.16.5.4[0]192.168.1.12[0] 2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 192.168.1.12[0]->172.16.5.4[0] spi=124397467(0x76a279b) 2006-01-30 01:36:18: INFO: IPsec-SA established: ESP/Tunnel 172.16.5.4[0]->192.168.1.12[0] spi=175852902(0xa7b4d66) To ensure the tunnel is working properly, switch to another console and use &man.tcpdump.1; to view network traffic using the following command. Replace em0 with the network interface card as required: &prompt.root; tcpdump -i em0 host 172.16.5.4 and dst 192.168.1.12 Data similar to the following should appear on the console. If not, there is an issue and debugging the returned data will be required. 01:47:32.021683 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xa) 01:47:33.022442 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xb) 01:47:34.024218 IP corporatenetwork.com > 192.168.1.12.privatenetwork.com: ESP(spi=0x02acbf9f,seq=0xc) At this point, both networks should be available and seem to be part of the same network. Most likely both networks are protected by a firewall. To allow traffic to flow between them, rules need to be added to pass packets. For the &man.ipfw.8; firewall, add the following lines to the firewall configuration file: ipfw add 00201 allow log esp from any to any ipfw add 00202 allow log ah from any to any ipfw add 00203 allow log ipencap from any to any ipfw add 00204 allow log udp from any 500 to any The rule numbers may need to be altered depending on the current host configuration. 
For users of &man.pf.4; or &man.ipf.8;, the following rules should do the trick: pass in quick proto esp from any to any pass in quick proto ah from any to any pass in quick proto ipencap from any to any pass in quick proto udp from any port = 500 to any port = 500 pass in quick on gif0 from any to any pass out quick proto esp from any to any pass out quick proto ah from any to any pass out quick proto ipencap from any to any pass out quick proto udp from any port = 500 to any port = 500 pass out quick on gif0 from any to any Finally, to allow the machine to start support for the VPN during system initialization, add the following lines to /etc/rc.conf: ipsec_enable="YES" ipsec_program="/usr/local/sbin/setkey" ipsec_file="/usr/local/etc/racoon/setkey.conf" # allows setting up spd policies on boot racoon_enable="yes"
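The racoon.conf shown earlier references a pre-shared key file which is not installed by the port and must be created by hand. As a rough sketch, /usr/local/etc/racoon/psk.txt on the corporate gateway contains the external address of the other gateway followed by the shared secret, where secretkey is only a placeholder for a strong secret of your own:
192.168.1.12 secretkey
On the other gateway, the first field becomes 172.16.5.4 and the secret must match. racoon is particular about the permissions on this file, so restrict it to root:
&prompt.root; chmod 0600 /usr/local/etc/racoon/psk.txt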
OpenSSH ChernLeeContributed by OpenSSH security OpenSSH OpenSSH is a set of network connectivity tools used to provide secure access to remote machines. Additionally, TCP/IP connections can be tunneled or forwarded securely through SSH connections. OpenSSH encrypts all traffic to effectively eliminate eavesdropping, connection hijacking, and other network-level attacks. OpenSSH is maintained by the OpenBSD project and is installed by default in &os;. It is compatible with both SSH version 1 and 2 protocols. When data is sent over the network in an unencrypted form, network sniffers anywhere in between the client and server can steal user/password information or data transferred during the session. OpenSSH offers a variety of authentication and encryption methods to prevent this from happening. More information about OpenSSH is available from http://www.openssh.com/. This section provides an overview of the built-in client utilities to securely access other systems and securely transfer files from a &os; system. It then describes how to configure a SSH server on a &os; system. More information is available in the man pages mentioned in this chapter. Using the SSH Client Utilities OpenSSH client To log into a SSH server, use ssh and specify a username that exists on that server and the IP address or hostname of the server. If this is the first time a connection has been made to the specified server, the user will be prompted to first verify the server's fingerprint: &prompt.root; ssh user@example.com The authenticity of host 'example.com (10.0.0.1)' can't be established. ECDSA key fingerprint is 25:cc:73:b5:b3:96:75:3d:56:19:49:d2:5c:1f:91:3b. Are you sure you want to continue connecting (yes/no)? yes Permanently added 'example.com' (ECDSA) to the list of known hosts. Password for user@example.com: user_password SSH utilizes a key fingerprint system to verify the authenticity of the server when the client connects. When the user accepts the key's fingerprint by typing yes when connecting for the first time, a copy of the key is saved to .ssh/known_hosts in the user's home directory. Future attempts to login are verified against the saved key and ssh will display an alert if the server's key does not match the saved key. If this occurs, the user should first verify why the key has changed before continuing with the connection. By default, recent versions of OpenSSH only accept SSHv2 connections. By default, the client will use version 2 if possible and will fall back to version 1 if the server does not support version 2. To force ssh to only use the specified protocol, include or . Additional options are described in &man.ssh.1;. OpenSSH secure copy &man.scp.1; Use &man.scp.1; to securely copy a file to or from a remote machine. This example copies COPYRIGHT on the remote system to a file of the same name in the current directory of the local system: &prompt.root; scp user@example.com:/COPYRIGHT COPYRIGHT Password for user@example.com: ******* COPYRIGHT 100% |*****************************| 4735 00:00 &prompt.root; Since the fingerprint was already verified for this host, the server's key is automatically checked before prompting for the user's password. The arguments passed to scp are similar to cp. The file or files to copy is the first argument and the destination to copy to is the second. Since the file is fetched over the network, one or more of the file arguments takes the form . Be aware when copying directories recursively that scp uses , whereas cp uses . 
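scp can also copy in the other direction and handle whole directory trees. For example, to recursively copy a hypothetical ~/reports directory from the local system into the remote user's home directory:
&prompt.user; scp -r ~/reports user@example.com:
The trailing colon with no path denotes the remote user's home directory as the destination.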
To open an interactive session for copying files, use sftp. Refer to &man.sftp.1; for a list of available commands while in an sftp session. Key-based Authentication Instead of using passwords, a client can be configured to connect to the remote machine using keys. To generate RSA authentication keys, use ssh-keygen. To generate a public and private key pair, specify the type of key and follow the prompts. It is recommended to protect the keys with a memorable, but hard to guess passphrase. &prompt.user; ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: SHA256:54Xm9Uvtv6H4NOo6yjP/YCfODryvUU7yWHzMqeXwhq8 user@host.example.com The key's randomart image is: +---[RSA 2048]----+ | | | | | | | . o.. | | .S*+*o | | . O=Oo . . | | = Oo= oo..| | .oB.* +.oo.| | =OE**.o..=| +----[SHA256]-----+ Type a passphrase here. It can contain spaces and symbols. Retype the passphrase to verify it. The private key is stored in ~/.ssh/id_rsa and the public key is stored in ~/.ssh/id_rsa.pub. The public key must be copied to ~/.ssh/authorized_keys on the remote machine for key-based authentication to work. Many users believe that keys are secure by design and will use a key without a passphrase. This is dangerous behavior. An administrator can verify that a key pair is protected by a passphrase by viewing the private key manually. If the private key file contains the word ENCRYPTED, the key owner is using a passphrase. In addition, to better secure end users, from may be placed in the public key file. For example, adding from="192.168.10.5" in front of the ssh-rsa prefix will only allow that specific user to log in from that IP address. The options and files vary with different versions of OpenSSH. To avoid problems, consult &man.ssh-keygen.1;. If a passphrase is used, the user is prompted for the passphrase each time a connection is made to the server. To load SSH keys into memory and remove the need to type the passphrase each time, use &man.ssh-agent.1; and &man.ssh-add.1;. Authentication is handled by ssh-agent, using the private keys that are loaded into it. ssh-agent can be used to launch another application like a shell or a window manager. To use ssh-agent in a shell, start it with a shell as an argument. Add the identity by running ssh-add and entering the passphrase for the private key. The user will then be able to ssh to any host that has the corresponding public key installed. For example: &prompt.user; ssh-agent csh &prompt.user; ssh-add Enter passphrase for key '/usr/home/user/.ssh/id_rsa': Identity added: /usr/home/user/.ssh/id_rsa (/usr/home/user/.ssh/id_rsa) &prompt.user; Enter the passphrase for the key. To use ssh-agent in &xorg;, add an entry for it in ~/.xinitrc. This provides the ssh-agent services to all programs launched in &xorg;. An example ~/.xinitrc might look like this: exec ssh-agent startxfce4 This launches ssh-agent, which in turn launches XFCE, every time &xorg; starts. Once &xorg; has been restarted so that the changes can take effect, run ssh-add to load all of the SSH keys. <acronym>SSH</acronym> Tunneling OpenSSH tunneling OpenSSH has the ability to create a tunnel to encapsulate another protocol in an encrypted session. 
The following command tells ssh to create a tunnel for telnet: &prompt.user; ssh -2 -N -f -L 5023:localhost:23 user@foo.example.com &prompt.user; This example uses the following options: Forces ssh to use version 2 to connect to the server. Indicates no command, or tunnel only. If omitted, ssh initiates a normal session. Forces ssh to run in the background. Indicates a local tunnel in localport:remotehost:remoteport format. The login name to use on the specified remote SSH server. An SSH tunnel works by creating a listen socket on localhost on the specified localport. It then forwards any connections received on localport via the SSH connection to the specified remotehost:remoteport. In the example, port 5023 on the client is forwarded to port 23 on the remote machine. Since port 23 is used by telnet, this creates an encrypted telnet session through an SSH tunnel. This method can be used to wrap any number of insecure TCP protocols such as SMTP, POP3, and FTP, as seen in the following examples. Create a Secure Tunnel for <acronym>SMTP</acronym> &prompt.user; ssh -2 -N -f -L 5025:localhost:25 user@mailserver.example.com user@mailserver.example.com's password: ***** &prompt.user; telnet localhost 5025 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 mailserver.example.com ESMTP This can be used in conjunction with ssh-keygen and additional user accounts to create a more seamless SSH tunneling environment. Keys can be used in place of typing a password, and the tunnels can be run as a separate user. Secure Access of a <acronym>POP3</acronym> Server In this example, there is an SSH server that accepts connections from the outside. On the same network resides a mail server running a POP3 server. To check email in a secure manner, create an SSH connection to the SSH server and tunnel through to the mail server: &prompt.user; ssh -2 -N -f -L 2110:mail.example.com:110 user@ssh-server.example.com user@ssh-server.example.com's password: ****** Once the tunnel is up and running, point the email client to send POP3 requests to localhost on port 2110. This connection will be forwarded securely across the tunnel to mail.example.com. Bypassing a Firewall Some firewalls filter both incoming and outgoing connections. For example, a firewall might limit access from remote machines to ports 22 and 80 to only allow SSH and web surfing. This prevents access to any other service which uses a port other than 22 or 80. The solution is to create an SSH connection to a machine outside of the network's firewall and use it to tunnel to the desired service: &prompt.user; ssh -2 -N -f -L 8888:music.example.com:8000 user@unfirewalled-system.example.org user@unfirewalled-system.example.org's password: ******* In this example, a streaming Ogg Vorbis client can now be pointed to localhost port 8888, which will be forwarded over to music.example.com on port 8000, successfully bypassing the firewall. Enabling the SSH Server OpenSSH enabling In addition to providing built-in SSH client utilities, a &os; system can be configured as an SSH server, accepting connections from other SSH clients. To see if sshd is operating, use the &man.service.8; command: &prompt.root; service sshd status If the service is not running, add the following line to /etc/rc.conf. sshd_enable="YES" This will start sshd, the daemon program for OpenSSH, the next time the system boots. 
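The entry can also be added from the command line with &man.sysrc.8; instead of editing the file directly:
&prompt.root; sysrc sshd_enable="YES"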
To start it now: &prompt.root; service sshd start The first time sshd starts on a &os; system, the system's host keys will be automatically created and the fingerprint will be displayed on the console. Provide users with the fingerprint so that they can verify it the first time they connect to the server. Refer to &man.sshd.8; for the list of available options when starting sshd and a more complete discussion about authentication, the login process, and the various configuration files. At this point, the sshd should be available to all users with a username and password on the system. SSH Server Security While sshd is the most widely used remote administration facility for &os;, brute force and drive by attacks are common to any system exposed to public networks. Several additional parameters are available to prevent the success of these attacks and will be described in this section. It is a good idea to limit which users can log into the SSH server and from where using the AllowUsers keyword in the OpenSSH server configuration file. For example, to only allow root to log in from 192.168.1.32, add this line to /etc/ssh/sshd_config: AllowUsers root@192.168.1.32 To allow admin to log in from anywhere, list that user without specifying an IP address: AllowUsers admin Multiple users should be listed on the same line, like so: AllowUsers root@192.168.1.32 admin After making changes to /etc/ssh/sshd_config, tell sshd to reload its configuration file by running: &prompt.root; service sshd reload When this keyword is used, it is important to list each user that needs to log into this machine. Any user that is not specified in that line will be locked out. Also, the keywords used in the OpenSSH server configuration file are case-sensitive. If the keyword is not spelled correctly, including its case, it will be ignored. Always test changes to this file to make sure that the edits are working as expected. Refer to &man.sshd.config.5; to verify the spelling and use of the available keywords. In addition, users may be forced to use two factor authentication via the use of a public and private key. When required, the user may generate a key pair through the use of &man.ssh-keygen.1; and send the administrator the public key. This key file will be placed in the authorized_keys as described above in the client section. To force the users to use keys only, the following option may be configured: AuthenticationMethods publickey Do not confuse /etc/ssh/sshd_config with /etc/ssh/ssh_config (note the extra d in the first filename). The first file configures the server and the second file configures the client. Refer to &man.ssh.config.5; for a listing of the available client settings. Access Control Lists TomRhodesContributed by ACL Access Control Lists (ACLs) extend the standard &unix; permission model in a &posix;.1e compatible way. This permits an administrator to take advantage of a more fine-grained permissions model. The &os; GENERIC kernel provides ACL support for UFS file systems. Users who prefer to compile a custom kernel must include the following option in their custom kernel configuration file: options UFS_ACL If this option is not compiled in, a warning message will be displayed when attempting to mount a file system with ACL support. ACLs rely on extended attributes which are natively supported in UFS2. This chapter describes how to enable ACL support and provides some usage examples. 
Enabling <acronym>ACL</acronym> Support ACLs are enabled by the mount-time administrative flag, , which may be added to /etc/fstab. The mount-time flag can also be automatically set in a persistent manner using &man.tunefs.8; to modify a superblock ACLs flag in the file system header. In general, it is preferred to use the superblock flag for several reasons: The superblock flag cannot be changed by a remount using as it requires a complete umount and fresh mount. This means that ACLs cannot be enabled on the root file system after boot. It also means that ACL support on a file system cannot be changed while the system is in use. Setting the superblock flag causes the file system to always be mounted with ACLs enabled, even if there is not an fstab entry or if the devices re-order. This prevents accidental mounting of the file system without ACL support. It is desirable to discourage accidental mounting without ACLs enabled because nasty things can happen if ACLs are enabled, then disabled, then re-enabled without flushing the extended attributes. In general, once ACLs are enabled on a file system, they should not be disabled, as the resulting file protections may not be compatible with those intended by the users of the system, and re-enabling ACLs may re-attach the previous ACLs to files that have since had their permissions changed, resulting in unpredictable behavior. File systems with ACLs enabled will show a plus (+) sign in their permission settings: drwx------ 2 robert robert 512 Dec 27 11:54 private drwxrwx---+ 2 robert robert 512 Dec 23 10:57 directory1 drwxrwx---+ 2 robert robert 512 Dec 22 10:20 directory2 drwxrwx---+ 2 robert robert 512 Dec 27 11:57 directory3 drwxr-xr-x 2 robert robert 512 Nov 10 11:54 public_html In this example, directory1, directory2, and directory3 are all taking advantage of ACLs, whereas public_html is not. Using <acronym>ACL</acronym>s File system ACLs can be viewed using getfacl. For instance, to view the ACL settings on test: &prompt.user; getfacl test #file:test #owner:1001 #group:1001 user::rw- group::r-- other::r-- To change the ACL settings on this file, use setfacl. To remove all of the currently defined ACLs from a file or file system, include . However, the preferred method is to use as it leaves the basic fields required for ACLs to work. &prompt.user; setfacl -k test To modify the default ACL entries, use : &prompt.user; setfacl -m u:trhodes:rwx,group:web:r--,o::--- test In this example, there were no pre-defined entries, as they were removed by the previous command. This command restores the default options and assigns the options listed. If a user or group is added which does not exist on the system, an Invalid argument error will be displayed. Refer to &man.getfacl.1; and &man.setfacl.1; for more information about the options available for these commands. Monitoring Third Party Security Issues TomRhodesContributed by pkg In recent years, the security world has made many improvements to how vulnerability assessment is handled. The threat of system intrusion increases as third party utilities are installed and configured for virtually any operating system available today. Vulnerability assessment is a key factor in security. While &os; releases advisories for the base system, doing so for every third party utility is beyond the &os; Project's capability. There is a way to mitigate third party vulnerabilities and warn administrators of known security issues. A &os; add on utility known as pkg includes options explicitly for this purpose. 
pkg polls a database for security issues. The database is updated and maintained by the &os; Security Team and ports developers. Please refer to instructions for installing pkg. Installation provides &man.periodic.8; configuration files for maintaining the pkg audit database, and provides a programmatic method of keeping it updated. This functionality is enabled if daily_status_security_pkgaudit_enable is set to YES in &man.periodic.conf.5;. Ensure that daily security run emails, which are sent to root's email account, are being read. After installation, and to audit third party utilities as part of the Ports Collection at any time, an administrator may choose to update the database and view known vulnerabilities of installed packages by invoking: &prompt.root; pkg audit -F pkg displays messages any published vulnerabilities in installed packages: Affected package: cups-base-1.1.22.0_1 Type of problem: cups-base -- HPGL buffer overflow vulnerability. Reference: <https://www.FreeBSD.org/ports/portaudit/40a3bca2-6809-11d9-a9e7-0001020eed82.html> 1 problem(s) in your installed packages found. You are advised to update or deinstall the affected package(s) immediately. By pointing a web browser to the displayed URL, an administrator may obtain more information about the vulnerability. This will include the versions affected, by &os; port version, along with other web sites which may contain security advisories. pkg is a powerful utility and is extremely useful when coupled with ports-mgmt/portmaster. &os; Security Advisories TomRhodesContributed by &os; Security Advisories Like many producers of quality operating systems, the &os; Project has a security team which is responsible for determining the End-of-Life (EoL) date for each &os; release and to provide security updates for supported releases which have not yet reached their EoL. More information about the &os; security team and the supported releases is available on the &os; security page. One task of the security team is to respond to reported security vulnerabilities in the &os; operating system. Once a vulnerability is confirmed, the security team verifies the steps necessary to fix the vulnerability and updates the source code with the fix. It then publishes the details as a Security Advisory. Security advisories are published on the &os; website and mailed to the &a.security-notifications.name;, &a.security.name;, and &a.announce.name; mailing lists. This section describes the format of a &os; security advisory. Format of a Security Advisory Here is an example of a &os; security advisory: ============================================================================= -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 ============================================================================= FreeBSD-SA-14:04.bind Security Advisory The FreeBSD Project Topic: BIND remote denial of service vulnerability Category: contrib Module: bind Announced: 2014-01-14 Credits: ISC Affects: FreeBSD 8.x and FreeBSD 9.x Corrected: 2014-01-14 19:38:37 UTC (stable/9, 9.2-STABLE) 2014-01-14 19:42:28 UTC (releng/9.2, 9.2-RELEASE-p3) 2014-01-14 19:42:28 UTC (releng/9.1, 9.1-RELEASE-p10) 2014-01-14 19:38:37 UTC (stable/8, 8.4-STABLE) 2014-01-14 19:42:28 UTC (releng/8.4, 8.4-RELEASE-p7) 2014-01-14 19:42:28 UTC (releng/8.3, 8.3-RELEASE-p14) CVE Name: CVE-2014-0591 For general information regarding FreeBSD Security Advisories, including descriptions of the fields above, security branches, and the following sections, please visit <URL:http://security.FreeBSD.org/>. 
I. Background BIND 9 is an implementation of the Domain Name System (DNS) protocols. The named(8) daemon is an Internet Domain Name Server. II. Problem Description Because of a defect in handling queries for NSEC3-signed zones, BIND can crash with an "INSIST" failure in name.c when processing queries possessing certain properties. This issue only affects authoritative nameservers with at least one NSEC3-signed zone. Recursive-only servers are not at risk. III. Impact An attacker who can send a specially crafted query could cause named(8) to crash, resulting in a denial of service. IV. Workaround No workaround is available, but systems not running authoritative DNS service with at least one NSEC3-signed zone using named(8) are not vulnerable. V. Solution Perform one of the following: 1) Upgrade your vulnerable system to a supported FreeBSD stable or release / security branch (releng) dated after the correction date. 2) To update your vulnerable system via a source code patch: The following patches have been verified to apply to the applicable FreeBSD release branches. a) Download the relevant patch from the location below, and verify the detached PGP signature using your PGP utility. [FreeBSD 8.3, 8.4, 9.1, 9.2-RELEASE and 8.4-STABLE] # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-release.patch.asc # gpg --verify bind-release.patch.asc [FreeBSD 9.2-STABLE] # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch # fetch http://security.FreeBSD.org/patches/SA-14:04/bind-stable-9.patch.asc # gpg --verify bind-stable-9.patch.asc b) Execute the following commands as root: # cd /usr/src # patch < /path/to/patch Recompile the operating system using buildworld and installworld as described in <URL:https://www.FreeBSD.org/handbook/makeworld.html>. Restart the applicable daemons, or reboot the system. 3) To update your vulnerable system via a binary patch: Systems running a RELEASE version of FreeBSD on the i386 or amd64 platforms can be updated via the freebsd-update(8) utility: # freebsd-update fetch # freebsd-update install VI. Correction details The following list contains the correction revision numbers for each affected branch. Branch/path Revision - ------------------------------------------------------------------------- stable/8/ r260646 releng/8.3/ r260647 releng/8.4/ r260647 stable/9/ r260646 releng/9.1/ r260647 releng/9.2/ r260647 - ------------------------------------------------------------------------- To see which files were modified by a particular revision, run the following command, replacing NNNNNN with the revision number, on a machine with Subversion installed: # svn diff -cNNNNNN --summarize svn://svn.freebsd.org/base Or visit the following URL, replacing NNNNNN with the revision number: <URL:https://svnweb.freebsd.org/base?view=revision&revision=NNNNNN> VII. 
References <URL:https://kb.isc.org/article/AA-01078> <URL:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0591> The latest revision of this advisory is available at <URL:http://security.FreeBSD.org/advisories/FreeBSD-SA-14:04.bind.asc> -----BEGIN PGP SIGNATURE----- iQIcBAEBCgAGBQJS1ZTYAAoJEO1n7NZdz2rnOvQP/2/68/s9Cu35PmqNtSZVVxVG ZSQP5EGWx/lramNf9566iKxOrLRMq/h3XWcC4goVd+gZFrvITJSVOWSa7ntDQ7TO XcinfRZ/iyiJbs/Rg2wLHc/t5oVSyeouyccqODYFbOwOlk35JjOTMUG1YcX+Zasg ax8RV+7Zt1QSBkMlOz/myBLXUjlTZ3Xg2FXVsfFQW5/g2CjuHpRSFx1bVNX6ysoG 9DT58EQcYxIS8WfkHRbbXKh9I1nSfZ7/Hky/kTafRdRMrjAgbqFgHkYTYsBZeav5 fYWKGQRJulYfeZQ90yMTvlpF42DjCC3uJYamJnwDIu8OhS1WRBI8fQfr9DRzmRua OK3BK9hUiScDZOJB6OqeVzUTfe7MAA4/UwrDtTYQ+PqAenv1PK8DZqwXyxA9ThHb zKO3OwuKOVHJnKvpOcr+eNwo7jbnHlis0oBksj/mrq2P9m2ueF9gzCiq5Ri5Syag Wssb1HUoMGwqU0roS8+pRpNC8YgsWpsttvUWSZ8u6Vj/FLeHpiV3mYXPVMaKRhVm 067BA2uj4Th1JKtGleox+Em0R7OFbCc/9aWC67wiqI6KRyit9pYiF3npph+7D5Eq 7zPsUdDd+qc+UTiLp3liCRp5w6484wWdhZO6wRtmUgxGjNkxFoNnX8CitzF8AaqO UWWemqWuz3lAZuORQ9KX =OQzQ -----END PGP SIGNATURE----- Every security advisory uses the following format: Each security advisory is signed by the PGP key of the Security Officer. The public key for the Security Officer can be verified at . The name of the security advisory always begins with FreeBSD-SA- (for FreeBSD Security Advisory), followed by the year in two digit format (14:), followed by the advisory number for that year (04.), followed by the name of the affected application or subsystem (bind). The advisory shown here is the fourth advisory for 2014 and it affects BIND. The Topic field summarizes the vulnerability. The Category refers to the affected part of the system which may be one of core, contrib, or ports. The core category means that the vulnerability affects a core component of the &os; operating system. The contrib category means that the vulnerability affects software included with &os;, such as BIND. The ports category indicates that the vulnerability affects software available through the Ports Collection. The Module field refers to the component location. In this example, the bind module is affected; therefore, this vulnerability affects an application installed with the operating system. The Announced field reflects the date the security advisory was published. This means that the security team has verified that the problem exists and that a patch has been committed to the &os; source code repository. The Credits field gives credit to the individual or organization who noticed the vulnerability and reported it. The Affects field explains which releases of &os; are affected by this vulnerability. The Corrected field indicates the date, time, time offset, and releases that were corrected. The section in parentheses shows each branch for which the fix has been merged, and the version number of the corresponding release from that branch. The release identifier itself includes the version number and, if appropriate, the patch level. The patch level is the letter p followed by a number, indicating the sequence number of the patch, allowing users to track which patches have already been applied to the system. The CVE Name field lists the advisory number, if one exists, in the public cve.mitre.org security vulnerabilities database. The Background field provides a description of the affected module. The Problem Description field explains the vulnerability. This can include information about the flawed code and how the utility could be maliciously used. 
The Impact field describes what type of impact the problem could have on a system. The Workaround field indicates if a workaround is available to system administrators who cannot immediately patch the system . The Solution field provides the instructions for patching the affected system. This is a step by step tested and verified method for getting a system patched and working securely. The Correction Details field displays each affected Subversion branch with the revision number that contains the corrected code. The References field offers sources of additional information regarding the vulnerability. Process Accounting TomRhodesContributed by Process Accounting Process accounting is a security method in which an administrator may keep track of system resources used and their allocation among users, provide for system monitoring, and minimally track a user's commands. Process accounting has both positive and negative points. One of the positives is that an intrusion may be narrowed down to the point of entry. A negative is the amount of logs generated by process accounting, and the disk space they may require. This section walks an administrator through the basics of process accounting. If more fine-grained accounting is needed, refer to . Enabling and Utilizing Process Accounting Before using process accounting, it must be enabled using the following commands: &prompt.root; touch /var/account/acct &prompt.root; chmod 600 /var/account/acct &prompt.root; accton /var/account/acct &prompt.root; sysrc accounting_enable=yes Once enabled, accounting will begin to track information such as CPU statistics and executed commands. All accounting logs are in a non-human readable format which can be viewed using sa. If issued without any options, sa prints information relating to the number of per-user calls, the total elapsed time in minutes, total CPU and user time in minutes, and the average number of I/O operations. Refer to &man.sa.8; for the list of available options which control the output. To display the commands issued by users, use lastcomm. For example, this command prints out all usage of ls by trhodes on the ttyp1 terminal: &prompt.root; lastcomm ls trhodes ttyp1 Many other useful options exist and are explained in &man.lastcomm.1;, &man.acct.5;, and &man.sa.8;. Resource Limits TomRhodesContributed by Resource limits &os; provides several methods for an administrator to limit the amount of system resources an individual may use. Disk quotas limit the amount of disk space available to users. Quotas are discussed in . quotas limiting users quotas disk quotas Limits to other resources, such as CPU and memory, can be set using either a flat file or a command to configure a resource limits database. The traditional method defines login classes by editing /etc/login.conf. While this method is still supported, any changes require a multi-step process of editing this file, rebuilding the resource database, making necessary changes to /etc/master.passwd, and rebuilding the password database. This can become time consuming, depending upon the number of users to configure. rctl can be used to provide a more fine-grained method for controlling resource limits. This command supports more than user limits as it can also be used to set resource constraints on processes and jails. This section demonstrates both methods for controlling resources, beginning with the traditional method. 
Configuring Login Classes limiting users accounts limiting /etc/login.conf In the traditional method, login classes and the resource limits to apply to a login class are defined in /etc/login.conf. Each user account can be assigned to a login class, where default is the default login class. Each login class has a set of login capabilities associated with it. A login capability is a name=value pair, where name is a well-known identifier and value is an arbitrary string which is processed accordingly depending on the name. Whenever /etc/login.conf is edited, the /etc/login.conf.db must be updated by executing the following command: &prompt.root; cap_mkdb /etc/login.conf Resource limits differ from the default login capabilities in two ways. First, for every limit, there is a soft and hard limit. A soft limit may be adjusted by the user or application, but may not be set higher than the hard limit. The hard limit may be lowered by the user, but can only be raised by the superuser. Second, most resource limits apply per process to a specific user. lists the most commonly used resource limits. All of the available resource limits and capabilities are described in detail in &man.login.conf.5;. limiting users coredumpsize limiting users cputime limiting users filesize limiting users maxproc limiting users memorylocked limiting users memoryuse limiting users openfiles limiting users sbsize limiting users stacksize Login Class Resource Limits Resource Limit Description coredumpsize The limit on the size of a core file generated by a program is subordinate to other limits on disk usage, such as filesize or disk quotas. This limit is often used as a less severe method of controlling disk space consumption. Since users do not generate core files and often do not delete them, this setting may save them from running out of disk space should a large program crash. cputime The maximum amount of CPU time a user's process may consume. Offending processes will be killed by the kernel. This is a limit on CPU time consumed, not the percentage of the CPU as displayed in some of the fields generated by top and ps. filesize The maximum size of a file the user may own. Unlike disk quotas (), this limit is enforced on individual files, not the set of all files a user owns. maxproc The maximum number of foreground and background processes a user can run. This limit may not be larger than the system limit specified by kern.maxproc. Setting this limit too small may hinder a user's productivity as some tasks, such as compiling a large program, start lots of processes. memorylocked The maximum amount of memory a process may request to be locked into main memory using &man.mlock.2;. Some system-critical programs, such as &man.amd.8;, lock into main memory so that if the system begins to swap, they do not contribute to disk thrashing. memoryuse The maximum amount of memory a process may consume at any given time. It includes both core memory and swap usage. This is not a catch-all limit for restricting memory consumption, but is a good start. openfiles The maximum number of files a process may have open. In &os;, files are used to represent sockets and IPC channels, so be careful not to set this too low. The system-wide limit for this is defined by kern.maxfiles. sbsize The limit on the amount of network memory a user may consume. This can be generally used to limit network communications. stacksize The maximum size of a process stack. 
This alone is not sufficient to limit the amount of memory a program may use, so it should be used in conjunction with other limits.
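As an illustration of the traditional method, the following hypothetical login class caps the number of processes and open files per process; the class name limited, the chosen values, and the user name user1 are examples only. Add the class to /etc/login.conf:
limited:\
	:maxproc=64:\
	:openfiles=256:\
	:tc=default:
Then rebuild the login capability database and assign the user to the class:
&prompt.root; cap_mkdb /etc/login.conf
&prompt.root; pw usermod -n user1 -L limited
The new limits take effect the next time user1 logs in.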
There are a few other things to remember when setting resource limits: Processes started at system startup by /etc/rc are assigned to the daemon login class. Although the default /etc/login.conf is a good source of reasonable values for most limits, they may not be appropriate for every system. Setting a limit too high may open the system up to abuse, while setting it too low may put a strain on productivity. &xorg; takes a lot of resources and encourages users to run more programs simultaneously. Many limits apply to individual processes, not the user as a whole. For example, setting openfiles to 50 means that each process the user runs may open up to 50 files. The total amount of files a user may open is the value of openfiles multiplied by the value of maxproc. This also applies to memory consumption. For further information on resource limits and login classes and capabilities in general, refer to &man.cap.mkdb.1;, &man.getrlimit.2;, and &man.login.conf.5;.
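Whether the intended limits are in effect for a given account can be checked from one of that user's sessions with &man.limits.1;, which prints the resource limits of the current environment:
&prompt.user; limits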
Enabling and Configuring Resource Limits The kern.racct.enable tunable must be set to a non-zero value. Custom kernels require specific configuration: options RACCT options RCTL Once the system has rebooted into the new kernel, rctl may be used to set rules for the system. Rule syntax is controlled through the use of a subject, subject-id, resource, and action, as seen in this example rule: user:trhodes:maxproc:deny=10/user In this rule, the subject is user, the subject-id is trhodes, the resource, maxproc, is the maximum number of processes, and the action is deny, which blocks any new processes from being created. This means that the user, trhodes, will be constrained to no greater than 10 processes. Other possible actions include logging to the console, passing a notification to &man.devd.8;, or sending a sigterm to the process. Some care must be taken when adding rules. Since this user is constrained to 10 processes, this example will prevent the user from performing other tasks after logging in and executing a screen session. Once a resource limit has been hit, an error will be printed, as in this example: &prompt.user; man test /usr/bin/man: Cannot fork: Resource temporarily unavailable eval: Cannot fork: Resource temporarily unavailable As another example, a jail can be prevented from exceeding a memory limit. This rule could be written as: &prompt.root; rctl -a jail:httpd:memoryuse:deny=2G/jail Rules will persist across reboots if they have been added to /etc/rctl.conf. The format is a rule, without the preceding command. For example, the previous rule could be added as: # Block jail from using more than 2G memory: jail:httpd:memoryuse:deny=2G/jail To remove a rule, use rctl to remove it from the list: &prompt.root; rctl -r user:trhodes:maxproc:deny=10/user A method for removing all rules is documented in &man.rctl.8;. However, if removing all rules for a single user is required, this command may be issued: &prompt.root; rctl -r user:trhodes Many other resources exist which can be used to exert additional control over various subjects. See &man.rctl.8; to learn about them.
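When writing rules, it is often helpful to see which rules are already defined and what a subject is actually consuming. Both can be shown with rctl; the user trhodes from the earlier examples is used here:
&prompt.root; rctl
&prompt.root; rctl -u user:trhodes
The first command lists the rules currently in effect and the second displays the current resource utilization of that user.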
Shared Administration with Sudo TomRhodesContributed by Security Sudo System administrators often need the ability to grant enhanced permissions to users so they may perform privileged tasks. Providing team members with access to a &os; system so they can perform their specific tasks presents unique challenges to every administrator. These team members only need a subset of access beyond normal end user levels; however, they almost always tell management they are unable to perform their tasks without superuser access. Thankfully, there is no reason to provide such access to end users because tools exist to manage this exact requirement. Up to this point, the security chapter has covered permitting access to authorized users and attempting to prevent unauthorized access. Another problem arises once authorized users have access to system resources. In many cases, some users may need access to application startup scripts, or a team of administrators needs to maintain the system. Traditionally, the standard users and groups, file permissions, and even the &man.su.1; command would manage this access. As applications required more access and more users needed to use system resources, a better solution was required. The most widely used application for this purpose is currently Sudo. Sudo allows administrators to configure more rigid access to system commands and provides some advanced logging features. As a tool, it is available from the Ports Collection as security/sudo or by use of the &man.pkg.8; utility. To use the &man.pkg.8; tool: &prompt.root; pkg install sudo After the installation is complete, edit the configuration file with the installed visudo. Using visudo is highly recommended as it comes with a built-in syntax checker to verify there are no errors before the file is saved. The configuration file is made up of several small sections which allow for extensive configuration. In the following example, the web application maintainer, user1, needs to start, stop, and restart the web application known as webservice. To grant this user permission to perform these tasks, add this line to the end of /usr/local/etc/sudoers: user1 ALL=(ALL) /usr/sbin/service webservice * The user may now start webservice using this command: &prompt.user; sudo /usr/sbin/service webservice start This configuration allows a single user access to the webservice service; however, in most organizations, an entire web team is in charge of managing the service. A single line can also give access to an entire group. These steps will create a web group, add a user to this group, and allow all members of the group to manage the service: &prompt.root; pw groupadd -g 6001 -n webteam Using the same &man.pw.8; command, the user is added to the webteam group: &prompt.root; pw groupmod -m user1 -n webteam Finally, this line in /usr/local/etc/sudoers allows any member of the webteam group to manage webservice: %webteam ALL=(ALL) /usr/sbin/service webservice * Unlike &man.su.1;, Sudo only requires the end user's password. This has the advantage that users do not need to share passwords; shared passwords are a finding in most security audits and poor practice in general. Users permitted to run applications with Sudo only enter their own passwords. This is more secure and gives better control than &man.su.1;, where the root password is entered and the user acquires all root permissions. Most organizations are moving or have moved toward a two-factor authentication model. In these cases, the user may not have a password to enter.
Sudo provides for these cases with the NOPASSWD variable. Adding it to the configuration above will allow all members of the webteam group to manage the service without the password requirement: %webteam ALL=(ALL) NOPASSWD: /usr/sbin/service webservice * Logging Output An advantage to implementing Sudo is the ability to enable session logging. Using the built-in log mechanisms and the included sudoreplay command, all commands initiated through Sudo are logged for later verification. To enable this feature, add a default log directory entry; this example uses the %{user} variable to create one log directory per user. Several other log filename conventions exist; consult the manual page for sudoreplay for additional information. Defaults iolog_dir=/var/log/sudo-io/%{user} This directory will be created automatically after the logging is configured. It is best to let the system create the directory with default permissions just to be safe. In addition, this entry will log administrators who use the sudoreplay command. To change this behavior, read and uncomment the logging options inside sudoers. Once this directive has been added to the sudoers file, any user configuration can be updated with the request to log access. In the example shown, the updated webteam entry would have the following additional changes: %webteam ALL=(ALL) NOPASSWD: LOG_INPUT: LOG_OUTPUT: /usr/sbin/service webservice * From this point on, all webteam members altering the status of the webservice application will be logged. The list of previous and current sessions can be displayed with: &prompt.root; sudoreplay -l To replay a specific session, search the output for the TSID= entry and pass it to sudoreplay with no other options to replay the session at normal speed. For example: &prompt.root; sudoreplay user1/00/00/02 While sessions are logged, any administrator is also able to remove session logs, leaving only the question of why they did so. It is worthwhile to add a daily check through an intrusion detection system (IDS) or similar software so that other administrators are alerted to manual alterations. sudoreplay is extremely flexible. Consult the documentation for more information.
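Individual users can also check what they have been granted: running sudo -l prints the commands the invoking user is allowed to execute, which is a quick way to confirm an entry such as the webteam rule above:
&prompt.user; sudo -l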