At first glance, a UNIX or UNIX-like operating system can be quite intimidating to most users. Though it appears convoluted and even cumbersome at times, UNIX is actually based on simplicity of design. This document and its accompanying workshop explore that design and aim to build a better understanding of the general framework of the operating system we know today as GNU/Linux.
The Linux kernel started as a patchwork hobby project with the goal of bringing a free UNIX-like kernel to the ordinary Intel-based personal computer. Before the Linux kernel, most UNIX operating systems were confined to large-scale proprietary servers such as those from DEC and Sun Microsystems. The few UNIX distributions available for the x86 platform were expensive and cumbersome.
Initially designed by Linus Torvalds and built with the help of many other Open Source programmers and advocates, the Linux kernel has flourished, with both OSS communities and corporate software distributors alike supporting its continued development.
With its growing popularity in commercial distributions such as Red Hat and SuSE, it has become increasingly common, and increasingly inaccurate, to refer to a distribution as Linux itself; one might hear the term Linux 9 freely exchanged with Red Hat 9 or vice versa. It is important to understand that Linux refers to one single item: the kernel. Linux is simply a kernel: a body of code compiled into a single file that is in charge of memory management, input and output requests, and process scheduling for time-shared operation. Essentially, the kernel handles all processing and interaction with the rest of the operating system. The reason for the distinction is to keep the Linux kernel version separate from the version of a specific software distribution (the Linux 2.4 kernel is not the same as the Debian Linux 2.4 distribution).
Think of the layout of a UNIX directory structure as a tree. Every tree of course has a root, and in this case the root of the operating system is simply / (slash), commonly referred to as the root. Every file and directory exists under /. The following figure demonstrates a common UNIX hierarchy.
A simple breakdown for the contents of these directories follows:
Unlike Windows or DOS variants, UNIX systems use no drive letters in their hierarchy. There is simply a root with directory structures extending from it. These structures can exist on any number of physical and/or logical disks. Because there is no drive lettering, the file system hierarchy itself imposes no such limits, which keeps the UNIX hierarchy both modular and extensible.
For example: The /usr directory could exist on the second partition of the first hard drive in the computer, while the /var directory could exist on a single partition of the second hard drive. The partitions and drives themselves are simply mounted in these directories, thus making the directories into mount-points.
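The mount-point idea can be sketched from the shell. The device name /dev/hdb1 below is hypothetical, and mounting requires root privileges, so the mount line is shown as a comment; checking which device backs a directory is safe anywhere:

```shell
# Hypothetical: attach the first partition of a second IDE disk at /var,
# turning the /var directory into a mount-point (requires root):
#   mount /dev/hdb1 /var
# df shows the device and mount-point of the file system that holds a
# given directory; -P forces one line of output per file system:
df -P /var | awk 'NR==2 {print $1, "holds", $6}'
```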
Besides spreading directories across different drives and partitions, most UNIX systems can also use multiple file systems. The root drive of the system might use the third extended file system (ext3) because it is the native file system for that particular distribution, while the /usr directory might be mounted using Sun Microsystems' Network File System (NFS), which exists on a remote server. The administrator of the computer might also create a custom directory called /windows and mount a remote Windows share there using Microsoft's Common Internet File System (commonly called CIFS or SMBFS).
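The remote mounts just described can be sketched as follows. The server names and export paths are made up, and the mount commands need root, so they appear as comments; the list of file system types the running kernel supports is genuinely queryable:

```shell
# Hypothetical remote mounts (server names and paths are illustrative):
#   mount -t nfs fileserver:/export/usr /usr        # NFS export onto /usr
#   mount -t smbfs //winserver/share /windows       # Windows share onto /windows
# The file system types the running kernel currently knows about:
cat /proc/filesystems
```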
The file systems present and mounted on a UNIX system can be checked quickly by running the command "df", which displays results such as the following.
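The exact output varies from machine to machine; an illustrative listing (the sizes below are invented) for a single hde disk partitioned as described below might look like this:

```shell
df
# Illustrative output only -- the sizes are invented:
# Filesystem           1K-blocks      Used Available Use% Mounted on
# /dev/hde1              8064272   2464208   5190436  33% /
# /dev/hde2              4032124   1202312   2624996  32% /usr
# /dev/hde3              2016044    504012   1409624  27% /var
# /dev/hde5              8064272   1612856   6041788  22% /home
```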
(df displays information about the amount of free disk space on each mounted volume.) The single hard drive in the computer (hde) is partitioned into five partitions (three primary, and an extended partition with further partitions contained within it), mounted at various points of the hierarchy.
Because three primary partitions are already defined, /dev/hde4 is labeled as an extended partition, and subsequent partitions are created, and mounted as file systems, within it.
As we saw earlier, the UNIX system is merely a tree of files and directories. Within this structure we can have any number of drives mounted at any point in the hierarchy, and within each drive UNIX also supports different types of file systems.
Using the fdisk command we learned earlier, use the l option (list known partition types) to see what types of file systems Linux supports. As we can see, there is a multitude of file systems originating from many different operating systems.
For our convenience, we will only deal with a few of these file systems:
These file systems are native to the Linux kernel and are fully supported and documented. Most Linux distributions ship with these as the default file systems used during installation. These file systems are used in conjunction with locally mounted hard disks. Let us take a brief look at them.
Ext2 was originally developed in the early 1990s as a revision of the Minix file system. As the PC became a viable alternative to large commercial UNIX servers, an open file system was needed for the cheaper, larger-capacity drives available for the PC. Ext2 became that solution and remained the standard for Linux for roughly a decade.
Ext3 is a revision of the Ext2 file system that adds journaling support. Journaling massively reduces the time spent recovering a file system after a crash. It is therefore in high demand in environments where high availability is important, not only to improve recovery times on single machines, but also to allow a crashed machine's file system to be recovered on another machine when a cluster of nodes shares a disk.
ReiserFS was originally developed in the late 1990s to provide a fast, stable, journaled file system alternative. The philosophy of ReiserFS, according to its developer Hans Reiser, is that 'the best file systems are those that help create a single shared environment, or namespace, where applications can interact more directly, efficiently and powerfully'. ReiserFS has gained popularity among independent distributors of Linux such as Gentoo and Lindows.
Other commonly used file systems that are mountable from a Linux system include:
The File Allocation Table (FAT) file system is one of the most common and well known file systems. Originally developed by Microsoft, it has undergone many changes, from the size of its allocation table to its capacity limitations, and it (along with its variations) has been used on Microsoft operating systems since the advent of DOS.
Also developed by Microsoft, the NT File System was originally designed as a secure file system for Windows Advanced Server (later to become known as Windows NT). The major advantages of NTFS over its predecessor FAT were, of course, its increased capacity and its security rights management. NTFS is still used in Windows products today, such as Windows 2000 and XP.
Not to be confused with Microsoft's NT file system, NFS is not a locally mounted file system at all. Developed by Sun Microsystems in the early 1980s, NFS and its related services are used to mount file systems from a remote location onto a local computer. These services became the industry standard for accessing "shared drives" on UNIX computers.
The Common Internet File System became Microsoft's answer to NFS for the Windows environment. This file system (much like NFS) mounts remote file systems to a local computer, usually seen as a 'mapped drive'. Users can easily 'share' folders for others to access within their network. SMB/CIFS is still used today on Microsoft operating systems.
The file systems listed are some of the most commonly used today. Most environments use one, if not several, of these systems concurrently in daily operations. Understanding the basic workings of these file systems is essential to understanding the structure of a UNIX system.
There are many more file systems supported under Linux systems that are beyond the scope of this document. For a complete list of these file systems please consult the manual pages of the fdisk and mount commands, or visit the following URLs:
One of the most intimidating features of a UNIX system for novice users seems to be shell commands. Though sometimes seemingly cryptic and cumbersome, there is a method to the madness: UNIX commands, like the hierarchy itself, have structure. The basic structure of a UNIX command is:
command [options,...] [arguments,...]
command is, obviously, the name of the command. Command names are typically two or three characters long and are often rather cryptic in meaning.
options (sometimes called flags) are generally single characters which in some way modify the action of the command. There may be no options, or there may be several acting on the same command. Options are preceded by a hyphen (-) character, but there is no consistent rule among UNIX commands as to how options should be grouped. Some commands allow a list of options with just a single hyphen at the beginning of the list, while other commands require that each option be introduced by its own hyphen.
Some options accept a value, often a filename, following the option. Again there is no consistent convention: some options require the value to be placed immediately after the option letter, while others expect a space between the option letter and the value. arguments are the values or items upon which the command is to operate. These are often filenames, and depending on the command there may be none or several.
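For most commands that follow the common convention, grouped and separately introduced single-letter options behave identically; ls is assumed here to follow the convention, as it does on virtually every system:

```shell
# Grouped options (-al) versus separately introduced options (-a -l):
grouped=$(ls -al /)
separate=$(ls -a -l /)
[ "$grouped" = "$separate" ] && echo "identical output"
```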
UNIX is case-sensitive throughout: the exact combination of upper and lower case letters used in a command, option or filename is important. For example, the options -p and -P in the same command will have different meanings.
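Case sensitivity is easy to demonstrate: on a UNIX file system, names differing only in case are distinct files. The demonstration below uses a throwaway temporary directory so it is safe to run anywhere:

```shell
# Create two files whose names differ only in case:
dir=$(mktemp -d)
touch "$dir/readme" "$dir/README"
ls "$dir" | wc -l    # counts two distinct files
```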
Some example commands with options and arguments:
user@host $ ls -al /dev
user@host $ ln -s /usr/src/linux-2.4 /usr/src/linux
user@host $ mv /boot/bzImage /boot/bzImage.old
user@host $ chsh -s /bin/bash username
user@host $ man mount
As we read in the introduction of this document, Linux is simply a kernel. It is not the set of commands and utilities that form what we know as an operating system; it is just the kernel. That kernel nevertheless has several pieces within the structure of the rest of the operating system. The first piece is the compiled kernel itself: a single file that lives in a bootable directory appropriately called /boot.
At first glance in this directory we see a few files:
Though this directory might seem a bit convoluted, there are a couple of files we should look at more closely.
Note the existence of other kernel files:
GNU/Linux (like many other UNIX systems) has the ability to boot between multiple kernels. This ability helps in troubleshooting and testing custom or upgraded kernels. Booting between multiple kernels is controlled by a system bootloader. The two most common bootloaders distributed for Linux are LILO and GRUB. Though both perform the same basic functionality, there are slight differences in the configuration and implementation for both.
The Linux Loader, commonly known as LILO, has been the most widely adopted loader for GNU/Linux systems since its inception in the 1990s. Configuring LILO has a couple of caveats, but once those are overcome it becomes fairly easy to manage. The configuration file for LILO resides at /etc/lilo.conf. Here is a sample lilo.conf:
boot=/dev/hda
map=/boot/map
prompt
timeout=50
lba32
default=linux

# Use something like the following 4 lines for the default kernel
image=/boot/vmlinuz-2.4.18-14
        label=linux
        read-only
        root=/dev/hda3

# Use something like the following 4 lines for the custom kernel
image=/boot/bzImage.custom
        label=custom
        read-only
        root=/dev/hda3

# For dual booting another OS
other=/dev/hda1
        label=Windows
boot=/dev/hda : the location of the boot sector; here, the first hard disk on the first IDE controller.
prompt : instructs LILO to show a prompt at bootup. While it is not recommended that you remove the prompt line, if you do remove it, you can still get a prompt by holding down the [Shift] key while your machine starts to boot.
timeout=50 : how long to wait for user input before proceeding with booting the default entry. This is measured in tenths of a second, with 50 as the default.
lba32 : describes the hard disk geometry to LILO; another common entry here is linear. You should not change this line unless you are very aware of what you are doing. Otherwise, you could put your system in an unbootable state.
default=linux : instructs LILO to boot, by default, from the options listed below this line. The name linux refers to a label line in one of the boot options below.
image=/boot/vmlinuz-2.4.18-14 : the kernel booted with this particular boot option.
label=linux : the name of the boot option as shown at the boot screen. In this case, it is also the name referred to by the default line.
read-only : specifies that the root partition (see the root line below) is read-only and cannot be altered during the boot process.
root=/dev/hda3 : the location of the root partition.
Once the lilo.conf file has been edited, it is time to run LILO:
user@host $ /sbin/lilo
This writes the configuration from lilo.conf to the master boot record of the boot disk. Once LILO has been run, simply reboot the system to boot from your desired kernel.
Another bootloader available in the Linux community is gaining popularity: the Grand Unified Bootloader, or GRUB. It offers a couple of options not available in LILO, such as customized boot backgrounds and pre-boot configuration. The configuration for the GRUB loader resides in /boot/grub/grub.conf. Let us take a look at an example grub.conf:
default=0
timeout=30
splashimage=(hd0,0)/boot/grub/splash.xpm.gz

# If you compiled your own kernel, use something like this:
title My Custom Linux
        root (hd0,0)
        kernel (hd0,0)/boot/bzImage root=/dev/hda3

# If you're using a default kernel, use something like this instead:
title My Default Linux
        root (hd0,0)
        kernel (hd0,0)/boot/vmlinuz-2.4.18-14 root=/dev/hda3

# Below needed only for people who dual-boot another OS
title Windows
        rootnoverify (hd0,5)
        chainloader +1
Having multiple Linux kernels from which to boot can also mean having multiple sets of kernel modules. Kernel modules are drivers and services that can be dynamically loaded and unloaded on demand. This ability to dynamically load and unload kernel modules has given the Linux kernel much of its popularity and versatility in operation. When loaded, Linux Kernel Modules (LKMs) are very much part of the kernel. The correct term for the part of the kernel that is bound into the image that you boot (all of the kernel except the LKMs) is the "base kernel". LKMs provide drivers and services with direct communication with the base kernel.
Let us say for a moment that we have two different versions of the Linux kernel compiled and installed on our system: kernel-2.4.18-14 and kernel-2.4.18-14custom. We would also find a directory of corresponding modules for each kernel installed on the system. These modules are located under /lib/modules, in directories named after the respective kernel versions. Let us take a look at an example:
Notice there are two directories, one for each kernel installed: 2.4.18-14 and 2.4.18-14custom. Each contains a different set of modules, distinguishing it from the other. As mentioned earlier, the Linux kernel allows us to dynamically load and unload modules, which lets us troubleshoot drivers and services without having to reboot the system. Let us take a look for a moment at a Linux kernel and its currently loaded modules.
The command lsmod lists the modules currently loaded by the running kernel. As we can see, there are a few services loaded, such as ip_tables and iptable_filter, as well as many device drivers: input, hid, usbcore, 3c59x, and others. Each loaded module shows its current size and state (though it is beyond the scope of this document to break down each module and its purpose). Modules are loaded via a configuration file or by a service called kudzu.
We can also manually load and unload modules to test their functionality. For example, we can unload the 3c59x network card module by running the following command:
user@host $ rmmod 3c59x
Alternatively, we can load the 3c59x module with the following command:
user@host $ modprobe 3c59x
or with the similar command:
user@host $ insmod 3c59x
Note that when using the insmod command, the output reflects the path from which the module was loaded. Take a look at the following example:
In the example above, we inserted the mouse device module, and it was loaded from the directory /lib/modules/2.4.18-14/kernel/drivers/input. Even though most Linux distributions ship with services that auto-detect devices and load modules, it is important to be able to load and unload modules manually, as well as to recognize the path from which a module loads. Auto-detection services can be very convenient for getting hardware functioning, but kernel modules can sometimes exhibit sporadic behavior, load with incorrect options, or, in some cases, the wrong module can be loaded altogether.
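Before loading or unloading a module it is worth checking whether it is already present. The information lsmod prints comes straight from /proc/modules, so the check can be scripted; the sketch below reuses the 3c59x example from above (whether that module is actually loaded depends entirely on the machine):

```shell
# /proc/modules holds one line per loaded module, beginning with its name;
# grep for the module of interest and report either way:
grep '^3c59x ' /proc/modules || echo "3c59x is not loaded"
```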
Most popular GNU/Linux distributions ship with a hardware-detection service, called kudzu, that loads the appropriate modules. When started, kudzu detects the current hardware and checks it against a database stored in /etc/sysconfig/hwconf. It then determines whether any hardware has been added to or removed from the system. If so, it prompts the user to configure any added hardware and unconfigure any removed hardware, then updates the database in /etc/sysconfig/hwconf, adding or removing entries accordingly. Kudzu can also be invoked manually by simply running the kudzu command. This is especially handy for installing and loading kernel modules without having to reboot the system. When invoked, kudzu prompts the user to enter configuration options for the hardware detected, just as during boot. Once entered, the configuration is stored in /etc/sysconfig/hwconf.
As stated before, kudzu is very convenient for quickly detecting and configuring kernel modules, but it is not without its drawbacks. Sometimes it detects modules incorrectly, or fails to detect them at all. For this reason, it is very useful to know which module is needed and how to load and configure it manually.
For more information on specific modules and device drivers, consult the documentation for your specific hardware or refer to the Linux Kernel Module Developer's Guide at:
You may be thinking, "Why do I need a custom kernel? It works fine as it is." There are three common reasons for a recompile. First, you might have hardware so new that there is no kernel module for the device in your distribution's Linux kernel. Second, you may have come across some kind of functionality issue that is fixed in a revised version of the kernel. Lastly, you might simply want to streamline the size and functionality of your kernel to support only the hardware installed in your system.
There are a couple of ways we can update the kernel, depending on the distribution of Linux. Most common distributions, like Red Hat or SuSE, include a package-management system (such as rpm) which allows us to simply install a precompiled kernel and its sources, and then modify the bootloader to load the new kernel upon booting.
We can also manually download the kernel sources, extract them, and then configure, compile and install them by hand. The advantage of this method is that it allows us to become intimate with the structure and methodology of the Linux kernel. Kernel sources can be downloaded from http://kernel.org via HTTP or FTP. The version of the kernel wanted will depend, of course, on what drivers and/or services are needed; different kernel versions will have different levels of support for various types of hardware.
Let us suppose we are running kernel version 2.4.18. We can verify the version of the running kernel by using the command:
user@host $ uname -a
This command returns the kernel name, hostname, kernel release, build date, and machine architecture. In this case we would see:

Linux elvis 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2004 i686 i686 i386 GNU/Linux
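Individual fields of that line can be queried with uname's single-purpose options, which is handier in scripts than parsing the full uname -a output:

```shell
uname -s   # kernel name, e.g. Linux
uname -n   # hostname
uname -r   # kernel release, e.g. 2.4.18-14
uname -m   # machine architecture, e.g. i686
```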
Suppose we wish to update the kernel to version 2.4.22 for its improved network card support. We would download the kernel tgz file (a compressed archive) from ftp://ftp.kernel.org and place it in the /usr/src directory.
Once placed in the source directory, the contents of this compressed archive must be extracted so that we can configure and compile the kernel. We can do this by running the following command:
user@host $ tar -xzf linux-2.4.22.tar.gz
This will create the source directory /usr/src/linux-2.4.22. Once it is created, we would change to this directory and begin configuration. There are three common ways to configure the Linux kernel: config, menuconfig, and xconfig (these configuration methods are targets passed to the make utility). For the sake of this document, we will use the menuconfig method. We can do this by running the following command:
user@host $ make menuconfig
This builds the programs that run the menu system and then pops up a configuration window. The menu lets you alter many aspects of the kernel configuration.
In the configuration menu, we have the option to compile services and drivers into the kernel, build them as modules, or not even build them at all.
Again, it is beyond the scope of this document to break down and analyze every option for every potential module offered by the kernel. For more information on the descriptions of Linux kernel modules, please visit the following URL:
After you have made any necessary changes to the configuration, choose the 'save configuration' option from the menu and exit the kernel configuration program. Once saved, run the following commands:
user@host $ make dep
This command builds the tree of interdependencies in the kernel sources. These dependencies may have been affected by the options you chose in the configuration menu.
After the dependencies have been built, we need to compile the actual kernel itself by running the command:
user@host $ make bzImage
This command builds the kernel and compresses it into a bootable image file (despite the name, a bzImage is a "big zImage" compressed with gzip, not with bzip2). Some distributions do not support this option, so the alternative command would be:
user@host $ make zImage

This command builds the kernel and compresses it into a smaller gzip image file; zImage has a stricter size limit than bzImage.
After making the kernel, we now need to compile and install any modules we might have configured in the kernel configuration menu program. We would do this by running the following command:
user@host $ make modules && make modules_install
After the modules have been compiled and installed, we would copy the newly compiled kernel to the /boot directory:

user@host $ cp /usr/src/linux/arch/i386/boot/bzImage /boot
This copies the bzImage kernel to the /boot directory; we can then simply edit the bootloader to use the new kernel (follow the steps from the 'Kernel Boot' section at the beginning of this chapter). Always be sure to leave an option to boot from the previous kernel. This allows recovery in case the new custom kernel has problems booting or functioning.
Once the bootloader configuration file has been edited, simply reboot the system (using the reboot command) and choose the newly compiled kernel from the boot menu.
Once the system has been rebooted, we can verify the new kernel's existence by again running the following command:
user@host $ uname -a
The output should show the new kernel version for the system:
Linux elvis 2.4.22 #1 Wed Sep 4 13:56:22 EDT 2004 i686 i686 i386 GNU/Linux
For more information on custom kernel compiling, please consult the documentation for your distribution of GNU/Linux or visit the following URL:
Much like any other operating system, a GNU/Linux system has networking support for many different layer-3 protocols. These protocols can be, and usually are, configured and compiled into the kernel itself. Some of the protocols supported by the kernel include:
Though the kernel can support many network protocols for communicating with many kinds of systems, for the purposes of this document we will focus on the most current and commonly used network protocol, TCP/IP v4.
TCP/IP v4 (referred to here as just TCP/IP, or IP) is often called a network protocol. That is a bit of a misnomer: TCP/IP is actually a suite of protocols and services, each of which plays an important role in connectivity and communication between systems. In fact, TCP/IP has become so robust that it even includes tools to troubleshoot issues that may arise with communication between systems. The collection of protocols and services that make up TCP/IP can be divided into three different functions:
The Network and Transport function provides basic connectivity between systems, covering the addressing, routing and transport of data.
Think of this function as laying the groundwork, or roadmap, between systems on a network.
The Service or Data function provides applications and services with the ports they need to operate.
Though numerous services and utilities have been added over the years (nslookup, traceroute, ping), they usually fall into the category of Service or Data function additions to the TCP/IP suite. Think of this function as the actual vehicles that move over the road laid by the Network and Transport function of TCP/IP.
The Control function provides low-level control of traffic over the network.
These low-level protocols aid in delivering messages about network errors such as congestion and collisions, and thereby also serve as tools for troubleshooting potential problems. Think of the Control function as the traffic signs and lights on the road laid by the Network and Transport function of TCP/IP.
The protocols in the suite are described in RFCs (Requests for Comments), which also explain how they fit into theoretical working models of network protocols. One such model, designed by the International Organization for Standardization, is referred to as the OSI (Open Systems Interconnection) model and is used as a guideline for developers designing protocols for communication between systems. A more complete diagram of the functions of TCP/IP and their relation to the OSI model for networking can be found at the following URL:
One of the most trying tasks on a Linux system can be the proper network configuration of the installed protocol(s). Not only are there many methods and tools for configuring networking, but the slightest misconfiguration can wreak havoc on the system and produce random behavior that can be very difficult to troubleshoot.
Distributions of GNU/Linux such as Red Hat or SuSE ship with GUI tools that aid the user in configuring networking for the system, but understanding how these tools work, and what they are doing, is imperative to the operation of the system. Sometimes distributors change the name or location of their GUI tools, confusing the interface for system administrators; sometimes the tools fail without appearing to exhibit failing behavior.
Configuring TCP/IP for the interface(s) of a Linux system is the process of editing a couple of files and then running a couple of commands to verify the configuration.
Starting with the network interface itself, let us suppose we have a 3Com 590 Ethernet card installed. We can double-check that the driver for the interface is loaded by running the command lsmod and making sure the correct kernel module for the card is present. If it is not loaded, we can insmod the kernel module 3c59x. Once it is loaded, we would bring up the interface with the ifconfig command by issuing the up option for the interface:
user@host $ ifconfig eth0 up
This brings the interface up with no configuration options, but we can verify that the interface is indeed active by running the ifconfig command with no options, which might display the following results:
Here we see the presence of two adapters: eth0 and lo. The eth0 adapter is the first Ethernet interface, bound to the actual driver for the network card, 3c59x. The second adapter, lo, is the loopback adapter for the system.
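The same list of interfaces the kernel knows about can also be read directly from /sys, independent of which configuration tool is installed (the eth0 name will of course vary with the hardware present):

```shell
# Every network interface registered with the kernel appears here;
# the loopback interface lo should always be present:
ls /sys/class/net
```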
Now that we have verified the existence of the interface, let us take it down and reconfigure it with the proper options. Simply running the command ifconfig with the down option will do the trick.
user@host $ ifconfig eth0 down
From here we need to set up the basic IP settings for the interface: the IP address of the host, the subnet mask, the network ID, and whether or not the interface is activated during boot. This information is stored in a file under the directory /etc/sysconfig/network-scripts, whose name is determined by the interface we are configuring. For example, if we are configuring the eth0 interface, the name of the file is ifcfg-eth0. Let's configure this file by opening it in the nano editor:
user@host $ nano /etc/sysconfig/network-scripts/ifcfg-eth0
An example ifcfg-eth0 file might look something like this:
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETWORK=192.168.1.0
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
GATEWAY=192.168.1.1
ONBOOT=yes
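The same file can also be produced from a script, which is convenient when configuring many machines. A sketch follows; the target path is redirected to a temporary file here so the example is safe to run, and the address values are illustrative:

```shell
# Stand-in for /etc/sysconfig/network-scripts/ifcfg-eth0:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
EOF
grep '^IPADDR=' "$cfg"    # prints IPADDR=192.168.1.10
```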
After saving this configuration, we can simply bring the interface back up:
user@host $ ifconfig eth0 up
We now have basic connectivity to the network and can reach any other system on the network by its IP address, but we still need to configure the system to use DNS to resolve names to IP addresses. The file /etc/resolv.conf contains very basic and straightforward information enabling the system to use DNS. Here's an example resolv.conf:
search domain.com
nameserver 192.168.1.2
nameserver 192.168.1.3

The first line tells us the system belongs to the domain domain.com.
The second line tells us the system will use 192.168.1.2 as its primary name server for name resolution.
The last line, which is optional, specifies the secondary name server for the system.
Now the system can resolve other host names on the network. As long as there is a static entry for this computer in DNS, it can resolve itself and other systems can also resolve its name on the network. Note: if the system has the proper IP configuration and still cannot be resolved, there is most likely no entry for it in DNS. Tools like nslookup can test name resolution for a specified host (see the Services section of Chapter 3 for nslookup). It is not recommended to place the IP address and host name of the system in the /etc/hosts file, because this can mask the cause of a resolution issue instead of solving it.
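Name resolution can also be exercised from the shell without extra tools: getent consults the system resolver the same way ordinary programs do. localhost is used below because it resolves on any machine; substitute a real hostname to test DNS proper:

```shell
# Ask the system resolver for an address; a name with no entry anywhere
# (DNS, /etc/hosts) produces no output and a nonzero exit status:
getent hosts localhost
```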
Setting up /etc/sysconfig/network-scripts/ifcfg-eth0 for a dynamically configured host is even simpler. The following illustrates a system configured with a dynamic address in ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
As long as there is a functioning DHCP server on the network handing out the required information, we can simply ifconfig the eth0 interface up, and the system is pretty much up and running. We now have basic connectivity as well as name resolution, for the /etc/resolv.conf file is automatically populated from the DHCP server.
Even though we can resolve other computers on the network, we still cannot resolve the local system we just configured, because there is no hostname for the system in DNS. A quick adjustment to a single file should correct the issue: simply add the following line to the file /etc/dhclient.conf:
send host-name "elvis";
Here "elvis" would be replaced by the desired hostname of the system. This file is simply the configuration for a process running on the system called dhclient. This configuration will populate the hostname into DNS, provided the DHCP server forwards your hostname to a DNS server that allows dynamic updates. Some distributions of GNU/Linux ship with clients other than dhclient; clients such as dhcpcd use different configuration files which require different options. For more information, consult the documentation for your distribution of GNU/Linux.
What would a GNU/Linux system be without the support of many different kinds of network services? The Linux kernel is so versatile, in fact, that its network and service interoperability is almost unparalleled. No matter what Operating Systems exist on the network, Linux can potentially communicate with them over a variety of methods. Here we will look at a few key services and how to configure them. These services include:
NFSd : Sun Microsystems' Network File System server
FTPd : File Transfer Protocol server
SMBd : Microsoft's Server Message Block services
These daemons (when installed) usually create an initialization script. The initialization script allows the services run by the daemon to be started and stopped. These initialization scripts live in the directory /etc/init.d. Let us take a look at an example init.d directory:
There are many services installed and listed here. But just because we see initialization (init) scripts here does not mean that the services are active. Each service needs to be assigned a run level. A run level is the level of functionality in which the operating system is running. The levels range from 0 to 6, and each level has a different degree of functionality. The run levels are specified within the file /etc/inittab. The inittab file is the master file the init program reads to execute its subsequent services. The very first services it executes are those located in the /etc/rc.d directory tree.
The different run levels are defined as:
0 - Halt
1 - Single-user mode
2 - Multi-user, without high-level services
3 - Full multi-user mode
4 - Mostly unused
5 - X11
6 - Reboot
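The run level the system boots into is set by the initdefault line in /etc/inittab. As a sketch of how that field can be pulled out (run against a scratch copy of the file with a typical initdefault line, so it is safe to try anywhere):

```shell
# Extract the default run level from an inittab-style file.
# We use a scratch copy; on a real system the file is /etc/inittab.
f=$(mktemp)
printf 'id:3:initdefault:\n' > "$f"
# Fields in inittab are colon-separated; the run level is field 2.
cut -d: -f2 "$f"    # prints 3
```

On a live system, `grep initdefault /etc/inittab` shows the same line.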
Here we see another collection of directories. Each has a number in its name for the corresponding run level it controls. In each directory we will see a variety of init scripts for services. But as mentioned previously, the files located in each of the /etc/rc.d run-level directories are links to the actual executable files located in /etc/init.d. The links are named somewhat differently. For example, in /etc/rc2.d/ we see a file named S10Network.
Notice the S10 located in front of the word Network. The S stands for start, or start the service. The 10 is the order in which it will start. For example, we will also see S55sshd in the same directory. The sshd service is typically useless without the network started first; therefore, we start the network at 10 and the sshd service at 55.
On the other hand, we will notice files within /etc/rc.d/rc2.d that start with K. For example, /etc/rc.d/rc2.d/K45named (the DNS service) might exist in this directory. K stands for kill instead of start. This keeps the service from starting when the system boots.
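Because init processes these links in lexical order, the two-digit number after the S or K fully determines sequencing. A quick sketch in a throwaway directory (the service names are illustrative):

```shell
# Create empty files named like run-level links and list them; the
# sorted listing mirrors the order in which init would walk them.
d=$(mktemp -d)
touch "$d/S55sshd" "$d/K45named" "$d/S10Network"
ls "$d"
```

The K45named entry sorts first and S10Network comes before S55sshd, which is exactly why the network starts before sshd.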
There are different reasons for having programs start or stop at different run levels. One reason is dependencies: we may need to make sure that one service is started before another. Another is diagnostics: if the machine is acting strangely, it is handy to shut down certain services without having to reboot the entire machine. The classic example is the loss of the root password. If the root password is lost, we can reboot into run level 1 and change it.
After we have located the run-level init script and verified it is set to start upon boot, it is time to actually start the service. We can do this by simply running the init script and passing the start option to it.
user@host $ /etc/init.d/nfs start
We can also stop, and even restart the service by the same method. This is handy when changing configuration options and testing services because we do not have to reboot the system to perform these actions. Now let us take a look at configuring a couple of these services to run on a system.
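Under the hood, each init script is just a shell script that dispatches on its first argument. A minimal skeleton of that pattern (purely illustrative; the real scripts in /etc/init.d do considerably more, and this one is written to a temp file so it can be tried safely):

```shell
# A toy init-style script, invoked with the start argument just as
# /etc/init.d/nfs start would be.
s=$(mktemp)
cat > "$s" <<'EOF'
#!/bin/sh
case "$1" in
  start) echo "starting service" ;;
  stop)  echo "stopping service" ;;
  *)     echo "usage: $0 {start|stop}"; exit 1 ;;
esac
EOF
chmod +x "$s"
"$s" start    # prints "starting service"
```

Passing stop instead of start would run the stop branch, which is all the /etc/init.d interface really is.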
The NFS daemon basically needs only one configuration file and two services to run. It is a fairly quick and straightforward process to get this service running. The first thing we need to do is configure a file called /etc/exports. The exports file sets up directory 'shares' that are available to other systems on the network. Here is an example of an exports file:
/usr/src    *.domain.com(ro,root_squash)
/share      *.domain.com(rw,no_root_squash)
The first line specifies that the /usr/src directory is shared to all systems on domain.com with read-only access; the root_squash option maps remote root users to an unprivileged account.
The second line specifies that the /share directory is shared to all systems on domain.com with read and write access for everyone; no_root_squash lets remote root users keep their root privileges.
Once we have configured the exports file, we would simply start the portmap and nfsd service by running the following commands.
user@host $ /etc/init.d/portmap start
user@host $ /etc/init.d/nfs start
Now our 'shared' directories are available to the network. We can verify their availability by running the following command:
user@host $ showmount -e
The showmount command run with the -e option will display the exported file systems available. Now other systems can mount and use the exported filesystems from across the network.
Setting up an FTP server for basic operation can be just as simple, though a little varied. In the usual Open-Source Community fashion, different distributors ship different FTP services. The most common FTP daemons include wu-ftpd and vsftpd.
Each of these ships with its own configuration (conf) file, usually located in /etc. Though the syntax of the options might differ from one another, the basic configuration options are the same. Let us take a look at an example vsftpd.conf file.
anonymous_enable=YES
#local_enable=YES
#write_enable=YES
# Activate logging of uploads/downloads.
xferlog_enable=YES
# If you want, you can arrange for uploaded anonymous files to be owned
# by a different user. Note! Using "root" for uploaded files is not
# recommended!
#chown_uploads=YES
#chown_username=whoever
# You may override where the log file goes if you like. The default is
# shown below.
#xferlog_file=/var/log/vsftpd.log
# You may change the default value for timing out an idle session.
#idle_session_timeout=600
# You may change the default value for timing out a data connection.
#data_connection_timeout=120
# It is recommended that you define on your system a unique user which the
# ftp server can use as a totally isolated and unprivileged user.
#nopriv_user=ftpsecure
# You may fully customise the login banner string:
#ftpd_banner=Welcome to blah FTP service.
user@host $ /etc/init.d/vsftpd start
Now our FTP service is up and running.
One of the most useful network services available for our Linux system is SMB. SMB, commonly called Samba, allows CIFS (Common Internet File System) connectivity with Windows based computers. Users can 'share' file systems on their Linux systems as well as mount remote 'shares' from other Windows systems (or any system running SMB). The SMB daemon, like any other, is invoked from /etc/init.d and has a configuration file, usually located in /etc. Let us take a look at an example smb.conf file.
[global]
    workgroup = INTEL
    netbios name = ELVIS
    server string = Red Hat Linux
    security = SHARE
    encrypt passwords = No
    unix password sync = Yes
    preferred master = No
    local master = No
    domain master = No

[share]
    path = /usr/local/share
    read only = No
    guest ok = Yes
    browseable = Yes
The options in this file are again fairly straightforward. Once we have configured smb.conf, we can simply start the service in the same manner we have been using:
user@host $ /etc/init.d/smb start
Our Samba server is now up and running and accessible from the network.
User management is one of the main chores of system administration, but unfortunately it can be a very tedious task. Most distributions of course ship with GUI tools that make this task a little less boring, but as usual it is imperative that we understand how these tools work behind the scenes, in case the tools break or we are administering a remote system. User accounts are stored in a specific account database for the system called /etc/passwd. This file contains all the account information for the system. Let us take a look at some example entries from /etc/passwd:

root:x:0:0:root:/root:/bin/bash
daemon:x:2:2:daemon:/sbin:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
Note the second entry in /etc/passwd, the daemon account. This account is for use with service accounts; daemons typically run under this context. Also note the default shell for this account, /sbin/nologin. This keeps the account from being exploited to gain interactive access to the system. Users of a Unix system typically have startup files that are placed in their home directory and parsed when the user first logs in. On a Linux system, the templates for these files are located in /etc/skel. By default this directory is usually empty. If we wanted startup files to be placed in users' home directories upon creation, we would put those startup files in this directory.
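What account-creation tools do with /etc/skel is essentially a recursive copy into the new home directory. The sketch below imitates that with scratch directories standing in for the real /etc/skel and /home:

```shell
# Populate a stand-in skel directory with a startup file, then copy
# its contents (including dotfiles) into a stand-in home directory.
skel=$(mktemp -d)
home=$(mktemp -d)
printf '# per-user startup file\n' > "$skel/.bashrc"
cp -r "$skel/." "$home/"
ls -A "$home"    # shows .bashrc
```

The trailing `/.` on the source path is what makes cp pick up the hidden dotfiles as well.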
To create accounts, the first command to learn is adduser. This command performs exactly the function its name suggests: it simply adds a user to the passwd database.
user@host $ adduser simon
This would add the username 'simon' with a default UID, GID, home directory, and shell to /etc/passwd. At this point the user has no password, and therefore no encrypted password entry in /etc/shadow. The next step would be to create a password for the user with the passwd command.
Once we have set a password for the user, the encrypted password will be added to /etc/shadow. The user can now log in with the newly created account and password.
We can also customize the account further from here by editing the /etc/passwd file and changing some of the options such as the default home directory or default shell. We might even want to give access to the user for specific directories or services.
It seems logical that the next task to learn would be removing user accounts from the system. Some distributions of GNU/Linux ship with the rmuser command utility. We simply run the command with the user account name as an argument:
user@host $ rmuser simon
The last step in removing the account is removing the user's home directory. We can change directories to /home (or wherever we created the home directory for the user) and use the rm command to remove the user's home directory.

user@host $ rm -rf simon

The -rf options remove the directory and everything under it, recursively and without prompting, so this command should be used with care.
Sometimes the task of managing users becomes so overwhelming, it is easier to deal with groups. Managing groups can make the task of assigning rights a bit easier. In Unix based systems, groups are defined by a single file, /etc/group. The /etc/group file contains specific information about the GID numbers assigned in the file /etc/passwd. Let us take a look at a section from a group file of a Linux system:
root:x:0:root bin:x:1:root,bin,daemon daemon:x:2:root,bin,daemon wheel:x:10:root ftp:x:50:
Much like the layout of the passwd file, /etc/group contains entries for the specific group name, an 'x' for the shadowed password, the GID, and finally the members of the group. We can add user accounts to a group simply by appending the account name to the group's line, separating account names with commas.
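That append can be done with any editor, or scripted. The sketch below edits a scratch copy of a group line; the user name simon is hypothetical, and on a real system a dedicated tool such as usermod is the safer route:

```shell
# Add user 'simon' to the wheel group line of a scratch group file.
f=$(mktemp)
printf 'wheel:x:10:root\n' > "$f"
sed -i 's/^wheel:.*/&,simon/' "$f"
cat "$f"    # wheel:x:10:root,simon
```

The `&` in the sed replacement stands for the whole matched line, so the edit simply tacks `,simon` onto the existing member list.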
Once added to the desired group, users can now be easily assigned rights to services, files, and directories.
Every Unix file and directory has a set of permissions associated with it. These permissions tell us everything from the owner of the file to what access is available. These sets of permissions are divided into the mode bits, the UID assignment, and the GID assignment. Let us take a look at some permissions in /etc.
If we break down a single line of what we see, it seems a bit cryptic at first.
drwxr-xr-x root root 4096 Jan 22 2003 skel
We see the mode bits first, the UID owner assignment, the GID owner assignment, the size, the date, and finally the name of the file or directory. Let us take a closer look at what this all means.
The mode bits for a file are three sets of three alphabetic representations of the permissions assigned. The three sets of permissions are for the UID owner, the GID owner, and everyone else. We see these permissions shown as rwx, which means read, write, and execute.
If we analyze the line above, the 'd' means we are looking at a directory; the owner (root) of that directory has read, write, and execute (rwx); the group assigned to that directory (root) has just read and execute (r-x); and everyone else not specifically assigned to the directory also has read and execute (r-x).
These permissions display as:
drwxr-xr-x root root
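We can see exactly this mode string on a live system with ls -ld. A sketch on a scratch directory (so it is safe to run anywhere):

```shell
# Create a directory, set the mode from the example, and read back the
# mode string: the first 10 characters of the long listing.
d=$(mktemp -d)
chmod 755 "$d"
ls -ld "$d" | cut -c1-10    # drwxr-xr-x
```

The leading 'd' is the directory flag, followed by the three rwx triplets discussed above.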
We can change permissions so that the owner, the group, and everyone else have full access (read, write, and execute) to the directory. These permissions would look like this:
drwxrwxrwx root root
We can also change the permissions so that the owner has full access, and everyone else has no access.
drwx------ root root
To change the mode-bit permissions on a file or directory, we use the chmod command. This command, used with the correct flags, changes the mode bits for the owner, group, and everyone else. Using this command is a bit tricky because we have to specify the permissions in octal notation rather than their alphabetic counterpart. Let us take a look at the octal representation of these bits:
 _____________________________
| Octal | Binary | Permissions |
|-------|--------|-------------|
|   0   |  000   |    none     |
|   1   |  001   |    --x      |
|   2   |  010   |    -w-      |
|   3   |  011   |    -wx      |
|   4   |  100   |    r--      |
|   5   |  101   |    r-x      |
|   6   |  110   |    rw-      |
|   7   |  111   |    rwx      |
|_______|________|_____________|
For example, to give the owner and group of /etc/skel read and execute permissions, with no access for everyone else:

user@host $ chmod 550 /etc/skel
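The octal digits map directly onto the three permission sets, and we can verify the result of a chmod with GNU stat's %a format, which prints the octal mode. A safe demonstration on a scratch file:

```shell
# 640 = rw- for the owner, r-- for the group, none for others.
f=$(mktemp)
chmod 640 "$f"
stat -c '%a' "$f"    # prints 640
```

Reading the mode back this way is a quick sanity check after any permission change.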
Let us say we want to change the owner for /etc/skel. We would do this by running the command chown. This command simply changes the UID associated with the directory. We can do this by running the following command:
user@host $ chown simon /etc/skel
The same applies to changing the group assigned to the associated directory. In this case, we run the chgrp command. This command, much like chown, changes the GID for the associated file or directory. Here is an example of how we would use the chgrp command.
user@host $ chgrp wheel /etc/skel
This effectively assigns group ownership to the group called wheel. The result should appear similar to the following:
dr-xr-x--- simon wheel 4096 Jan 22 2003 skel
We now see that the user simon is the owner and has read and execute permissions, there is group access for wheel with read and execute permissions, and everyone else has no permissions. These options can be set for any file and directory in the entire file system. Assigning group access and permissions make it easier for us to manage this file system. We can, essentially, add users to a group, give the group access to a directory, and set what permissions that group has to the directory.
Users must be assigned as owner to change permissions on files and directories. Even users with group access have no effective rights to alter these permissions. The only user that can supersede the rights of ownership is, of course, the root user account.
As stated earlier, most distributions of GNU/Linux ship with GUI tools that allow the administrator to alter file system rights. Knowing the functionality and intricacies behind the interface helps us efficiently manage our UNIX file system.
Hardware in a Unix system is referred to as a device. Devices include every imaginable piece of hardware we can put into our system, such as hard disks, floppy drives, CD-ROM drives, keyboards, and mice. Each of these devices gets an active entry in a directory called /dev. These entries are simply files.
Think of these files as descriptions for the type of device it represents.
On most modern Unix systems, these entries are dynamically created for each piece of hardware found by the kernel if the system is running devfs (the device filesystem). Only the pieces of hardware detected get entries in /dev. This helps keep the /dev directory clean and simple, which makes it easy for us to see and troubleshoot the hardware on the system.
Some distributions do not come with devfs installed and mounted during boot. Without it running, entries for every supported device are usually created in the /dev directory. Let us take a look at an example /dev directory from a Linux system:
As we can see here there are many entries for many different types of devices. We now need to know what entries represent what types of devices.
Devices like keyboards and mice are of course considered input devices. They are usually represented by device entries such as psaux (PS/2 auxiliary device), mouse (generic mouse driver), or keyboard (generic keyboard). Input devices such as PS/2 keyboards and mice are usually compiled into the kernel, and the entries are created under the proper directories in /dev. Most applications support these devices, and there is no real need for any other configuration. Once the system is booted, these devices (unless physically broken) simply work as is.
Other input devices, such as USB keyboards and mice, sometimes require a bit more configuration to work properly. These devices require other modules loaded or compiled into the kernel so they can function. Once these modules have been loaded, the proper device entries are made in the /dev directory. USB input devices need their corresponding host controller modules loaded in order to communicate with the USB bus. The chipset type of the controller determines which USB host controller module is needed. Let us take a look at the three types of USB interfaces supported by the Linux kernel:
USB-UHCI   Support for VIA and Intel-based chipset USB controllers
USB-OHCI   Support for TI, Compaq, and SiS-based chipset controllers
USB-EHCI   Support for generic-based chipsets for USB 2.0 controllers
There is also a module for support of USB input devices. This is called the Human Interface Device module, or HID. The name of this module varies across distributions, but in most commercial distributions it is simply referred to as HID.
We can simply load the module for usbcore, load the module for the desired USB host controller, then load the module for the Human Interface Device. Once loaded we can then load the individual modules for the keyboard and mouse.
MOUSEDEV   This is the USB mouse device module
KEYBDEV    This is the USB keyboard device module
Once loaded, we can configure applications such as XFree86 and our desired window manager (GNOME or KDE) to make use of these devices.
There are other forms of input devices such as barcode scanners, gamepads, and joysticks. These input devices are beyond the scope of this document. For more information regarding the configuration of these devices, consult the documentation of your GNU/Linux distribution.
As we saw earlier, there are key modules that must be loaded for USB input devices to function. Other USB devices also use a couple of these modules. Hardware such as USB hard disks and USB jump drives makes use of the usbcore module and the host controller module in order to communicate with the system. With these modules loaded, most USB drives are automatically detected and device entries are created for them dynamically. This is because these devices use what is known as SCSI emulation over their USB interface.
Most modern Linux systems either come with SCSI emulation compiled into the kernel or have the option to do so. This allows devices on a serial chain to act as SCSI disks present on the system. When we plug these devices into the USB bus, a dynamic device entry is created and a message is logged by the kernel reporting the detection and entry of the device. To view this (and any) kernel message, we simply run the command dmesg.
The dmesg command is used to examine the output of the kernel ring buffer. This buffer tracks events on the system such as boot messages and state changes. Upon plugging a USB device into the system, dmesg might show an entry for the device as follows:
Initializing USB Mass Storage driver...
usb.c: registered new driver usb-storage
scsi1 : SCSI emulation for USB Mass Storage devices
  Vendor: LEXAR SM  Model: READER  Rev: 1.00
  Type:   Direct-Access  ANSI SCSI revision: 02
WARNING: USB Mass Storage data integrity not assured
USB Mass Storage device found at 3
USB Mass Storage support registered.
Attached scsi removable disk sda at scsi1, channel 0, id 0, lun 0
SCSI device sda: 256000 512-byte hdwr sectors (131 MB)
sda: Write Protect is off
 sda: sda1
Most USB and FireWire drives are treated in the same way. We can simply plug the device into the system, check the output of dmesg to find what device entry is assigned to it, then mount it as a drive.
As we discussed in Chapter 1, UNIX does not use drive letters for its file system. Instead it uses a hierarchy starting from the root of the file system and extending down the tree. This file system can be divided among as many drives and partitions as we wish, though there are some advantages for designing this division among certain guidelines.
We saw earlier the basic breakdown of specific directories and their function. The /usr directory contains programs and shared applications for users. The /var directory contains variable output and logs for the system.
Let us suppose that during the setup and configuration of our system we created just one partition for the filesystem. What would happen to the operating system if, for example, excessive amounts of logs filled up /var/log while we were away from the system? Most likely the logs would keep growing and consume the entire space of the file system until the computer simply keeled over. The same idea applies to almost any directory on the system with a significant amount of activity. For this reason, it is very useful to plan out the structure of our filesystem.
Let us imagine we have two hard disks: a 10Gb drive and a 20Gb drive. There are many approaches we can take in organizing the file structure. Let us take a simple and practical approach.
We will divide the first drive into three partitions, a root (/) partition, a var (/var) partition, and a swap partition. We will make the second drive one large user (/usr) partition. The division of the drive might be something like the following:
Device:      Mount-Point:  FileSystem Type:  Mount Options:  Size:
/dev/hda1    /             ext3              defaults        #7gb
/dev/hda2    /var          ext3              defaults        #2gb
/dev/hda3    none          swap              defaults        #1gb
/dev/hdb1    /usr          ext3              defaults        #20gb
This is an example of a file system table for Linux. This table keeps track of the drives detected in a system and assigns 'mount points' to them appropriately. We can specify a variety of different devices and file systems (even virtual file systems) to be mounted here. The file system table for most Unix systems is stored in /etc/fstab. This file is read during boot and by the mount command. We can manually place entries in fstab for devices such as our USB drives, and for file systems such as NFS or SMBFS.
Here are some example entries for such devices and file systems:
Device:          Mount-Point:  FileSystem Type:  Mount Options:  Size:
/dev/sda1        /mnt/usb      vfat              defaults        #256mb
host:/usr/src    /usr/src      nfs               defaults        #4gb
//host/share     /usr/share    smbfs             defaults        #6gb
Here we see an entry for our USB Jump Drive first. Notice the filesystem type is set to vfat. This is because the Jump Drive might have been formatted on a Windows system. Because Linux supports multiple types of file systems, we can mount this as a vfat drive onto our local directory /mnt/usb. The next entry is an example of mounting a remote NFS 'share'. In this case the remote host and share are referred to as host:/usr/src, which replaces the device entry. We specify the filesystem and simply mount it where we choose; in this case we chose the directory /usr/src. The final entry is an example of mounting a Samba share from another host. The syntax for the device entry differs a bit from the NFS entry: here we pass a double slash in front of the host, then specify the share (//host/share). Just like any other file system, we can mount it where we please; in this case we chose /usr/share.
We can also mount and unmount file systems manually. The mount and umount commands allow us to temporarily mount devices, move devices to new mount points, and unmount devices for removal or maintenance on the system.
If an entry already exists in /etc/fstab, we can mount the desired device by just specifying the mount point listed in fstab. Here is an example:
user@host $ mount /mnt/usb
Without an entry in fstab, we need to pass the device entry into our command when attempting to mount the desired device. Here is an example:
user@host $ mount /dev/sda1 /mnt/usb
This uses the device entry and tells us where to mount that device. Remember we can mount the device anywhere in the filesystem we wish.
Sometimes we might wish to take down a file system for routine maintenance such as a file system check. Let us say that plenty of logs are written often to /var, and these logs are regularly checked and removed. After a while the partition becomes fragmented. We need to unmount the partition and run fsck on it to clean up the file system. We would simply:
user@host $ umount /var && fsck /dev/hda2
We can pass multiple commands on one line by separating them with the && operator; the second command runs only if the first succeeds. In this case we unmounted the partition where /var exists, and ran a basic filesystem check on the partition. There are more options for fsck that we will not cover in this document; please refer to the fsck man page for more information.
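The short-circuit behavior of && is easy to verify with harmless commands in place of umount and fsck:

```shell
# Make a scratch directory, then remove it and report success in one
# line; the echo runs only because rmdir succeeded.
d=$(mktemp -d)
rmdir "$d" && echo "removed"    # prints "removed"
```

Had rmdir failed (for example, on a non-empty directory), the echo would have been skipped entirely, which is exactly why && is the right glue between umount and fsck.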
Another task we might need to perform at some point is adding a drive to the system. Depending on where we want to mount this drive in the file system, this is not as difficult a task as we might be led to believe.
At this point let us imagine we are adding another IDE drive, 80Gb, to the system. This device would most likely be detected by the kernel as hdc. We can check dmesg to find out what device has been assigned to the system. After finding the kernel message that states what device ID was given to the new drive, we can fdisk the drive to create a partition on it.
user@host $ fdisk /dev/hdc
This will give us a few options to perform on the drive. In this case we will create a new partition on the drive by using the 'n' option. We will be prompted to choose the partition type, primary or extended. Choosing primary, we will then set a partition ID. The partition ID defines what filesystem will be created on the drive. Once the partition ID is defined, we will use the 'w' option to write out the partition information to the disk.
After writing the information out, we will then create a file system on the disk. There are a couple of tools we can use here. In this case we will use mkfs.ext3, which will format the drive with the ext3 (third extended) filesystem. Once the drive has been formatted, we can mount it. All we need to do is create an entry for the new disk in /etc/fstab:
Device:      Mount-Point:  FileSystem Type:  Mount Options:  Size:
/dev/hdc1    /storage      ext3              defaults        #80gb
In this case, we chose the mount-point /storage. Once the entry is created, simply mount the drive by passing its mount-point to the mount command:
user@host $ mount /storage
Hopefully this document has helped in understanding the basics of a UNIX system, and more specifically the Linux kernel. Armed with these basics, it should be easier to understand the intricacies of individual utilities, tools, and software not covered in this document. It has been the goal of this workshop to gain a better understanding of the logical structure of UNIX in order to better administer, maintain, and troubleshoot a GNU/Linux based system.