A Detailed Look at the InstallRoot Process


This document explains how machines are installed (but not how they are configured for installation), from switch-on to the first-phase RPM installation; the areas covered are described in the sections that follow.

Normal Boot Procedure

When a machine is powered on (or rebooted), the processor looks at the end of the system memory for the BIOS, and runs it (the BIOS provides the lowest level interface to peripheral devices and controls the first step of the boot process). When run, the BIOS does some low-level checking, including finding out which devices are bootable. Having selected a boot device, the BIOS loads the first physical sector from that device into memory and transfers CPU execution to the start of that memory address.

If the boot device is a local hard disk, the sector loaded is the MBR - Master Boot Record - which then loads the boot sector into memory and passes control of the boot process to it. Up to this point, the boot process is OS and disk-format independent.

The first-stage boot-loader (installed via the MBR) then locates the second-stage boot-loader (normally GRUB when booting under DICE) and loads and runs it; the second-stage boot-loader, in turn, loads and runs the kernel, passing control to it.

The Install Process and the role of InstallRoot

When installing a DICE machine, at least two methods are possible: installation from CD, and network installation via PXE. Each of these methods uses an appropriate version of a boot-loader from the Linux suite of boot-loaders collectively known as SYSLINUX (the relevant versions are ISOLINUX in the case of CD installation, and PXELINUX for network installation via PXE).

Both methods involve an initial BIOS utility to select the boot method, followed by the loading of the relevant type of SYSLINUX boot-loader - which then prompts for installation method or version, and goes on to load the kernel image and hand control over to it.

Once the kernel is loaded and run, it starts the installroot process - which prompts for the type of installation required (source media or OS version), and the machine configuration and installation begins.

During the DICE install phase, GRUB is not used - instead, a SYSLINUX image is loaded (the isolinux boot loader, isolinux.bin, directly from the CD - or the pxelinux boot-loader, pxelinux.0, over the network). The version of isolinux.bin on CD is copied from /usr/lib/syslinux/isolinux.bin (provided by the Redhat RPM syslinux-3.10), and is part of the CD image created by the BuildInstallRoot process.

Install procedure from CD

The BIOS initially treats an install CD like a floppy, and expects to find a bootable floppy image on it. This "virtual boot floppy" is assumed to have CDROM drivers on it, which can somehow access the CD in full - however, this requires more configuration information to be supplied. This is where ISOLINUX helps, as it simplifies booting from a CD because a special floppy image is no longer needed - it can read all the files on an ISO9660 format CD.


ISOLINUX is a boot loader for Linux/i386 that uses the ISO 9660/El Torito boot standard in "no emulation" mode. This avoids the need to create an emulation disk image within a fixed amount of space (when using "floppy emulation" mode), and consequently makes the entire filesystem (contents of CD) available. The isolinux source code describes itself as:
A program to boot Linux kernels off a CD-ROM using the El Torito boot standard in "no emulation" mode, making the entire filesystem available. It is based on the SYSLINUX boot loader for MS-DOS floppies.


  1. system powers on, and BIOS is run
  2. F12 interrupts boot process, and BIOS "Boot Device Menu" is displayed
  3. "CD-ROM" selected from boot options
  4. isolinux boot-loader, isolinux.bin, is loaded from CD
    (it displays the header "ISOLINUX <version> <date>...")
  5. isolinux boot-loader runs (behaviour dictated by isolinux.cfg) and presents LCFG boot options
    (displaying the header "LCFG (ng)...", and options contained in boot.msg)
  6. "cd" selected from boot options
  7. boot loader loads the compressed kernel image (/boot/vmlinuz)
  8. boot loader unpacks and runs the kernel
    (displaying the message "Uncompressing Linux... Ok, booting the kernel")
Note that access to isolinux.bin on the CD is configured at CD-creation time... the mkisofs command is given appropriate arguments, which include:
  -b	isolinux/isolinux.bin
  -c	isolinux/boot.cat
the "-b" option is the one that specifies the boot location for an "El Torito" bootable CD, and the "-c" option specifies the boot catalog (list of bootable images).
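As an illustration, a mkisofs invocation of roughly the following shape would produce an El Torito "no emulation" bootable image. This is a sketch only: the output name and the cd-root/ directory are assumptions, not taken from the BuildInstallRoot process, and the command is built as a string and echoed rather than run (creating a real image needs a populated directory tree).

```shell
# Hypothetical mkisofs invocation for an El Torito "no emulation" CD.
cmd="mkisofs -o installroot.iso \
  -b isolinux/isolinux.bin \
  -c isolinux/boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -R -J cd-root/"
echo "$cmd"
# -no-emul-boot     : El Torito "no emulation" mode (required by isolinux)
# -boot-load-size 4 : load 4 512-byte sectors (the standard isolinux value)
# -boot-info-table  : patch CD layout information into isolinux.bin
```

The "-no-emul-boot", "-boot-load-size" and "-boot-info-table" options are what put the image into the "no emulation" mode that ISOLINUX relies on, as described above.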

Initial install process

During the install process, the system starts to boot normally - but the BIOS is interrupted (via F12 on DELL machines) and the BIOS "Boot Device Menu" is displayed. The "IDE CD-ROM" option is then selected (although it could equally well be a SATA or SCSI device), which causes the ISOLINUX boot loader to be loaded into memory from CD. Note that, if the boot sequence is set to use the CD first, no manual intervention is required and ISOLINUX boot-loader loads directly from the install CD. The isolinux boot loader is the only loader used in this situation - it loads all of the boot image from CD.

The BIOS, by means of interrupts (INT 13, INT 19), accesses the "Booting Catalog" in the CD-ROM header and verifies the existence of a boot image on the CD ROM. If verified, it reads the Initial/Default Entry and boots from the disk image specified in this entry which points to isolinux.bin (details of how this happens need not concern us here!). The default behaviour when booting from a CD is to redirect boot floppy access requests to the CD - but the use of ISOLINUX makes this unnecessary.

Once the isolinux boot-loader has loaded, it parses the config file /isolinux/isolinux.cfg (supplied on CD, and created as part of the BuildInstallRoot process) to find out what install options are available, and how to implement them (as already mentioned, ISOLINUX can read all the files on an ISO9660 format CD).

serial 0 9600
default cd
prompt 1
timeout 600
display boot.msg

label cd
  kernel vmlinuz
  append root=/dev/hdc sushell=/etc/rc_install ramdisk_size=8192
Given the above configuration file, the boot options presented are those listed in the local version of /isolinux/boot.msg (retained after install as /var/lcfg/conf/installroot/boot.msg):
Type one of the following :-

    <CR>         boot off CD
    cd           boot off CD
    serial       boot off CD (serial console)
    floppy       boot off CD (IP configuration via floppy)
    rescue       boot off CD (drops to shell)
    disk         boot off hard disk
    hd[a-c]      boot off hd[a-c]
The "boot:" prompt is then displayed ("prompt 1") for 60 seconds ("timeout 600" - the timeout value is in tenths of a second). If nothing is typed, vmlinuz is booted from the CD ("default cd"), setting the root device (root=/dev/hdc, the kernel's path to an IDE CD drive - note that this is hardware-specific, and could equally be /dev/hda or /dev/sr0) and the size of the RAM disk (ramdisk_size=8192, since the size must be specified if greater than 4Mb uncompressed).

There is currently no option for booting from a USB device (memory stick); this is an omission, and there should be an entry in isolinux.cfg.
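Such an entry might look like the following - a sketch only, following the pattern of the existing "cd" stanza: the label name is invented, and the root device for a USB stick is an assumption that would depend on the hardware and kernel in use:

```
label usb
  kernel vmlinuz
  append root=/dev/sda1 sushell=/etc/rc_install ramdisk_size=8192
```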

The boot-loader (isolinux) then loads the (customised) compressed kernel image (/isolinux/vmlinuz) into memory (retained as /boot/vmlinuz after install). The customised kernel is generated by the Managed Platform Unit.

NOTE: the ramdisk image (/boot/initrd-<version-build-OS>.img) is not needed for CD install, but is used via PXE (to load drivers necessary to boot the system).

Once the kernel is loaded into memory, the boot-loader hands control of the boot process to it:

Uncompressing Linux... Ok. Booting the kernel
which then initializes and configures the memory and various hardware attached to the system.

Install procedure via PXE

PXELINUX is a SYSLINUX derivative, used to boot Linux off a network server, using a network ROM conforming to the Intel PXE (Preboot Execution Environment) specification. PXE is a network boot method: the client uses a bootp request to obtain an IP address and other network information, plus a boot-loader program. To provide this, either a bootp server, or a DHCP server plus an associated TFTP server, can be used (in the latter case PXE can be thought of as a DHCP extension). Because PXE uses bootp, a TFTP server needs to run on the same machine as the DHCP server.

If the boot fails, PXELINUX (unlike SYSLINUX) will not wait forever - if it has not received any input for approximately five minutes after displaying an error message, it will reset the machine. This allows an unattended machine to recover if it tried to boot from an unresponsive server.


  1. system powers on, and BIOS is run
  2. F12 interrupts boot process, and BIOS "Boot Device Menu" is displayed
  3. "Integrated NIC" selected from boot options
  4. pxelinux boot-loader, pxelinux.0, is loaded from TFTP server ("PXELINUX <version> <date>")...
  5. pxelinux boot-loader runs (behaviour dictated by pxelinux.cfg/81D7) and displays LCFG boot options offered via lcfgbootblurb.
  6. "<OS version>" selected from boot options
  7. boot loader loads the compressed kernel (/boot/vmlinuz) and initrd.gz images.
  8. boot loader unpacks kernel and ram disk images, and runs the kernel ("Uncompressing Linux... Ok, booting the kernel")

Initial install process

When installing over the network, the normal boot process is interrupted (via F12 on DELL machines) and the BIOS "Boot Device Menu" is displayed. The "Integrated NIC" option is selected, which causes the PXELINUX boot loader to be loaded into memory ("PXELINUX <version> <date>" banner displayed). Note that, if the boot sequence is set to attempt a network boot first, no manual intervention is required and PXELINUX boot-loader loads directly from the network.

The PXELINUX boot-loader /tftpboot/pxelinux.0 is downloaded via TFTP from DHCP/TFTP server (the ".0" extension is recognised by PXELINUX as a PXE bootstrap program file).

Behaviour of this boot-loader is controlled by associated configuration information, which pxelinux.0 looks for in a filename determined by the IP address of the client it is running on. The current default is to use /tftpboot/pxelinux.cfg/81D7 (for anything on 129.215):

default fc5
timeout 100
prompt 1
display lcfgbootblurb
allowoptions 1

label fc5
  kernel /kernel-pxe-install-fc5/vmlinuz
  append initrd=/kernel-pxe-install-fc5/initrd.gz rw NFS_DIR=/export/linux/installroot/fc5
Given the above configuration file, the boot options presented are those listed in /tftpboot/lcfgbootblurb:
    fc3          boot to the fc3 install system
    fc5          boot to the fc5 install system
    fc6          boot to the fc6 install system
    fc664        boot to the fc6 (x86_64) install system
    fc3serial    boot to the fc3 install system with serial console
    fc5serial    boot to the fc5 install system with serial console
    fc6serial    boot to the fc6 install system with serial console
    fc664serial  boot to the fc6 (x86_64) install system with serial console
The "boot:" prompt is then displayed ("prompt 1") for 10 seconds ("timeout 100" - the timeout value is in tenths of a second).
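The way PXELINUX chooses its configuration-file name can be sketched as follows: each octet of the client IP address is written as two upper-case hex digits and, if no file of that name exists, one trailing digit is dropped at a time before finally falling back to "default". The address used here is illustrative.

```shell
# Derive the PXELINUX config-file name for a client IP address.
ip=129.215.1.2
hex=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
echo "first try:  pxelinux.cfg/$hex"
# Fallback search: drop one trailing hex digit at a time.
s=$hex
while [ "${#s}" -gt 1 ]; do
  s=${s%?}
  echo "then tries: pxelinux.cfg/$s"
done
echo "finally:    pxelinux.cfg/default"
```

A file named 81D7 in /tftpboot/pxelinux.cfg/ is therefore found by every client whose address begins 129.215, which matches the "for anything on 129.215" behaviour described above.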

If the default is taken, or "fc5" is selected, the boot-loader loads /kernel-pxe-install-fc5/vmlinuz and /kernel-pxe-install-fc5/initrd.gz (which contains required drivers, especially all the network drivers), also setting the NFS root directory, NFS_DIR=/export/linux/installroot/fc5.

Once the kernel is loaded into memory, the boot-loader hands control of the boot process to it:

Uncompressing Linux... Ok. Booting the kernel
which then initializes and configures the memory and various hardware attached to the system.

Initial Kernel Run

Once the kernel has been loaded, whether from CD or via the network, the subsequent behaviour is the same.

When the kernel is loaded, it immediately initializes and configures the memory and the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory.

At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with it. In order to set up the user environment, the kernel executes the /sbin/init program.

The /sbin/init program then co-ordinates the rest of the boot process (via /etc/inittab) and configures the environment. [RHL9]

Once the kernel is running and has access to a filesystem, the init process runs and the /etc/inittab file is consulted - which (in the install case) contains an instruction to start a single-user shell and wait until it exits.

When sulogin runs, it looks for the $sushell (or $SUSHELL) environment variable to determine what shell to start, and then starts it. If using ISOLINUX, this variable was passed into the kernel environment by the isolinux boot-loader using a setting from the isolinux.cfg file: "sushell=/etc/rc_install".
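The mechanism can be sketched as follows: a sushell= parameter on the kernel command line is not recognised by the kernel itself, so it is passed to init in its environment, and from there reaches sulogin. This sketch simply parses the value out of the append line shown in isolinux.cfg:

```shell
# The append line from isolinux.cfg, as it would appear in /proc/cmdline.
cmdline="root=/dev/hdc sushell=/etc/rc_install ramdisk_size=8192"
for word in $cmdline; do
  case $word in
    sushell=*) sushell=${word#sushell=} ;;   # extract the value
  esac
done
echo "sulogin would start: $sushell"
```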

However, there does not appear to be a similar setting in the corresponding pxelinux.cfg/81D7 file, so how is this set when using PXELINUX?

The sulogin program is run first, rather than /etc/rc_install directly, so that job control is available (and ^C!). The root account is given a null password so that the user isn't prompted for a password by sulogin (no password is requested if /etc/passwd doesn't exist).

Once this is done, it hands over to the installroot (effectively the rc_install script), issuing a message to this effect:

INIT: version 2.86 booting
(last thing reported by /sbin/init - at this point filesystems are mounted read-only) and:
----------  University of Edinburgh LCFG installroot  ----------
Version:              0.99.35
Date:                 17/02/06 15:40
(first thing reported by /etc/rc_install).

The InstallRoot Process

The InstallRoot process consists of two distinct phases: initialisation (running the rc_install script) and configuration (running the 'install' component and associated 'install' methods, called from the rc_install script).

The rc_install Script

This script currently consists of 28 subroutines, plus a few additional commands.

After issuing a banner giving version and date for the current installroot (as mentioned above), some initial configuration is carried out, and the above subroutines are called as required:

  1. The installroot build date is displayed (contained in /build.timestamp at install time).

  2. LoadLocalCFG() is called to set environment variables such as locale and timezone, which are set from /etc/installparams.default (and over-ridden by /etc/installparams, if present).

  3. Output device is then determined (for example, /dev/console or /dev/tty2)

  4. MakeDevRW() is called to create a writeable filesystem in a RAM disk, which is mounted under /dev. Its contents are then copied (via rsync) from the read-only location (the boot image) to the writeable directory.

  5. /proc & /sys are also created, just after /dev, to allow other configuration tools to communicate with the kernel (for example, the dynamic device management utility, udev).

  6. InitLoopback() is called to configure the loopback interface (via ifconfig) and to make it available (via "route add").

  7. The other writeable filesystem locations (/var, and /etc) are created by calling MakeVarRW() & MakeEtcRW(), populated (using rsync), and then remounted (the remainder of the filesystem is still mounted RO from CD, or over the network).

  8. The kernel keymap for the console is loaded (via loadkeys) by LoadKeys(). This uses $KEYS_LOCALE, which was loaded from /etc/installparams.

  9. Access methods for various services are set up by NssSwitch(), via nsswitch.conf. Other than hosts (which has the option of using DNS), all services are set to use local files.

  10. SetTZ() is called, and timezone is set using $DEFAULT_TZ from /etc/installparams (loaded earlier).

  11. Explicit kernel module dependencies are set up by DepMod(), which creates /etc/modules.conf (using entries supplied in the rc_install script).

  12. Once the module dependencies have been analysed, various modules are loaded into the kernel by ModProbeClass(USB), and then ProbeforSCSIDisk(), ProbeforIDEDisk(), & ProbeforRAIDDisk().

  13. ProbeforEther() then tests to see if a configured ethernet network card is present (as determined by ifconfig), and if not, the relevant kernel modules (as determined by kmodule or kudzu) are loaded by ProbeforEtherModule() and a configured ethernet network interface is again tested for. If not found, ProbeforEtherPCMCIA() is called to test for a PCMCIA card.

  14. If $LCFG_RESCUE has been defined (selecting the "rescue" option from the boot menu does this), the system drops to a shell. Note that this is done before the install methods are called.

  15. If the network configuration above failed (no ethernet card found), the install process finishes and the system drops to a shell.
Assuming that all the above actions have been successful, the installation process now proceeds to the next phase of initialisation.
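Condensed, this first phase amounts to the following call sequence. This is a sketch of the ordering only: the subroutine names are those given in the steps above, stubbed out here so the sequence can be printed; the real script interleaves them with the additional commands mentioned.

```shell
# Print the documented rc_install first-phase subroutine call order.
for step in LoadLocalCFG MakeDevRW InitLoopback MakeVarRW MakeEtcRW \
            LoadKeys NssSwitch SetTZ DepMod ProbeforSCSIDisk \
            ProbeforIDEDisk ProbeforRAIDDisk ProbeforEther; do
  echo "rc_install: calling $step()"
done
```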
  1. The currently configured hostname is displayed

  2. If CD booting "with IP configuration via floppy" was requested at boot time (which sets $LCFG_FLOPPY), then LoadFloppyIP() is called to check for IP configuration information supplied in the file ip.cfg on the floppy.

  3. Otherwise, if $LCFG_USBDISK is defined, LoadUSBDiskIP() is called to start the D-BUS (system-wide message bus) and HAL (Hardware Abstraction Layer) daemons and to check whether /media/usbdisk appears; it also sources ip.cfg (if present on the USB disk) to load network values.

  4. If IP configuration information is not already available (for example, if $DHCP_IPADDR has not been given a value from user-supplied ip.cfg file on floppy or USB device) IP information is requested from a local DHCP server (via /sbin/dhclient, which broadcasts a DHCP request on all local networks and saves the response), having configured each network interface "up" if required (although no hostname is set).

    The server-based configuration information is held in /etc/dhcpd.conf. As well as the host name and IP address, this file contains information such as the network-mounted root filesystem, and location of host profiles.

    The response from the DHCP server is saved in /tmp/dhcp_response; this location is determined via /etc/dhclient.conf, which explicitly calls /sbin/dhclient-lcfg-script (the default would be /sbin/dhclient-script), and that script creates the above file based on the DHCP response.

    The /sbin/dhclient-lcfg-script expects instantiated variables, which it uses to set LCFG DHCP variables in the response file; /sbin/dhclient exports these to the environment before calling the script. If this fails, the system drops to a shell; otherwise the saved DHCP response is sourced, which sets various DHCP variables, including $DHCP_IPADDR, $DHCP_LCFGURL & $DHCP_INTERFACE.

  5. Add IP, netmask, and broadcast address to $DHCP_INTERFACE (from dhclient output).
    Add route to local subnet.
    Add default gateway.

  6. Next, ConfigureResolver() is called to configure the resolver by creating /etc/resolv.conf and /etc/hesiod.conf, constructing the entries using DHCP variables.

  7. Then SetHostname() is called to set hostname (via /bin/ipcalc and IP address, which sets $HOSTNAME), dropping to shell if this fails.

  8. The /etc/hosts file is generated by CreateEtcHosts(), containing localhost entry and one for $DHCP_IPADDR.
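Steps 4-8 can be condensed into a hedged sketch. All addresses and names below are illustrative (the name-server address in particular is invented), files are written to the current directory rather than /tmp and /etc, and the real scripts' variable sets and file formats may differ.

```shell
# Simulate the saved DHCP response, then configure from it.
cat > ./dhcp_response <<'EOF'
DHCP_IPADDR=129.215.64.1
DHCP_INTERFACE=eth0
DHCP_DOMAINSERVERS="129.215.0.1"
EOF
. ./dhcp_response                 # real file: /tmp/dhcp_response

# Step 5: interface and routing (commands shown, not executed - root only)
echo "ifconfig $DHCP_INTERFACE $DHCP_IPADDR netmask 255.255.255.0 up"
echo "route add default gw 129.215.64.254"

# Step 6: ConfigureResolver() builds resolv.conf from DHCP variables
{
  echo "search inf.ed.ac.uk"
  for ns in $DHCP_DOMAINSERVERS; do echo "nameserver $ns"; done
} > ./resolv.conf                 # real file: /etc/resolv.conf

# Steps 7-8: SetHostname() (via ipcalc) then CreateEtcHosts()
HOSTNAME=example.inf.ed.ac.uk     # ipcalc would derive this from the IP
{
  echo "127.0.0.1   localhost.localdomain localhost"
  echo "$DHCP_IPADDR   $HOSTNAME ${HOSTNAME%%.*}"
} > ./hosts                       # real file: /etc/hosts
cat ./resolv.conf ./hosts
```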
At this point, the machine's profile can be downloaded (from http://lcfghost.inf.ed.ac.uk/profiles/inf.ed.ac.uk/<host>/XML/profile.xml), and installation of software can begin. The web location of the profiles is equivalent to the filesystem location /var/lcfg/conf/server/web/profiles on the LCFG server. (Note that XML profiles do not exist for non-DICE machines - for example, self-managed machines - even though they may be in the inf.ed.ac.uk domain.) A machine's profile is only downloaded once, at this point (by the /etc/rc_install script, before the install component is called):
  1. DownloadProfile() is called and, if $DHCP_LCFGURL has been defined (usually http://lcfghost.inf.ed.ac.uk/profiles) and validated by ValidateURL(), it uses "/usr/lib/lcfg/components/client install $DHCP_LCFGURL /" to download the XML to the /var/lcfg/conf/profile/xml directory (note that no hostname is specified - this is determined automagically by lcfghost)
    How is the hostname determined?

  2. Prompt for "(I)nstall, (D)ebug, (S)hell, (P)atchup, (R)eboot".
    (The "Debug" option can be used to single-step through the install process.)

  3. Assuming "Install" is chosen, the install component (/usr/lib/lcfg/components/install) is run - which calls each component with the "install" method. Components (other than "install") available at installroot time are:
    • auth
    • client
    • dns
    • file
    • fstab
    • grub
    • hardware
    • init
    • kernel
    • logserver
    • network
    • ngeneric
    • nsswitch
    • rpmcache
    • updaterpms
    (the above list is not in execution order).

  4. DoReboot() is called to reboot on completion.

The Install Component

Each of the components provided at install time (as listed above) performs some initialisation tasks when first run. When the install component (which only exists at install-time) is run with the 'install' method (which is almost the last thing the rc_install script does), the following actions are taken:

Note that the install resources (including install.installmethods) are defined in install.h, which is part of the lcfg/core/include/lcfg/defaults structure.
How does this get pulled in? Presumably it's #included in another header file somewhere? (It's certainly #included from lcfg/core/include/lcfg/defaults.h.)

- once the install methods have been identified (as defined by the install.installmethods resource, which is used to set $LCFG_install_installmethods via "qxprof -e"):

% qxprof install.installmethods
- each is called in turn (in the order listed in the install.installmethods resource). For each install method, there is a corresponding environment variable ($LCFG_install_imethod_<method>) and resource (install.imethod_<method>) holding the command to execute. Actions to execute can be of three types, for example:

An implicit command:

An explicit command:

A component call:
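The dispatch just described can be sketched as follows. The method names and the resource values assigned below are invented purely for illustration (real values come from install.h, exported via "qxprof -e"), although the two commands shown do appear among the install methods later in this document.

```shell
# Each name in $LCFG_install_installmethods selects a corresponding
# $LCFG_install_imethod_<name> variable holding the command to run.
LCFG_install_installmethods="mkfstab logserver"
LCFG_install_imethod_mkfstab="%oneshot% touch /etc/fstab /etc/mtab"
LCFG_install_imethod_logserver="logserver start"
for m in $LCFG_install_installmethods; do
  eval "cmd=\$LCFG_install_imethod_$m"
  echo "install method '$m' would run: $cmd"
done
```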

The Install Methods

Each install method is described below, in the order in which they are called. Note that not all are methods as defined in a component - some are one-off constructed commands. Where an install method is defined in a component, or calls a method defined in a component (both marked "*"), the major actions of that method are described (an overview is given). For a detailed description, refer to the source code for that component.


Expands to "%oneshot% setctx install=true". Context values are stored in the .context file under /var/lcfg/conf/profile/context. On a fully running machine the lcfg-client daemon (rdxprof) would notice that the context had changed when setctx was called, but in the installroot the daemon isn't running, so the client component has to be called explicitly to change to the new context.


Expands to "%oneshot% "echo lcfg:x:980:980:LCFG user:/tmp:/bin/false >> /etc/passwd"", which:


Expands to "%oneshot% "echo lcfg:x:980: >> /etc/group"", which:

loadctx *

Expands to "client install none:". What, exactly, does this do? Nothing extra is added to the .context file in /var/lcfg/conf/profile/context.


Expands to "%gettime% ntpdate ntp0.inf.ed.ac.uk ntp1.inf.ed.ac.uk ntp2.inf.ed.ac.uk", which:


Expands to "%oneshot% touch /etc/fstab /etc/mtab", which:

logserver *

Expands to "logserver start", which:

fstab_disks *

Expands to "fstab preparedisks /root", which - for each disk in fstab.disks resource:


Expands to "%oneshot% mkdir -p /root/var/lib/rpm", which:


Expands to "%oneshot% mkdir -p /root/var/lcfg/conf", which:


Expands to "%oneshot% mkdir /root/dev", which:


Expands to "%oneshot% mkdir -p /root/var/tmp", which:


Expands to "%oneshot% mkdir /root/etc", which:


Expands to "%oneshot% touch /root/etc/fstab /root/etc/mtab", which:

nsswitch *

Expands to "nsswitch install /root", which:

auth *

Expands to "auth install /root", which:

fstab *

Expands to "fstab install /root", which:

hardware *

Expands to "hardware install /root", which:


Expands to "%oneshot% /usr/sbin/rdxprof -u $DHCP_LCFGURL diceinstallbase-fc5-develop", which: This creates:

- and also appropriate versions of diceinstallbase-fc5-develop in the dbm, rpmcfg, and xml directories.


Expands to "%oneshot% rsync -a /dev/. /root/dev/.", which:


Expands to "%oneshot% mkdir -p /var/lib/rpm", which:

updaterpms *

Expands to "updaterpms install /root". This installs the first-phase set of 300+ RPMs, and then reboots (although not in debug mode).
Is this why LCFG kernel fails with "Can't find kernel package" at this point in debug mode?

network *

Expands to "network install /root", which:

dns *

Expands to "dns install /root $DHCP_DOMAINSERVERS" (where $DHCP_DOMAINSERVERS is normally ...), which:

grub *

Expands to "grub install /root" which:

kernel *

Expands to "kernel install /root", which:


Expands to "%oneshot% setctx install=" which:


Expands to "%oneshot% setctx installbase=true", which:

clienttarget *

Expands to "client install file: /root", which is used at install time to get initial profile, and:


Expands to "init install /root", which:


Expands to "%settz% /root", which:


Expands to "%setclock%", which:


Expands to "%configclock% /root", which:


Expands to "%oneshot% mkdir -p /root/var/lcfg/installlog", which:


Expands to "%oneshot% rsync -a /var/lcfg/log/. /root/var/lcfg/installlog/.", which:


Expands to "%oneshot% umount -al", which:


Expands to "%oneshot% echo End of install", which:

And Finally...

Once the install component and relevant methods have been run, the last thing that the rc_install script does is to call DoReboot(), which kills all processes, unmounts filesystems, and calls /sbin/reboot. The system reboots, installing the rest of the 2500+ RPMs on the way back up.

Who Looks After What

The following list identifies the main files, packages, and RPMs mentioned above, and shows their relationship to the InstallRoot process, their status, and how they are maintained.

Responsibility for files & RPMs

isolinux.cfg
    Source:         lcfg-buildinstallroot-0.99.23 (configuration file for booting CD)
    Location:       Install CD
    Maintained by:  MP Unit

isolinux.bin
    Source:         syslinux-3.10 (bootloader executable)
    Location:       Install CD
    Maintained by:  Red Hat (Standard Distribution)

boot.msg
    Source:         lcfg-buildinstallroot-0.99.23 (display options)
    Location:       Install CD
    Maintained by:  MP Unit

vmlinuz
    Source:         /usr/sbin/buildinstallroot
    Location:       Install CD
    Maintained by:  MP Unit

pxelinux.0
    Source:         syslinux-3.10
    Location:       DHCP server - /usr/lib/syslinux/, link from /tftpboot/
    Maintained by:  Red Hat (Standard Distribution)

pxelinux.cfg/
    Source:         pxelinux component
    Location:       TFTP/DHCP server - /tftpboot/
    Maintained by:  MP Unit

initrd.img
    Source:         kernel-smp-2.6.18-1.2257_FC5_dice_1.2
    Location:       Install CD
    Maintained by:  MP Unit

dhcpd.conf
    Source:         dhcp-3.0.3-28 (updates via dhcpd component)
    Location:       DHCP server - /etc/
    Maintained by:  MP Unit

(file name missing)
    Location:       Install CD
    Maintained by:  MP Unit

(file name missing)
    Location:       Install CD
    Maintained by:  MP Unit

(file name missing)
    Maintained by:  MP Unit

Notes & References

[RHL9] This is for RH9, and is from the Red Hat Linux Reference Guide.