Booting and Shutting Down Oracle Solaris 11.1 Systems (Oracle Solaris 11.1 Information Library)
1. Booting and Shutting Down a System (Overview)
What's New in Booting and Shutting Down a System
x86: GRUB 2 Is the Default Boot Loader
x86: Support for 64-Bit UEFI Firmware
Support for Booting From GPT Labeled Disks
Large Disk Installation Support
Support for Creating Boot Partitions Based on Firmware Type With the zpool create Command
SPARC: End of Support for Most sun4u Platforms
Guidelines for Booting a System
Overview of the Oracle Solaris Boot Architecture
Description of the Oracle Solaris Boot Archives
Description of the Boot Process
x86: Differences Between UEFI and BIOS Boot Methods
x86: Creating Boot Partitions That Support Systems With UEFI and BIOS Firmware
Service Management Facility and Booting
Changes in Boot Behavior When Using SMF
2. x86: Administering the GRand Unified Bootloader (Tasks)
3. Shutting Down a System (Tasks)
5. Booting a System From the Network (Tasks)
This section describes the basic boot process on the SPARC and x86 platforms. For more information about boot processes on specific hardware types, including systems that have service processors and systems that have multiple physical domains, see the product documentation for your specific hardware at http://www.oracle.com/technetwork/indexes/documentation/index.html.
The process of loading and executing a stand-alone program is called bootstrapping. Typically, the stand-alone program is the operating system kernel. However, any stand-alone program can be booted instead of the kernel.
On SPARC platforms, the bootstrapping process consists of the following basic phases:
After you turn on a system, the system firmware (PROM) executes a power-on self-test (POST).
After the test has been successfully completed, the firmware attempts to autoboot, if the appropriate flag has been set in the non-volatile storage area that is used by the machine's firmware.
The second-level program is either a file system-specific boot block, when you are booting from a disk, or inetboot or wanboot, when you are booting across the network or using the Automated Installer (AI).
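The autoboot behavior described above is controlled by the `auto-boot?` variable in the PROM's non-volatile storage. As a brief sketch, it can be inspected and set from a running system with the eeprom(1M) command (whether autoboot defaults to true varies by platform):

```shell
# Display the current autoboot setting stored in the PROM's
# non-volatile storage area.
eeprom auto-boot?

# Enable automatic booting after POST completes.
# (Quoting protects the "?" from shell interpretation.)
eeprom "auto-boot?=true"

# The same variable can also be set from the OpenBoot ok prompt:
#   ok setenv auto-boot? true
```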
On x86 based systems, the bootstrapping process consists of two conceptually distinct phases, kernel loading and kernel initialization. Kernel loading is implemented by GRUB by using the firmware on the system board and firmware extensions in ROMs on peripheral boards. The system firmware loads GRUB. The loading mechanism differs, depending on the type of system firmware that is shipped on the system board.
After a PC-compatible system is turned on, the system's firmware executes a power-on self-test (POST), locates and installs firmware extensions from peripheral board ROMs, and then begins the boot process through a firmware-specific mechanism.
For systems with BIOS firmware, the first physical sector of a hard disk (known as the boot sector) is loaded into memory and its code is executed. Disks that are partitioned with the GUID Partition Table (GPT) must have boot sector code that behaves differently, loading code from another location, because the GPT scheme does not reserve the first sector of each partition for boot sector code storage. In the case where GRUB is running on BIOS firmware, that other location is a dedicated partition, which is known as the BIOS Boot Partition. After the GRUB boot sector code loads the rest of GRUB into memory, the boot process continues.
The boot program then loads the next stage, which in the case of Oracle Solaris, is GRUB itself. Booting from the network involves a different process on systems with BIOS firmware. See Chapter 5, Booting a System From the Network (Tasks).
For systems with UEFI-based firmware, the boot process differs significantly. The UEFI firmware searches for the EFI System Partition (ESP) on disks that it has enumerated and then loads and executes UEFI boot programs according to a UEFI-specification-defined process, which results in a UEFI boot application being loaded into memory and executed. On Oracle Solaris, that UEFI boot application is GRUB. The version of GRUB in this release is built to run as a UEFI boot application. The boot process then continues as it does on systems with BIOS firmware.
GRUB 2 is capable of booting systems with both BIOS and UEFI firmware, as well as from GPT-labeled disks. To support booting on both firmware types, GRUB 2 is built for two different platform targets, i386-pc (BIOS) and x86_64-efi (64-bit UEFI 2.1+), and is therefore delivered as two discrete sets of binaries.
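As a quick way to see both platform targets on an installed system, you can list the files delivered by the GRUB 2 package. This is an illustrative sketch; the package name and the exact i386-pc and x86_64-efi path components are assumptions to verify against your image:

```shell
# List paths delivered by the grub2 package and filter for the
# two platform targets (package and path names are assumptions).
pkg contents -o path grub2 | grep -E 'i386-pc|x86_64-efi'
```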
When booting an x86 based system, note the following differences between UEFI-targeted and BIOS-targeted systems:
Command differences – Certain commands that are used by the BIOS boot method are not available on UEFI firmware. Likewise, certain UEFI commands are not available on systems that support the BIOS boot method.
PXE network boot differences – Changes have been made to the DHCP server configuration to support booting systems with UEFI firmware from the network. These changes include support for the new UEFI client architecture identifier value (DHCP option 93).
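As an illustration of the option 93 change, an ISC-style DHCP server configuration can branch on the client architecture identifier to serve a firmware-appropriate boot program. Per RFC 4578, value 0 identifies a BIOS PXE client and value 7 a 64-bit UEFI client. The file names below are hypothetical placeholders, not the actual paths used by your install server:

```
# Fragment of an ISC dhcpd configuration file.
# Option 93 carries the client system architecture type.
option architecture-type code 93 = unsigned integer 16;

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;

  if option architecture-type = 00:07 {
    # 64-bit UEFI client: serve the UEFI boot application (name illustrative)
    filename "boot/grub2/x86_64-efi/core.efi";
  } else {
    # BIOS PXE client: serve the BIOS boot program (name illustrative)
    filename "boot/grub2/pxegrub2";
  }
}
```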
Note - Systems that can be configured to boot by using either UEFI firmware or the BIOS boot method will technically work with Oracle Solaris. GRUB is first installed according to the system firmware type at the time of installation (or image-update). While you can run explicit commands to install GRUB in the boot location that is required by the other firmware type, this method is not supported. Systems with a particular firmware type should not be reconfigured to boot by using an alternate firmware type after installing Oracle Solaris.
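For reference, reinstalling the boot loader for the system's current firmware type (the supported case) is normally done with the bootadm(1M) command. The pool name below is the conventional one, shown as an assumption:

```shell
# Reinstall the system boot loader for the current firmware type
# into the boot location of the root pool.
bootadm install-bootloader

# Verbose variant that names the pool explicitly (rpool is the
# conventional root pool name).
bootadm install-bootloader -v -P rpool
```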
A new -B option has been added to the zpool create command. When a whole disk is passed to the zpool create command, the -B option causes the zpool command to partition the specified device into two partitions: the first is a firmware-specific boot partition, and the second is the ZFS data partition. This option is also used to create the required boot partition when adding or attaching a whole-disk vdev to an existing root pool, if necessary. The conditions under which the bootfs property is allowed have also been modified. Setting the bootfs property to identify the bootable dataset on a pool is allowed if all system and disk-labeling requirements are met on the pool. As part of the labeling requirement, the required boot partition must also be present. For more information, see Managing Your ZFS Root Pool in Oracle Solaris 11.1 Administration: ZFS File Systems.
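The workflow above can be sketched as the following command sequence. Device and dataset names are illustrative assumptions; substitute the disks and boot environment on your system:

```shell
# Create a root pool on a whole disk; -B reserves a firmware-appropriate
# boot partition ahead of the ZFS data partition (device name illustrative).
zpool create -B rpool c2t0d0

# Attach a second whole disk as a mirror; -B again creates the
# required boot partition on the new device.
zpool attach -B rpool c2t0d0 c2t1d0

# Identify the bootable dataset once the labeling requirements are met
# (dataset name illustrative).
zpool set bootfs=rpool/ROOT/solaris rpool
```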