x86: Setting Up Disks for ZFS File Systems (Task Map)

The following task map identifies the procedures for setting up a ZFS root pool disk for a ZFS root file system on an x86 based system:

- x86: How to Set Up a Disk for a ZFS Root File System
- How to Recreate the ZFS Root Pool (EFI (GPT))
- x86: How to Create a Disk Slice for a ZFS Root File System (VTOC)
- x86: How to Replace a ZFS Root Pool Disk (EFI (GPT))
- x86: How to Replace a ZFS Root Pool Disk (VTOC)
Although the procedures that describe how to set up a disk and create an fdisk partition can be used with ZFS file systems, a ZFS file system is not directly mapped to a disk or a disk slice. You must create a ZFS storage pool before creating a ZFS file system. For more information, see Oracle Solaris 11.1 Administration: ZFS File Systems.
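For example, here is a minimal sketch of that two-step model, using a hypothetical pool name tank and disk c1t0d0. The pool is created first, and file systems are then created within it:

# zpool create tank c1t0d0
# zfs create tank/data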
x86: Setting Up Disks for ZFS File Systems

The root pool contains the root file system that is used to boot the Oracle Solaris OS. If a root pool disk becomes damaged and the root pool is not mirrored, the system might not boot.
If a root pool disk becomes damaged, you have two ways to recover:

- You can reinstall the entire Oracle Solaris OS.
- You can replace the root pool disk and restore your file systems from snapshots or from a backup medium.

You can reduce system downtime due to hardware failures by creating a redundant root pool. The only supported redundant root pool configuration is a mirrored root pool.
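For example, assuming the default root pool name rpool with an existing disk c1t0d0 and a hypothetical second disk c2t0d0, you can convert a single-disk root pool into a mirrored root pool by attaching the second disk and then verifying the result. This is a sketch; on a VTOC-labeled root pool you attach the s0 slice instead, as shown in the procedures later in this section:

# zpool attach rpool c1t0d0 c2t0d0
# zpool status rpool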
A disk that is used in a non-root pool usually contains space for user or data files. You can attach or add another disk to a root pool or a non-root pool for more disk space.

Alternatively, you can replace a damaged disk in a pool in the following ways (see the sketch after this list):

- A disk can be replaced in a non-redundant pool if all the devices are currently ONLINE.
- A disk can be replaced in a redundant pool if enough redundancy exists among the other devices.
- In a mirrored root pool, you can replace a disk directly, or you can attach a replacement disk and then detach the failed disk. Attaching a larger disk and then detaching the smaller disk also increases the pool's size.
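As a sketch of the replacement alternatives, assuming a hypothetical mirrored pool tank with a failed disk c1t1d0 and a spare disk c2t3d0:

# zpool replace tank c1t1d0 c2t3d0
# zpool replace tank c1t1d0

The first form replaces the failed disk with a different disk; the second form is used when the replacement disk occupies the same slot as the failed disk.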
In general, setting up a disk depends on the system hardware, so review your hardware documentation when adding or replacing a disk on your system. If you are adding a disk to an existing controller, inserting the disk into an empty slot might be sufficient, provided the system supports hot-plugging. If you need to configure a new controller, see Dynamic Reconfiguration and Hot-Plugging.
Refer to your hardware installation guide for information on replacing a disk.
x86: Preparing a Disk for a ZFS Root File System

Review the following root pool disk requirements:
- In most cases, Oracle Solaris 11.1 installs an EFI (GPT) label on the root pool disk or disks. The SMI (VTOC) label is still available and supported. Follow the procedures in this section that match the disk's EFI (GPT) or SMI (VTOC) label.
- The root pool must be a single disk or a mirrored configuration. Neither a striped multi-disk configuration nor a RAID-Z configuration is supported for the root pool.
- All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system.
- All Oracle Solaris OS components must reside in the root pool, with the exception of the swap and dump devices.
- On x86 systems with a root pool disk that is labeled with EFI, the correct boot partitions are created automatically in most cases.
- Sharing a disk among different operating systems, or with a different ZFS storage pool or storage pool components, by using separate slices on that disk is not recommended.
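As a quick check of an existing system against these requirements, you can display the root pool configuration (a sketch, assuming the default pool name rpool). The output shows whether the pool is a single disk or a mirror and whether all devices are ONLINE:

# zpool status rpool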
How to Recreate the ZFS Root Pool (EFI (GPT))

Use the following procedure if you need to recreate the ZFS root pool or if you want to create an alternate root pool. The zpool create command below automatically creates an EFI (GPT) labeled disk with the correct boot information.
Use the format utility to identify the disks for the root pool.
# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c6t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@0,0
       1. c6t1d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@1,0
       2. c6t2d0 <FUJITSU-MAV2073RCSUN72G-0301-68.37GB>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@2,0
       3. c6t3d0 <FUJITSU-MAV2073RCSUN72G-0301 cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/sd@3,0
Specify disk (enter its number):
Create the root pool. For example:

# zpool create -B rpool mirror c1t0d0 c2t0d0
If you want to create an alternate root pool, use syntax similar to the following:
# zpool create -B rpool2 mirror c1t0d0 c2t0d0
# beadm create -p rpool2 solaris2
# beadm activate solaris2
For information about complete ZFS root pool recovery, see Chapter 11, Archiving Snapshots and Root Pool Recovery, in Oracle Solaris 11.1 Administration: ZFS File Systems.
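Before rebooting, you can confirm the new pool and boot environment (using the hypothetical rpool2 and solaris2 names from the example above):

# zpool status rpool2
# beadm list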
x86: How to Create a Disk Slice for a ZFS Root File System (VTOC)

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.
For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c8t1d0s0
# cfgadm -c unconfigure c8::dsk/c8t1d0
After you physically replace the failed disk, configure the replacement disk:

# cfgadm -c configure c8::dsk/c8t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
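If you are unsure whether the disk needs to be configured, you can list the attachment points and their occupant state (a sketch, using the c8 controller from the example above):

# cfgadm -al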
For example, the format command shows 4 disks connected to this system.
# format -e
AVAILABLE DISK SELECTIONS:
       1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
       2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
       3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
       4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
Specify disk (enter its number): 1
selecting c8t1d0
[disk formatted]
.
.
.
format>
If the disk has no fdisk partition, you will see a message similar to the following:
format> fdisk
No Solaris fdisk partition found.
If so, go to the next step to create an fdisk partition.
If the disk has an EFI fdisk or some other partition type, go to the next step to create a Solaris fdisk partition.
If the disk has a Solaris fdisk partition, go to step 9 to create a disk slice for the root pool.
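As an optional cross-check before choosing a step, you can print the existing fdisk table non-interactively by using the whole-disk p0 device node (a sketch with the example disk name):

# fdisk -W - /dev/rdsk/c8t1d0p0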
To create a Solaris fdisk partition that uses the whole disk, accept the default partition:

format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table. y
If you print the disk's partition table with the format utility and the partition table refers to the first sector and the size, then the disk has an EFI label. In that case, create a Solaris fdisk partition as follows:
# format -e c8t1d0
selecting c8t1d0
[disk formatted]
format> fdisk
Delete the existing EFI partition by selecting option 3, Delete a partition:

Enter Selection: 3
Specify the partition number to delete (or enter 0 to exit): 1
Are you sure you want to delete partition 1? This will make all files and
programs in this partition inaccessible (type "y" or "n"). y

Partition 1 has been deleted.
Create a new Solaris partition by selecting option 1, Create a partition:

Enter Selection: 1
Select the partition type to create: 1
Specify the percentage of disk to use for this partition (or type "c" to
specify the size in cylinders). 100
Should this become the active partition? If yes, it will be activated each
time the computer is reset or turned on. Please type "y" or "n". y
Partition 1 is now the active partition.
Update the disk configuration and exit by selecting option 6:

Enter Selection: 6
format>
Display the SMI partition table. If the default partition table is applied, slice 0 might be 0 in size or too small; see the next step.

format> partition
partition> print
Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.
partition> modify
Select partitioning base:
        0. Current partition table (default)
        1. All Free Hog
Choose base (enter number) [0]? 1

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0               0         (0/0/0)            0
  1       swap    wu       0               0         (0/0/0)            0
  2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6        usr    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)       16065
  9 alternates    wm       0               0         (0/0/0)            0

Do you wish to continue creating a new partition
table based on above table[yes]?
Free Hog partition[6]? 0
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]:

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 17829      136.58GB    (17829/0/0) 286422885
  1       swap    wu       0               0         (0/0/0)            0
  2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6        usr    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)       16065
  9 alternates    wm       0               0         (0/0/0)            0

Do you wish to continue creating a new partition
table based on above table[yes]? yes
Enter table name (remember quotes): "c8t1d0"
Ready to label disk, continue? yes
Notify ZFS that the disk slice has been replaced, and bring it online. For example:

# zpool replace rpool c8t1d0s0
# zpool online rpool c8t1d0s0
On some hardware, you do not have to online the replacement disk after it is inserted.
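To monitor the resilvering progress of the root pool after the replacement, run:

# zpool status rpool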
If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:
# zpool attach rpool c8t0d0s0 c8t1d0s0
A zpool attach operation on a root pool disk automatically applies the boot blocks. A zpool replace operation does not, so after a zpool replace you must install the boot loader manually. For example:

# bootadm install-bootloader
Detach the failed or smaller disk. This step is only necessary if you attached a new disk to replace a failed disk or a smaller disk.

# zpool detach rpool c8t0d0s0
x86: How to Replace a ZFS Root Pool Disk (EFI (GPT))

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.
In Oracle Solaris 11.1, in most cases, an EFI (GPT) disk label is installed on the root pool disk.
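If you are unsure which label a disk carries, you can print the current label from within the format utility (a sketch, using the example disk name). An EFI label lists partitions by first sector and size, while an SMI (VTOC) label lists slices by cylinders:

# format -e c8t1d0
format> verify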
For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c8t1d0
# cfgadm -c unconfigure c8::dsk/c8t1d0
After you physically replace the failed disk, configure the replacement disk:

# cfgadm -c configure c8::dsk/c8t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
For example, the format command shows 4 disks connected to this system.
# format -e
AVAILABLE DISK SELECTIONS:
       1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
       2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
       3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
       4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
Notify ZFS that the disk has been replaced, and bring it online. For example:

# zpool replace rpool c8t1d0
# zpool online rpool c8t1d0
On some hardware, you do not have to online the replacement disk after it is inserted.
If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:
# zpool attach rpool c8t0d0 c8t1d0
A zpool attach operation on a root pool disk applies the boot blocks automatically.
If your root pool disk contains customized partitions, you might need to use syntax similar to the following:
# zpool attach rpool c8t0d0s0 c8t1d0
A zpool replace operation on a root pool disk does not apply the boot blocks automatically, so after a zpool replace you must install the boot loader manually. For example:

# bootadm install-bootloader
Detach the failed or smaller disk. This step is only necessary if you attached a new disk to replace a failed disk or a smaller disk.

# zpool detach rpool c8t0d0
x86: How to Replace a ZFS Root Pool Disk (VTOC)

In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps below.
For a full description of fdisk partitions, see x86: Guidelines for Creating an fdisk Partition.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c8t1d0
# cfgadm -c unconfigure c8::dsk/c8t1d0
After you physically replace the failed disk, configure the replacement disk:

# cfgadm -c configure c8::dsk/c8t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
For example, the format command shows 4 disks connected to this system.
# format -e
AVAILABLE DISK SELECTIONS:
       1. c8t0d0 <Sun-STK RAID INT-V1.0 cyl 17830 alt 2 hd 255 sec 63>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@0,0
       2. c8t1d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@1,0
       3. c8t2d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@2,0
       4. c8t3d0 <Sun-STK RAID INT-V1.0-136.61GB>
          /pci@0,0/pci10de,375@f/pci108e,286@0/disk@3,0
Specify disk (enter its number): 1
selecting c8t1d0
[disk formatted]
.
.
.
format>
If the disk has no fdisk partition, you will see a message similar to the following:
format> fdisk
No Solaris fdisk partition found.
If so, go to step 4 to create an fdisk partition.
If the disk has an EFI fdisk or some other partition type, go to the next step to create a Solaris fdisk partition.
If the disk has a Solaris fdisk partition, go to step 9 to create a disk slice for the root pool.
To create a Solaris fdisk partition that uses the whole disk, accept the default partition:

format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table. y
If you print the disk's partition table with the format utility and the partition table refers to the first sector and the size, then the disk has an EFI label. In that case, create a Solaris fdisk partition as follows:
Select fdisk from the format options.
# format -e c8t1d0
selecting c8t1d0
[disk formatted]
format> fdisk
Delete the existing EFI partition by selecting option 3, Delete a partition.
Enter Selection: 3
Specify the partition number to delete (or enter 0 to exit): 1
Are you sure you want to delete partition 1? This will make all files and
programs in this partition inaccessible (type "y" or "n"). y

Partition 1 has been deleted.
Create a new Solaris partition by selecting option 1, Create a partition.
Enter Selection: 1
Select the partition type to create: 1
Specify the percentage of disk to use for this partition (or type "c" to
specify the size in cylinders). 100
Should this become the active partition? If yes, it will be activated each
time the computer is reset or turned on. Please type "y" or "n". y
Partition 1 is now the active partition.
Update the disk configuration and exit.
Enter Selection: 6
format>
Display the SMI partition table. If the default partition table is applied, then slice 0 might be 0 in size or it might be too small. See the next step.
format> partition
partition> print
Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.
partition> modify
Select partitioning base:
        0. Current partition table (default)
        1. All Free Hog
Choose base (enter number) [0]? 1

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0               0         (0/0/0)            0
  1       swap    wu       0               0         (0/0/0)            0
  2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6        usr    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)       16065
  9 alternates    wm       0               0         (0/0/0)            0

Do you wish to continue creating a new partition
table based on above table[yes]?
Free Hog partition[6]? 0
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]:

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 17829      136.58GB    (17829/0/0) 286422885
  1       swap    wu       0               0         (0/0/0)            0
  2     backup    wu       0 - 17829      136.58GB    (17830/0/0) 286438950
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6        usr    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)       16065
  9 alternates    wm       0               0         (0/0/0)            0

Do you wish to continue creating a new partition
table based on above table[yes]? yes
Enter table name (remember quotes): "c8t1d0"
Ready to label disk, continue? yes
Notify ZFS that the disk slice has been replaced, and bring it online. For example:

# zpool replace rpool c8t1d0s0
# zpool online rpool c8t1d0s0
On some hardware, you do not have to online the replacement disk after it is inserted.
If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:
# zpool attach rpool c8t0d0s0 c8t1d0s0
When you use the zpool attach command on a root pool, the boot blocks are applied automatically. When you use the zpool replace command, you must install the boot loader manually. For example:

# bootadm install-bootloader
Detach the failed or smaller disk. This step is only necessary if you attached a new disk to replace a failed disk or a smaller disk.

# zpool detach rpool c8t1d0s0
If you replace a root pool disk with the zpool replace command, you must install the boot loader. The following procedure works for both VTOC and EFI (GPT) labels.
# bootadm install-bootloader
If you need to install the boot loader on an alternate root pool, then use the -P (pool) option.
# bootadm install-bootloader -P rpool2
If you want to install the GRUB Legacy boot loader, you must first remove all GRUB 2 boot environments from your system and then use the installgrub command. For instructions, see Installing GRUB Legacy on a System That Has GRUB 2 Installed in Booting and Shutting Down Oracle Solaris 11.1 Systems.
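For reference, the installgrub command takes the GRUB stage files and the raw device of the boot slice. This is a sketch with a hypothetical device name, to be used only after following the referenced instructions:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t0d0s0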
After installing the boot loader, reboot the system:

# init 6
x86: How to Set Up a Disk for a ZFS Non-Root File System

If you are setting up a disk to be used with a non-root ZFS file system, the disk is relabeled automatically when the pool is created or when the disk is added to the pool. When a pool is created with whole disks or when a whole disk is added to a ZFS storage pool, an EFI label is applied. For more information about EFI disk labels, see EFI (GPT) Disk Label.
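For example, adding a whole disk to an existing pool relabels the disk with an EFI label automatically (a sketch with hypothetical pool and device names; zpool add grows the pool by adding a new top-level device):

# zpool add tank c2t0d0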
Generally, most modern bus types support hot-plugging. This means you can insert a disk in an empty slot and the system recognizes it. For more information about hot-plugging devices, see Chapter 4, Dynamically Configuring Devices (Tasks).
Become an administrator. For more information, see How to Use Your Assigned Administrative Rights in Oracle Solaris 11.1 Administration: Security Services.
Refer to the disk's hardware installation guide for details.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline tank c1t1d0
# cfgadm -c unconfigure c1::dsk/c1t1d0
<Physically remove failed disk c1t1d0>
<Physically insert replacement disk c1t1d0>
# cfgadm -c configure c1::dsk/c1t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
Review the output of the format utility to see if the disk is listed under AVAILABLE DISK SELECTIONS. Then, quit the format utility.
# format
Notify ZFS that the disk has been replaced, and bring it online. For example:

# zpool replace tank c1t1d0
# zpool online tank c1t1d0
Confirm that the new disk is resilvering.
# zpool status tank
For example, to attach c2t0d0 to the existing device c1t0d0 to form a mirror:

# zpool attach tank c1t0d0 c2t0d0
Confirm that the new disk is resilvering.
# zpool status tank
For more information, see Chapter 3, Managing Oracle Solaris ZFS Storage Pools, in Oracle Solaris 11.1 Administration: ZFS File Systems.