Oracle Solaris 11.1 Administration: Devices and File Systems
SPARC: Setting Up Disks (Task Map)
SPARC: Setting Up Disks for ZFS File Systems
SPARC: How to Set Up a Disk for a ZFS Root File System
SPARC: Creating a Disk Slice for a ZFS Root File System
SPARC: How to Create a Disk Slice for a ZFS Root File System
SPARC: How to Install Boot Blocks for a ZFS Root File System
x86: Setting Up Disks for ZFS File Systems (Task Map)
x86: Setting Up Disks for ZFS File Systems
x86: How to Set Up a Disk for a ZFS Root File System
x86: Preparing a Disk for a ZFS Root File System
How to Recreate the ZFS Root Pool (EFI (GPT))
x86: How to Create a Disk Slice for a ZFS Root File System (VTOC)
x86: How to Replace a ZFS Root Pool Disk (EFI (GPT))
x86: How to Replace a ZFS Root Pool Disk (VTOC)
x86: How to Install Boot Blocks for a ZFS Root File System
x86: How to Set Up a Disk for a ZFS Non-Root File System
x86: Creating and Changing Solaris fdisk Partitions
x86: Guidelines for Creating an fdisk Partition
x86: How to Create a Solaris fdisk Partition
Changing the fdisk Partition Identifier
How to Change the Solaris fdisk Identifier
The following task map identifies the procedures for setting up a ZFS root pool disk for a ZFS root file system or a non-root ZFS pool disk on a SPARC based system.
Although the procedures that describe how to set up a disk can be used with a ZFS file system, a ZFS file system is not directly mapped to a disk or a disk slice. You must create a ZFS storage pool before creating a ZFS file system. For more information, see Oracle Solaris 11.1 Administration: ZFS File Systems.
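For example, a storage pool is created first and file systems are then created within it. The pool name tank, the file system tank/data, and the devices shown here are illustrative only:

# zpool create tank mirror c1t1d0 c2t1d0
# zfs create tank/data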
The root pool contains the root file system that is used to boot the Oracle Solaris OS. If a root pool disk becomes damaged and the root pool is not mirrored, the system might not boot.
If a root pool disk becomes damaged, you have two ways to recover:
You can reinstall the entire Oracle Solaris OS.
Or, you can replace the root pool disk and restore your file systems from snapshots or from a backup medium.

You can reduce system downtime due to hardware failures by creating a redundant root pool. The only supported redundant root pool configuration is a mirrored root pool.
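You can review the root pool configuration to determine whether it is mirrored. For example, output similar to the following indicates a mirrored root pool; the device names shown are illustrative:

# zpool status rpool
  pool: rpool
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  ONLINE       0     0     0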
A disk that is used in a non-root pool usually contains space for user or data files. You can attach or add another disk to a root pool or a non-root pool for more disk space.
Or, you can replace a damaged disk in a pool in the following ways:
A disk can be replaced in a non-redundant pool, if all of the devices are currently ONLINE.
A disk can be replaced in a redundant pool, if enough redundancy exists among the other devices.
In a mirrored root pool, you can replace a disk, or you can attach a replacement disk and then detach the failed disk. Attaching a larger disk and then detaching the smaller disk also increases the pool's size, as shown in the sketch after this list.
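The following is a minimal sketch of increasing a mirrored root pool's size by attaching a larger disk and then detaching the smaller disk. The device names are illustrative:

# zpool attach rpool c2t0d0s0 c2t3d0s0
# zpool status rpool
<Wait until the resilver is complete>
# zpool detach rpool c2t0d0s0
# zpool online -e rpool c2t3d0s0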
In general, setting up a disk on the system depends on the hardware, so review your hardware documentation when adding or replacing a disk on your system. If you need to add a disk to an existing controller, then it might just be a matter of inserting the disk in an empty slot, if the system supports hot-plugging. If you need to configure a new controller, see Dynamic Reconfiguration and Hot-Plugging.
Refer to your hardware installation guide for information on replacing a disk.
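For example, on a system that supports hot-plugging, you might confirm that the new or replacement disk is visible before continuing. The following commands are a sketch:

# cfgadm -al
# format
<Verify that the new disk appears under AVAILABLE DISK SELECTIONS>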
If you boot the system from installation media, select option 3 - Shell from the installer's initial menu after a few minutes.
After the disk is connected or replaced, you can create a slice and update the disk label. Go to SPARC: How to Create a Disk Slice for a ZFS Root File System.
You must create a disk slice for a disk that is intended for a ZFS root pool on SPARC systems that do not have GPT-aware firmware. This is a long-standing boot limitation.
Review the following root pool disk requirements:
In Oracle Solaris 11.1, an EFI (GPT) label is installed on a SPARC system with GPT-aware firmware and on an x86 system. Otherwise, an SMI (VTOC) label is installed.
The root pool must be a single disk or a mirrored configuration. Neither a non-redundant configuration of multiple disks nor a RAID-Z configuration is supported for the root pool.
All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system.
All Oracle Solaris OS components must reside in the root pool, with the exception of the swap and dump devices.
If you need to replace a root pool disk that is labeled with an SMI (VTOC) label, create a disk slice that places the bulk of the disk space in slice 0.
Using different slices on the root pool disk to share the disk among different operating systems, or with a different ZFS storage pool or storage pool components, is not recommended.
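For example, on a default Oracle Solaris 11.1 installation, the OS image resides in a single root file system dataset, /var is a separate dataset, and the swap and dump devices are volumes in the root pool. The layout below is a sketch; the sizes shown are illustrative:

# zfs list -r rpool
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    25.3G  41.3G  4.58M  /rpool
rpool/ROOT               13.1G  41.3G    31K  legacy
rpool/ROOT/solaris       13.1G  41.3G  11.9G  /
rpool/ROOT/solaris/var    846M  41.3G   732M  /var
rpool/dump               4.13G  41.4G  4.00G  -
rpool/export              130K  41.3G    32K  /export
rpool/export/home          98K  41.3G    32K  /export/home
rpool/swap               4.13G  41.4G  4.00G  -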
In general, the root pool disk is installed automatically when the system is installed. If you need to replace a root pool disk or attach a new disk as a mirrored root pool disk, see the steps that follow.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c2t1d0s0
# cfgadm -c unconfigure c2::dsk/c2t1d0
<Physically remove failed disk c2t1d0>
<Physically insert replacement disk c2t1d0>
# cfgadm -c configure c2::dsk/c2t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
For example, the format command shows 4 disks connected to this system.
# format -e
AVAILABLE DISK SELECTIONS:
       0. c2t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@1c,600000/scsi@2/sd@0,0
       1. c2t1d0 <SEAGATE-ST336607LSUN36G-0307-33.92GB>
          /pci@1c,600000/scsi@2/sd@1,0
       2. c2t2d0 <SEAGATE-ST336607LSUN36G-0507-33.92GB>
          /pci@1c,600000/scsi@2/sd@2,0
       3. c2t3d0 <SEAGATE-ST336607LSUN36G-0507-33.92GB>
          /pci@1c,600000/scsi@2/sd@3,0
For example, the partition (slice) output for c2t1d0 shows that this disk has an EFI label because it identifies first and last sectors.
Specify disk (enter its number): 1
selecting c2t1d0
[disk formatted]
format> p

PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        expand - expand label to use whole disk
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> p
Current partition table (original):
Total disk sectors available: 71116508 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm               256      33.91GB         71116541
  1 unassigned    wm                 0           0                 0
  2 unassigned    wm                 0           0                 0
  3 unassigned    wm                 0           0                 0
  4 unassigned    wm                 0           0                 0
  5 unassigned    wm                 0           0                 0
  6 unassigned    wm                 0           0                 0
  8   reserved    wm          71116542       8.00MB         71132925

partition>
For example, the c2t1d0 disk is relabeled with an SMI label, but the default partition table does not provide an optimal slice configuration.
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Auto configuration via format.dat[no]?
Auto configuration via generic SCSI-2[no]?
partition> p
Current partition table (default):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 -    90      128.37MB    (91/0/0)       262899
  1       swap    wu      91 -   181      128.37MB    (91/0/0)       262899
  2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6        usr    wm     182 - 24619       33.67GB    (24438/0/0) 70601382
  7 unassigned    wm       0                0         (0/0/0)            0

partition>
Set the free hog partition so that all the unallocated disk space is collected in slice 0. Then, press return through the slice size fields to create one large slice 0.
partition> modify
Select partitioning base:
        0. Current partition table (default)
        1. All Free Hog
Choose base (enter number) [0]? 1

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0                0         (0/0/0)            0
  1       swap    wu       0                0         (0/0/0)            0
  2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6        usr    wm       0                0         (0/0/0)            0
  7 unassigned    wm       0                0         (0/0/0)            0

Do you wish to continue creating a new partition
table based on above table[yes]?
Free Hog partition[6]? 0
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '5' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '6' [0b, 0c, 0.00mb, 0.00gb]:
Enter size of partition '7' [0b, 0c, 0.00mb, 0.00gb]:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 24619       33.92GB    (24620/0/0) 71127180
  1       swap    wu       0                0         (0/0/0)            0
  2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6        usr    wm       0                0         (0/0/0)            0
  7 unassigned    wm       0                0         (0/0/0)            0

Okay to make this the current partition table[yes]?
Enter table name (remember quotes): "c2t1d0"

Ready to label disk, continue? yes

partition> quit
format> quit
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
On some hardware, you do not have to online the replacement disk after it is inserted.
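For example, you can verify that the replacement disk is online and resilvering; the pool name shown is the default root pool name, and the scan line of the output reports resilver in progress until the operation completes:

# zpool status rpool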
If you are attaching a new disk to create a mirrored root pool or attaching a larger disk to replace a smaller disk, use syntax similar to the following:
# zpool attach rpool c2t0d0s0 c2t1d0s0
A zpool attach operation on a root pool disk applies the boot blocks automatically.
For example:
# zpool status rpool
# bootadm install-bootloader
A zpool replace operation on a root pool disk does not apply the boot blocks automatically.
This step is only necessary if you attach a new disk to replace a failed disk or a smaller disk.
# zpool detach rpool c2t0d0s0
# bootadm install-bootloader
For more information, see installboot(1M).
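The following is a sketch of the traditional SPARC installboot usage for installing the ZFS boot blocks directly on a specific root pool disk slice; the device name c2t1d0s0 is assumed from the earlier examples:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0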
# init 6
Example 10-1 SPARC: Installing Boot Blocks for a ZFS Root File System
If you physically replace the disk that is intended for the root pool and the Oracle Solaris OS is then reinstalled, or you attach a new disk for the root pool, the boot blocks are installed automatically. If you replace a disk that is intended for the root pool by using the zpool replace command, then you must install the boot blocks manually so that the system can boot from the replacement disk.
The following example shows how to install boot blocks for a ZFS root file system.
# bootadm install-bootloader
If you are setting up a disk to be used with a non-root ZFS file system, the disk is relabeled automatically when the pool is created or when the disk is added to the pool. If a pool is created with whole disks or when a whole disk is added to a ZFS storage pool, an EFI label is applied. For more information about EFI disk labels, see EFI (GPT) Disk Label.
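For example, creating a pool with a whole disk applies an EFI label to that disk automatically; the same relabeling occurs when a whole disk is added to or attached to an existing pool. The pool and device names shown are illustrative:

# zpool create tank c1t1d0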
Generally, most modern bus types support hot-plugging. This means you can insert a disk in an empty slot and the system recognizes it. For more information about hot-plugging devices, see Chapter 4, Dynamically Configuring Devices (Tasks).
Refer to the disk's hardware installation guide for details.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline tank c1t1d0
# cfgadm -c unconfigure c1::dsk/c1t1d0
<Physically remove failed disk c1t1d0>
<Physically insert replacement disk c1t1d0>
# cfgadm -c configure c1::dsk/c1t1d0
On some hardware, you do not have to reconfigure the replacement disk after it is inserted.
Review the output of the format utility to see if the disk is listed under AVAILABLE DISK SELECTIONS. Then, quit the format utility.
# format
# zpool replace tank c1t1d0
# zpool online tank c1t1d0
Confirm that the new disk is resilvering.
# zpool status tank
For example:
# zpool attach tank c1t0d0 c2t0d0
Confirm that the new disk is resilvering.
# zpool status tank
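For example, output similar to the following shows the newly attached disk resilvering; the device names and progress information are illustrative:

# zpool status tank
  pool: tank
 state: ONLINE
  scan: resilver in progress since Fri Jun 21 10:30:14 2013
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0  (resilvering)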
For more information, see Chapter 3, Managing Oracle Solaris ZFS Storage Pools, in Oracle Solaris 11.1 Administration: ZFS File Systems.