Oracle Solaris 11.1 Administration: ZFS File Systems
1. Oracle Solaris ZFS File System (Introduction)
2. Getting Started With Oracle Solaris ZFS
3. Managing Oracle Solaris ZFS Storage Pools
4. Managing ZFS Root Pool Components
Managing ZFS Root Pool Components (Overview)
ZFS Root Pool Space Requirements
ZFS Root Pool Configuration Requirements
How to Update Your ZFS Boot Environment
How to Configure a Mirrored Root Pool (SPARC or x86/VTOC)
How to Configure a Mirrored Root Pool (x86/EFI (GPT))
How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC)
How to Replace a Disk in a ZFS Root Pool (SPARC or x86/EFI (GPT))
How to Create a BE in Another Root Pool (SPARC or x86/VTOC)
How to Create a BE in Another Root Pool (SPARC or x86/EFI (GPT))
Managing Your ZFS Swap and Dump Devices
Adjusting the Sizes of Your ZFS Swap and Dump Devices
Troubleshooting ZFS Dump Device Issues
Booting From a ZFS Root File System
Booting From an Alternate Disk in a Mirrored ZFS Root Pool
Booting From a ZFS Root File System on a SPARC Based System
Booting From a ZFS Root File System on an x86 Based System
Booting For Recovery Purposes in a ZFS Root Environment
How to Boot the System For Recovery Purposes
5. Managing Oracle Solaris ZFS File Systems
6. Working With Oracle Solaris ZFS Snapshots and Clones
7. Using ACLs and Attributes to Protect Oracle Solaris ZFS Files
8. Oracle Solaris ZFS Delegated Administration
9. Oracle Solaris ZFS Advanced Topics
10. Oracle Solaris ZFS Troubleshooting and Pool Recovery
11. Archiving Snapshots and Root Pool Recovery
12. Recommended Oracle Solaris ZFS Practices
The following sections provide information about installing and updating a ZFS root pool and configuring a mirrored root pool.
The Oracle Solaris 11 Live CD installation method installs a default ZFS root pool on a single disk. With the Oracle Solaris 11 automated installation (AI) method, you can create an AI manifest to identify the disk or mirrored disks for the ZFS root pool.
The AI installer provides the flexibility of installing a ZFS root pool on the default boot disk or on a target disk that you identify. You can specify the logical device, such as c1t0d0, or the physical device path. In addition, you can use the MPxIO identifier or the device ID for the device to be installed.
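If you are not sure which device names to specify in an AI manifest, you can list the candidate disks and their physical device paths on a running system first. A minimal sketch, where c1t0d0 is only an example: format lists the available disks and their logical device names, and ls -l on a device node shows the underlying physical device path.
# format </dev/null
# ls -l /dev/dsk/c1t0d0s0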
After the installation, review your ZFS storage pool and file system information, which can vary by installation type and customizations. For example:
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0

# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    11.8G  55.1G  4.58M  /rpool
rpool/ROOT               3.57G  55.1G    31K  legacy
rpool/ROOT/solaris       3.57G  55.1G  3.40G  /
rpool/ROOT/solaris/var    165M  55.1G   163M  /var
rpool/VARSHARE           42.5K  55.1G  42.5K  /var/share
rpool/dump               6.19G  55.3G  6.00G  -
rpool/export               63K  55.1G    32K  /export
rpool/export/home          31K  55.1G    31K  /export/home
rpool/swap               2.06G  55.2G  2.00G  -
Review your ZFS BE information. For example:
# beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          3.75G static 2012-07-20 12:10
In this output, the Active field shows N for a BE that is active now, R for a BE that is active on reboot, and NR for a BE that is both.
By default, the ZFS boot environment (BE) is named solaris. You can identify your BEs by using the beadm list command. For example:
# beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          3.82G static 2012-07-19 13:44
In the above output, NR means the BE is active now and will be the active BE on reboot.
You can use the pkg update command to update your ZFS boot environment. When you update your ZFS BE this way, a new BE is created and activated automatically, unless the changes to the existing BE are very minimal.
# pkg update

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                              707/707 10529/10529  194.9/194.9
.
.
.
A new BE, solaris-1, is created automatically and activated.
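If you want to preview the packages that would be updated before committing to the operation, pkg update supports a dry run. A minimal sketch:
# pkg update -nv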
You can also create and activate a backup BE outside of the update process.
# beadm create solaris-1
# beadm activate solaris-1
# init 6
.
.
.
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   -      -          46.95M static 2012-07-20 10:25
solaris-1 NR     /          3.82G  static 2012-07-19 14:45
If necessary, you can fall back to the original BE by activating it and rebooting:
# beadm activate solaris
# init 6
You might need to copy or access a file from another BE for recovery purposes.
# beadm mount solaris-1 /mnt
# ls /mnt
bin        export     media      pkg        rpool      tmp
boot       home       mine       platform   sbin       usr
dev        import     mnt        proc       scde       var
devices    java       net        project    shared
doe        kernel     nfs4       re         src
etc        lib        opt        root       system
# beadm umount solaris-1
If you do not configure a mirrored root pool during an automatic installation, you can easily configure a mirrored root pool after the installation.
For information about replacing a disk in a root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c2t0d0s0  ONLINE       0     0     0

errors: No known data errors
SPARC: Confirm that the disk has an SMI (VTOC) disk label and a slice 0. If you need to relabel the disk and create a slice 0, see How to Create a Disk Slice for a ZFS Root File System in Oracle Solaris 11.1 Administration: Devices and File Systems.
x86: Confirm that the disk has an fdisk partition, an SMI disk label, and a slice 0. If you need to repartition the disk and create a slice 0, see Preparing a Disk for a ZFS Root File System in Oracle Solaris 11.1 Administration: Devices and File Systems.
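One quick way to confirm the disk label and slice configuration of the disk that you intend to attach is to print its VTOC. A minimal check, assuming c2t1d0 is the disk to be attached in the example below:
# prtvtoc /dev/rdsk/c2t1d0s2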
# zpool attach rpool c2t0d0s0 c2t1d0s0
Make sure to wait until resilver is done before rebooting.
The correct disk labeling and the boot blocks are applied automatically.
# zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Fri Jul 20 13:39:53 2012
    938M scanned out of 11.7G at 46.9M/s, 0h3m to go
    938M resilvered, 7.86% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  DEGRADED     0     0     0  (resilvering)
In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:
resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 13:57:25 2012
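If you prefer to wait for the resilver from a script instead of rerunning zpool status manually, you can poll the status output. A minimal sketch:
# while zpool status rpool | grep -q 'resilver in progress'; do sleep 60; done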
Determine the existing rpool pool size:
# zpool list rpool
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
rpool 29.8G   152K  29.7G   0%  1.00x  ONLINE  -
# zpool set autoexpand=on rpool
Review the expanded rpool pool size:
# zpool list rpool
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
rpool  279G   146K   279G   0%  1.00x  ONLINE  -
The Oracle Solaris 11.1 release installs an EFI (GPT) label by default on an x86 based system, in most cases.
If you do not configure a mirrored root pool during an automatic installation, you can easily configure a mirrored root pool after the installation.
For information about replacing a disk in a root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c2t0d0  ONLINE       0     0     0

errors: No known data errors
# zpool attach rpool c2t0d0 c2t1d0
Make sure to wait until resilver is done before rebooting.
The correct disk labeling and the boot blocks are applied automatically.
If you have customized partitions on your root pool disk, then you might need syntax similar to the following:
# zpool attach rpool c2t0d0s0 c2t1d0
# zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Fri Jul 20 13:52:05 2012
    809M scanned out of 11.6G at 44.9M/s, 0h4m to go
    776M resilvered, 6.82% done
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  DEGRADED     0     0     0  (resilvering)

errors: No known data errors
In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:
resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 13:57:25 2012
Determine the existing rpool pool size:
# zpool list rpool
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
rpool 29.8G   152K  29.7G   0%  1.00x  ONLINE  -
# zpool set autoexpand=on rpool
Review the expanded rpool pool size:
# zpool list rpool
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
rpool  279G   146K   279G   0%  1.00x  ONLINE  -
You might need to replace a disk in the root pool for the following reasons:
The root pool is too small and you want to replace it with a larger disk.
The root pool disk is failing. In a non-redundant pool, if the disk is failing so badly that the system won't boot, you must boot from alternate media, such as a CD or the network, before you replace the root pool disk.
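For example, on a SPARC based system you can boot from installation media or from the network at the OpenBoot prompt; the exact commands depend on your device aliases and install server setup:
ok boot cdrom
ok boot net
On an x86 based system, select the installation media or network boot device in the BIOS.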
If you use the zpool replace command to replace a root pool disk, you must apply the boot blocks manually.
In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command, or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.
Systems with SATA disks require that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
<Confirm that the new disk has an SMI label and a slice 0>
# zpool online rpool c1t0d0s0
# zpool replace rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
# bootadm install-bootloader
On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
SPARC: Confirm that the replacement (new) disk has an SMI (VTOC) label and a slice 0. For information about relabeling a disk that is intended for the root pool, see How to Label a Disk in Oracle Solaris 11.1 Administration: Devices and File Systems.
x86: Confirm that the disk has an fdisk partition, an SMI disk label, and a slice 0. If you need to repartition the disk and create a slice 0, see How to Set Up a Disk for a ZFS Root File System in Oracle Solaris 11.1 Administration: Devices and File Systems.
For example:
# zpool attach rpool c2t0d0s0 c2t1d0s0
Make sure to wait until resilver is done before rebooting.
The correct disk labeling and the boot blocks are applied automatically.
For example:
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 11.7G in 0h5m with 0 errors on Fri Jul 20 13:45:37 2012
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  ONLINE       0     0     0

errors: No known data errors
For example, on a SPARC based system:
ok boot /pci@1f,700000/scsi@2/disk@1,0
Identify the boot device pathnames of the current and new disks so that you can test booting from the replacement disk, and so that you can manually boot from the existing disk if the replacement disk fails. In the example below, the current root pool disk (c2t0d0s0) is:
/pci@1f,700000/scsi@2/disk@0,0
In the example below, the replacement boot disk is (c2t1d0s0):
boot /pci@1f,700000/scsi@2/disk@1,0
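One way to map a logical device name such as c2t1d0s0 to its underlying physical device path is to examine the /dev/dsk link; the path in your output corresponds to the OpenBoot device path used above:
# ls -l /dev/dsk/c2t1d0s0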
When the system boots successfully from the replacement disk, detach the old disk. For example:
# zpool detach rpool c2t0d0s0
If the replacement disk is larger than the original disk, enable the pool's autoexpand property:
# zpool set autoexpand=on rpool
Or, expand the device:
# zpool online -e c2t1d0s0
SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.
x86: Reconfigure the system BIOS.
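A minimal SPARC sketch, assuming the replacement disk's device path from this example; setenv boot-device at the ok prompt is the boot PROM equivalent:
# eeprom boot-device=/pci@1f,700000/scsi@2/disk@1,0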
The Oracle Solaris 11.1 release installs an EFI (GPT) label by default on an x86 based system, in most cases.
You might need to replace a disk in the root pool for the following reasons:
The root pool is too small and you want to replace it with a larger disk.
The root pool disk is failing. In a non-redundant pool, if the disk is failing so badly that the system won't boot, you must boot from alternate media, such as a CD or the network, before you replace the root pool disk.
If you use the zpool replace command to replace a root pool disk, you must apply the boot blocks manually.
In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command, or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.
Systems with SATA disks require that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c1t0d0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool online rpool c1t0d0
# zpool replace rpool c1t0d0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
x86# bootadm install-bootloader
On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
For example:
# zpool attach rpool c2t0d0 c2t1d0
Make sure to wait until resilver is done before rebooting.
The correct disk labeling and the boot blocks are applied automatically.
For example:
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 12:06:07 2012
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

errors: No known data errors
After the resilver completes, detach the old disk. For example:
# zpool detach rpool c2t0d0
If the replacement disk is larger than the original disk, enable the pool's autoexpand property:
# zpool set autoexpand=on rpool
Or, expand the device:
# zpool online -e c2t1d0
Reconfigure the system BIOS.
If you want to re-create your existing BE in another root pool, follow the steps below. You can modify the steps based on whether you want two root pools with similar BEs that have independent swap and dump devices or whether you just want a BE in another root pool that shares the swap and dump devices.
After you activate and boot from the new BE in the second root pool, it will have no information about the previous BE in the first root pool. If you want to boot back to the original BE, you will need to boot the system manually from the original root pool's boot disk.
Create the second root pool. For example:
# zpool create rpool2 c4t2d0s0
# beadm create -p rpool2 solaris2
# zpool set bootfs=rpool2/ROOT/solaris2 rpool2
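To confirm the setting, you can query the property:
# zpool get bootfs rpool2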
# beadm activate solaris2
Boot the system from the second root pool's boot device. For example, on a SPARC based system:
ok boot disk2
Your system should be running under the new BE.
# zfs create -V 4g rpool2/swap
Add an entry for the new swap volume to the /etc/vfstab file. For example:
/dev/zvol/dsk/rpool2/swap - - swap - no -
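After adding the vfstab entry, you can activate the new swap device and verify it without a reboot. A minimal sketch:
# swap -a /dev/zvol/dsk/rpool2/swap
# swap -l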
# zfs create -V 4g rpool2/dump
# dumpadm -d /dev/zvol/dsk/rpool2/dump
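You can display the resulting dump configuration to confirm the change:
# dumpadm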
SPARC – Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.
x86 – Reconfigure the system BIOS.
# init 6
The Oracle Solaris 11.1 release installs an EFI (GPT) label by default on an x86 based system, in most cases.
If you want to re-create your existing BE in another root pool, follow the steps below. You can modify the steps based on whether you want two root pools with similar BEs that have independent swap and dump devices or whether you just want a BE in another root pool that shares the swap and dump devices.
After you activate and boot from the new BE in the second root pool, it will have no information about the previous BE in the first root pool. If you want to boot back to the original BE, you will need to boot the system manually from the original root pool's boot disk.
Create the alternate root pool. For example:
# zpool create -B rpool2 c2t2d0
Or, create a mirrored alternate root pool. For example:
# zpool create -B rpool2 mirror c2t2d0 c2t3d0
# beadm create -p rpool2 solaris2
# bootadm install-bootloader -P rpool2
# zpool set bootfs=rpool2/ROOT/solaris2 rpool2
# beadm activate solaris2
SPARC – Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.
x86 – Reconfigure the system BIOS.
Your system should be running under the new BE.
# zfs create -V 4g rpool2/swap
Add an entry for the new swap volume to the /etc/vfstab file. For example:
/dev/zvol/dsk/rpool2/swap - - swap - no -
# zfs create -V 4g rpool2/dump
# dumpadm -d /dev/zvol/dsk/rpool2/dump
# init 6