Most of the basic information regarding devices is covered in Components of a ZFS Storage Pool. After a pool has been created, you can perform several tasks to manage the physical devices within the pool.
You can dynamically add disk space to a pool by adding a new top-level virtual device. This disk space is immediately available to all datasets in the pool. To add a new virtual device to a pool, use the zpool add command. For example:
# zpool add zeepool mirror c2t1d0 c2t2d0
The format for specifying the virtual devices is the same as for the zpool create command. Devices are checked to determine if they are in use, and the command cannot change the level of redundancy without the -f option. The command also supports the -n option so that you can perform a dry run. For example:
# zpool add -n zeepool mirror c3t1d0 c3t2d0
would update 'zeepool' to the following configuration:
      zeepool
        mirror
            c1t0d0
            c1t1d0
        mirror
            c2t1d0
            c2t2d0
        mirror
            c3t1d0
            c3t2d0
This command syntax would add mirrored devices c3t1d0 and c3t2d0 to the zeepool pool's existing configuration.
For more information about how virtual device validation is done, see Detecting In-Use Devices.
Example 3-1 Adding Disks to a Mirrored ZFS Configuration
In the following example, another mirror is added to an existing mirrored ZFS configuration.
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
errors: No known data errors
# zpool add tank mirror c0t3d0 c1t3d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
errors: No known data errors
Example 3-2 Adding Disks to a RAID-Z Configuration
Additional disks can be added to a RAID-Z configuration in a similar way. The following example shows how to convert a storage pool with one RAID-Z device that contains three disks to a storage pool with two RAID-Z devices that contain three disks each.
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
errors: No known data errors
# zpool add rzpool raidz c2t2d0 c2t3d0 c2t4d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0
errors: No known data errors
Example 3-3 Adding and Removing a Mirrored Log Device
The following example shows how to add a mirrored log device to a mirrored storage pool.
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
errors: No known data errors
# zpool add newpool log mirror c0t6d0 c0t7d0
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
        logs
          mirror-1  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
errors: No known data errors
You can attach a log device to an existing log device to create a mirrored log device. This operation is identical to attaching a device in an unmirrored storage pool.
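For example, a minimal sketch, assuming a pool named newpool with a single, unmirrored log device c0t11d0 and an unused disk c0t12d0 (both device names are hypothetical):
# zpool attach newpool c0t11d0 c0t12d0
After the attach operation, the two devices appear as a mirrored log device in the zpool status output.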
You can remove log devices by using the zpool remove command. The mirrored log device in the previous example can be removed by specifying the mirror-1 argument. For example:
# zpool remove newpool mirror-1
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
errors: No known data errors
If your pool configuration contains only one log device, you remove the log device by specifying the device name. For example:
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c0t8d0  ONLINE       0     0     0
            c0t9d0  ONLINE       0     0     0
        logs
          c0t10d0   ONLINE       0     0     0
errors: No known data errors
# zpool remove pool c0t10d0
Example 3-4 Adding and Removing Cache Devices
You can add cache devices to your ZFS storage pool and remove them if they are no longer required.
Use the zpool add command to add cache devices. For example:
# zpool add tank cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0
errors: No known data errors
Cache devices cannot be mirrored or be part of a RAID-Z configuration.
Use the zpool remove command to remove cache devices. For example:
# zpool remove tank c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
errors: No known data errors
Currently, the zpool remove command only supports removing hot spares, log devices, and cache devices. Devices that are part of the main mirrored pool configuration can be removed by using the zpool detach command. Nonredundant and RAID-Z devices cannot be removed from a pool.
For more information about using cache devices in a ZFS storage pool, see Creating a ZFS Storage Pool With Cache Devices.
In addition to the zpool add command, you can use the zpool attach command to add a new device to an existing mirrored or nonmirrored device.
If you are attaching a disk to create a mirrored root pool, see How to Configure a Mirrored Root Pool (SPARC or x86/VTOC).
If you are replacing a disk in a ZFS root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).
Example 3-5 Converting a Two-Way Mirrored Storage Pool to a Three-way Mirrored Storage Pool
In this example, zeepool is an existing two-way mirror that is converted to a three-way mirror by attaching c2t1d0, the new device, to the existing device, c1t1d0.
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
errors: No known data errors
# zpool attach zeepool c1t1d0 c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan 8 12:59:20 2010
config:
        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0  592K resilvered
errors: No known data errors
If the existing device is part of a three-way mirror, attaching the new device creates a four-way mirror, and so on. Whatever the case, the new device begins to resilver immediately.
Example 3-6 Converting a Nonredundant ZFS Storage Pool to a Mirrored ZFS Storage Pool
In addition, you can convert a nonredundant storage pool to a redundant storage pool by using the zpool attach command. For example:
# zpool create tank c0t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0
errors: No known data errors
# zpool attach tank c0t1d0 c1t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan 8 14:28:23 2010
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0  73.5K resilvered
errors: No known data errors
You can use the zpool detach command to detach a device from a mirrored storage pool. For example:
# zpool detach zeepool c2t1d0
However, this operation fails if no other valid replicas of the data exist. For example:
# zpool detach newpool c1t2d0
cannot detach c1t2d0: only applicable to mirror and replacing vdevs
A mirrored ZFS storage pool can be quickly cloned as a backup pool by using the zpool split command. You can use this feature to split a mirrored root pool, but the pool that is split off is not bootable until you perform some additional steps.
You can use the zpool split command to detach one or more disks from a mirrored ZFS storage pool to create a new pool with the detached disk or disks. The new pool will have identical contents to the original mirrored ZFS storage pool.
By default, a zpool split operation on a mirrored pool detaches the last disk for the newly created pool. After the split operation, you then import the new pool. For example:
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
errors: No known data errors
# zpool split tank tank2
# zpool import tank2
# zpool status tank tank2
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c1t0d0    ONLINE       0     0     0
errors: No known data errors

  pool: tank2
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank2       ONLINE       0     0     0
          c1t2d0    ONLINE       0     0     0
errors: No known data errors
You can identify which disk should be used for the newly created pool by specifying it with the zpool split command. For example:
# zpool split tank tank2 c1t0d0
Before the actual split operation occurs, data in memory is flushed to the mirrored disks. After the data is flushed, the disk is detached from the pool and given a new pool GUID. A new pool GUID is generated so that the pool can be imported on the same system on which it was split.
If the pool to be split has non-default file system mount points, and the new pool is created on the same system, then you must use the zpool split -R option to identify an alternate root directory for the new pool so that any existing mount points do not conflict. For example:
# zpool split -R /tank2 tank tank2
If you do not use the zpool split -R option and mount points conflict when you attempt to import the new pool, import the new pool with the -R option. If the new pool is created on a different system, then specifying an alternate root directory is not necessary unless mount point conflicts occur.
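For example, a minimal sketch of importing with an alternate root, assuming the tank2 pool and the /tank2 alternate root directory from the previous example:
# zpool import -R /tank2 tank2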
Review the following considerations before using the zpool split feature:
This feature is not available for a RAID-Z configuration or a non-redundant pool of multiple disks.
Data and application operations should be quiesced before attempting a zpool split operation.
A pool cannot be split if resilvering is in process.
Splitting a mirrored pool is optimal when the pool contains two to three disks, where the last disk in the original pool is used for the newly created pool. Then, you can use the zpool attach command to re-create your original mirrored storage pool or to convert your newly created pool into a mirrored storage pool (see the sketch after this list). No method currently exists to create a new mirrored pool from an existing mirrored pool in one zpool split operation because the new (split) pool is non-redundant.
If the existing pool is a three-way mirror, then the new pool contains one disk after the split operation. If the existing pool is a two-way mirror of two disks, then the outcome is two non-redundant pools of one disk each. You must attach two additional disks, one to each pool, to convert the non-redundant pools to mirrored pools.
A good way to keep your data redundant during a split operation is to split a mirrored storage pool that contains three disks so that the original pool contains two mirrored disks after the split operation.
Confirm that your hardware is configured correctly before splitting a mirrored pool. For related information about confirming your hardware's cache flush settings, see General System Practices.
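As a minimal sketch of re-creating mirrors after a split, assuming the tank and tank2 pools from the earlier split example and two unused disks (c1t3d0 and c1t4d0 are hypothetical device names):
# zpool attach tank c1t0d0 c1t3d0
# zpool attach tank2 c1t2d0 c1t4d0
Each attach operation converts the corresponding non-redundant pool back into a two-way mirror.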
Example 3-7 Splitting a Mirrored ZFS Pool
In the following example, a mirrored storage pool called mothership, containing three disks, is split. The two resulting pools are the mirrored pool mothership, with two disks, and the new pool luna, with one disk. Each pool has identical content.
The pool luna could be imported on another system for backup purposes. After the backup is complete, the pool luna could be destroyed and the disk reattached to the mothership. Then, the process could be repeated.
# zpool status mothership
  pool: mothership
 state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        mothership                 ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
errors: No known data errors
# zpool split mothership luna
# zpool import luna
# zpool status mothership luna
  pool: luna
 state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        luna                       ONLINE       0     0     0
          c0t5000C500335F907Fd0    ONLINE       0     0     0
errors: No known data errors

  pool: mothership
 state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        mothership                 ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
errors: No known data errors
ZFS allows individual devices to be taken offline or brought online. When hardware is unreliable or not functioning properly, ZFS continues to read data from or write data to the device, assuming the condition is only temporary. If the condition is not temporary, you can instruct ZFS to ignore the device by taking it offline. ZFS does not send any requests to an offline device.
Note - Devices do not need to be taken offline in order to replace them.
You can take a device offline by using the zpool offline command. The device can be specified by path or by short name, if the device is a disk. For example:
# zpool offline tank c0t5000C500335F95E3d0
Consider the following points when taking a device offline:
You cannot take a pool offline to the point where it becomes UNAVAIL. For example, you cannot take offline two devices in a raidz1 configuration, nor can you take offline a top-level virtual device.
# zpool offline tank c0t5000C500335F95E3d0
cannot offline c0t5000C500335F95E3d0: no valid replicas
By default, the OFFLINE state is persistent. The device remains offline when the system is rebooted.
To temporarily take a device offline, use the zpool offline -t option. For example:
# zpool offline -t tank c1t0d0
bringing device 'c1t0d0' offline
When the system is rebooted, this device is automatically returned to the ONLINE state.
When a device is taken offline, it is not detached from the storage pool. If you attempt to use the offline device in another pool, even after the original pool is destroyed, you see a message similar to the following:
device is part of exported or potentially active ZFS pool. Please see zpool(1M)
If you want to use the offline device in another storage pool after destroying the original storage pool, first bring the device online, then destroy the original storage pool.
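For example, a minimal sketch, assuming the offline device is c1t3d0 and the original pool is named dozer (both names are hypothetical):
# zpool online dozer c1t3d0
# zpool destroy dozer
After the original pool is destroyed, the device can be added to another pool.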
Another way to use a device from another storage pool, while keeping the original storage pool, is to replace the existing device in the original storage pool with another comparable device. For information about replacing devices, see Replacing Devices in a Storage Pool.
Offline devices are in the OFFLINE state when you query pool status. For information about querying pool status, see Querying ZFS Storage Pool Status.
For more information on device health, see Determining the Health Status of ZFS Storage Pools.
After a device is taken offline, it can be brought online again by using the zpool online command. For example:
# zpool online tank c0t5000C500335F95E3d0
When a device is brought online, any data that has been written to the pool is resynchronized with the newly available device. Note that you cannot bring a device online to replace a disk. If you take a device offline, replace the device, and try to bring it online, it remains in the UNAVAIL state.
If you attempt to bring online an UNAVAIL device, a warning indicates that the device remains in a faulted state and must be replaced.
You might also see the faulted disk message displayed on the console or written to the /var/adm/messages file. For example:
SUNW-MSG-ID: ZFS-8000-LR, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Jun 20 11:35:26 MDT 2012
PLATFORM: ORCL,SPARC-T3-4, CSN: 1120BDRCCD, HOSTNAME: tardis
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: fb6699c8-6bfb-eefa-88bb-81479182e3b7
DESC: ZFS device 'id1,sd@n5000c500335dc60f/a' in pool 'pond' failed to open.
AUTO-RESPONSE: An attempt will be made to activate a hot spare if available.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event.
Run 'zpool status -lx' for more information. Please refer to the associated
reference document at http://support.oracle.com/msg/ZFS-8000-LR for the latest
service procedures and policies regarding this diagnosis.
For more information about replacing a faulted device, see Resolving a Missing Device.
You can use the zpool online -e command to expand a LUN. By default, a LUN that is added to a pool is not expanded to its full size unless the autoexpand pool property is enabled. You can expand the LUN automatically by using the zpool online -e command even if the LUN is already online or if the LUN is currently offline. For example:
# zpool online -e tank c0t5000C500335F95E3d0
If a device is taken offline due to a failure that causes errors to be listed in the zpool status output, you can clear the error counts with the zpool clear command.
If specified with no arguments, this command clears all device errors within the pool. For example:
# zpool clear tank
If one or more devices are specified, this command only clears errors associated with the specified devices. For example:
# zpool clear tank c0t5000C500335F95E3d0
For more information about clearing zpool errors, see Clearing Transient Errors.
You can replace a device in a storage pool by using the zpool replace command.
If you are physically replacing a device with another device in the same location in a redundant pool, then you might only need to identify the replaced device. On some hardware, ZFS recognizes that the device is a different disk in the same location. For example, to replace a failed disk (c1t1d0) by removing the disk and replacing it in the same location, use the following syntax:
# zpool replace tank c1t1d0
If you are replacing a device in a storage pool with a disk in a different physical location, you must specify both devices. For example:
# zpool replace tank c1t1d0 c1t2d0
If you are replacing a disk in the ZFS root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).
The following are the basic steps for replacing a disk:
Offline the disk, if necessary, with the zpool offline command.
Remove the disk to be replaced.
Insert the replacement disk.
Review the format output to determine if the replacement disk is visible.
In addition, check to see whether the device ID has changed. If the replacement disk has a WWN, then the device ID for the failed disk is changed.
Let ZFS know the disk is replaced. For example:
# zpool replace tank c1t1d0
If the replacement disk has a different device ID, as identified in the previous step, include the new device ID. For example:
# zpool replace tank c0t5000C500335FC3E7d0 c0t5000C500335BA8C3d0
Bring the disk online with the zpool online command, if necessary.
Notify FMA that the device is replaced.
From fmadm faulty output, identify the zfs://pool=name/vdev=guid string in the Affects: section and provide that string as an argument to the fmadm repaired command.
# fmadm faulty
# fmadm repaired zfs://pool=name/vdev=guid
On some systems with SATA disks, you must unconfigure a disk before you can take it offline. If you are replacing a disk in the same slot position on this system, then you can just run the zpool replace command as described in the first example in this section.
For an example of replacing a SATA disk, see Example 10-1.
Consider the following when replacing devices in a ZFS storage pool:
If you set the autoreplace pool property to on, then any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. You are not required to use the zpool replace command when this property is enabled; a short sketch of enabling this property appears after this list. This feature might not be available on all hardware types.
The storage pool state REMOVED is provided when a device or hot spare has been physically removed while the system was running. A hot spare device is substituted for the removed device, if available.
If a device is removed and then reinserted, the device is placed online. If a hot spare was activated when the device was reinserted, the hot spare is removed when the online operation completes.
Automatic detection when devices are removed or inserted is hardware-dependent and might not be supported on all platforms. For example, USB devices are automatically configured upon insertion. However, you might have to use the cfgadm -c configure command to configure a SATA drive.
Hot spares are checked periodically to ensure that they are online and available.
The size of the replacement device must be equal to or larger than the smallest disk in a mirrored or RAID-Z configuration.
When a replacement device that is larger in size than the device it is replacing is added to a pool, it is not automatically expanded to its full size. The autoexpand pool property value determines whether a replacement LUN is expanded to its full size when the disk is added to the pool. By default, the autoexpand property is disabled. You can enable this property to expand the LUN size before or after the larger LUN is added to the pool.
In the following example, two 16-GB disks in a mirrored pool are replaced with two 72-GB disks. Ensure that the first device is completely resilvered before attempting the second device replacement. The autoexpand property is enabled after the disk replacements to expand the pool to the full size of the new disks.
# zpool create pool mirror c1t16d0 c1t17d0
# zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:
        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t16d0  ONLINE       0     0     0
            c1t17d0  ONLINE       0     0     0
# zpool list pool
NAME   SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
pool   16.8G  76.5K  16.7G   0%  ONLINE  -
# zpool replace pool c1t16d0 c1t1d0
# zpool replace pool c1t17d0 c1t2d0
# zpool list pool
NAME   SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
pool   16.8G  88.5K  16.7G   0%  ONLINE  -
# zpool set autoexpand=on pool
# zpool list pool
NAME   SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
pool   68.2G  117K   68.2G   0%  ONLINE  -
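As referenced in the autoreplace item above, a minimal sketch of enabling that property, assuming a pool named tank:
# zpool set autoreplace=on tank
# zpool get autoreplace tank
The zpool get command confirms the current value of the property.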
Replacing many disks in a large pool is time-consuming due to resilvering the data onto the new disks. In addition, you might consider running the zpool scrub command between disk replacements to ensure that the replacement devices are operational and that the data is written correctly.
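For example, a minimal sketch of scrubbing between replacements, assuming a pool named pool as in the previous example:
# zpool scrub pool
# zpool status pool
The zpool status output reports scrub progress and any errors found.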
If a failed disk has been replaced automatically with a hot spare, then you might need to detach the spare after the failed disk is replaced. You can use the zpool detach command to detach a spare in a mirrored or RAID-Z pool. For information about detaching a hot spare, see Activating and Deactivating Hot Spares in Your Storage Pool.
For more information about replacing devices, see Resolving a Missing Device and Replacing or Repairing a Damaged Device.
The hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in a storage pool. Designating a device as a hot spare means that the device is not an active device in the pool, but if an active device in the pool fails, the hot spare automatically replaces the failed device.
Devices can be designated as hot spares in the following ways:
When the pool is created with the zpool create command.
After the pool is created with the zpool add command.
The following example shows how to designate devices as hot spares when the pool is created:
# zpool create zeepool mirror c0t5000C500335F95E3d0 c0t5000C500335F907Fd0 mirror c0t5000C500335BD117d0 c0t5000C500335DC60Fd0 spare c0t5000C500335E106Bd0 c0t5000C500335FC3E7d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL
          c0t5000C500335FC3E7d0    AVAIL
errors: No known data errors
The following example shows how to designate hot spares by adding them to a pool after the pool is created:
# zpool add zeepool spare c0t5000C500335E106Bd0 c0t5000C500335FC3E7d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL
          c0t5000C500335FC3E7d0    AVAIL
errors: No known data errors
Hot spares can be removed from a storage pool by using the zpool remove command. For example:
# zpool remove zeepool c0t5000C500335FC3E7d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL
errors: No known data errors
A hot spare cannot be removed if it is currently used by a storage pool.
Consider the following when using ZFS hot spares:
Currently, the zpool remove command can only be used to remove hot spares, cache devices, and log devices.
To add a disk as a hot spare, the hot spare must be equal to or larger than the size of the largest disk in the pool. Adding a smaller disk as a spare to a pool is allowed. However, when the smaller spare disk is activated, either automatically or with the zpool replace command, the operation fails with an error similar to the following:
cannot replace disk3 with disk4: device is too small
You cannot share a spare across systems.
Consider that if you share a spare between two data pools on the same system, you must coordinate the use of the spare between the two pools. For example, pool A has the spare in use and pool A is exported. Pool B could unknowingly use the spare while pool A is exported. When pool A is imported, data corruption could occur because both pools are using the same disk.
Do not share a spare between a root pool and a data pool.
Hot spares are activated in the following ways:
Manual replacement – You replace a failed device in a storage pool with a hot spare by using the zpool replace command (see the sketch after this list).
Automatic replacement – When a fault is detected, an FMA agent examines the pool to determine if it has any available hot spares. If so, it replaces the faulted device with an available spare.
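As referenced in the manual replacement item above, a minimal sketch that uses the zeepool configuration shown earlier, replacing the failed device with the available spare:
# zpool replace zeepool c0t5000C500335DC60Fd0 c0t5000C500335E106Bd0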
If a hot spare that is currently in use fails, the FMA agent detaches the spare and thereby cancels the replacement. The agent then attempts to replace the device with another hot spare, if one is available. This feature is currently limited by the fact that the ZFS diagnostic engine only generates faults when a device disappears from the system.
If you physically replace a failed device with an active spare, you can reactivate the original device by using the zpool detach command to detach the spare. If you set the autoreplace pool property to on, the spare is automatically detached and returned to the spare pool when the new device is inserted and the online operation completes.
An UNAVAIL device is automatically replaced if a hot spare is available. For example:
# zpool status -x
  pool: zeepool
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 16:46:19 2012
config:
        NAME                         STATE     READ WRITE CKSUM
        zeepool                      DEGRADED     0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000C500335F95E3d0    ONLINE       0     0     0
            c0t5000C500335F907Fd0    ONLINE       0     0     0
          mirror-1                   DEGRADED     0     0     0
            c0t5000C500335BD117d0    ONLINE       0     0     0
            spare-1                  DEGRADED   449     0     0
              c0t5000C500335DC60Fd0  UNAVAIL      0     0     0
              c0t5000C500335E106Bd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0      INUSE
errors: No known data errors
Currently, you can deactivate a hot spare in the following ways:
By removing the hot spare from the storage pool.
By detaching a hot spare after a failed disk is physically replaced. See Example 3-8.
By temporarily or permanently swapping in another hot spare. See Example 3-9.
Example 3-8 Detaching a Hot Spare After the Failed Disk Is Replaced
In this example, the failed disk (c0t5000C500335DC60Fd0) is physically replaced and ZFS is notified by using the zpool replace command.
# zpool replace zeepool c0t5000C500335DC60Fd0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 16:53:43 2012
config:
        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL
If necessary, you can use the zpool detach command to return the hot spare back to the spare pool. For example:
# zpool detach zeepool c0t5000C500335E106Bd0
Example 3-9 Detaching a Failed Disk and Using the Hot Spare
If you want to replace a failed disk by temporarily or permanently swapping in the hot spare that is currently replacing it, then detach the original (failed) disk. If the failed disk is eventually replaced, then you can add it back to the storage pool as a spare. For example:
# zpool status zeepool
  pool: zeepool
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: scrub in progress since Thu Jun 21 17:01:49 2012
        1.07G scanned out of 6.29G at 220M/s, 0h0m to go
        0 repaired, 17.05% done
config:
        NAME                       STATE     READ WRITE CKSUM
        zeepool                    DEGRADED     0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 DEGRADED     0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  UNAVAIL      0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL
errors: No known data errors
# zpool detach zeepool c0t5000C500335DC60Fd0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 17:02:35 2012
config:
        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335E106Bd0  ONLINE       0     0     0
errors: No known data errors

(Original failed disk c0t5000C500335DC60Fd0 is physically replaced)

# zpool add zeepool spare c0t5000C500335DC60Fd0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 17:02:35 2012
config:
        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335E106Bd0  ONLINE       0     0     0
        spares
          c0t5000C500335DC60Fd0    AVAIL
errors: No known data errors
After a disk is replaced and the spare is detached, let FMA know that the disk is repaired.
# fmadm faulty
# fmadm repaired zfs://pool=name/vdev=guid