Occasionally, you might need to move a storage pool between systems. To do so, the storage devices must be disconnected from the original system and reconnected to the destination system. This task can be accomplished by physically recabling the devices, or by using multiported devices such as the devices on a SAN. ZFS enables you to export the pool from one system and import it on the destination system, even if the systems are of different architectural endianness. For information about replicating or migrating file systems between different storage pools, which might reside on different systems, see Sending and Receiving ZFS Data.
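By contrast, replication at the file system level does not move the pool's devices at all; it copies datasets with the zfs send and zfs receive commands. The following is a minimal sketch, assuming a hypothetical snapshot tank/home@snap and a destination system host2 with a pool named backuppool:
# zfs snapshot tank/home@snap
# zfs send tank/home@snap | ssh host2 zfs receive backuppool/home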
Storage pools should be explicitly exported to indicate that they are ready to be migrated. This operation flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all information about the pool from the system.
If you do not explicitly export the pool, but instead remove the disks manually, you can still import the resulting pool on another system. However, you might lose the last few seconds of data transactions, and the pool will appear UNAVAIL on the original system because the devices are no longer present. By default, the destination system cannot import a pool that has not been explicitly exported. This condition is necessary to prevent you from accidentally importing an active pool that consists of network-attached storage that is still in use on another system.
To export a pool, use the zpool export command. For example:
# zpool export tank
The command attempts to unmount any mounted file systems within the pool before continuing. If any of the file systems fail to unmount, you can forcefully unmount them by using the -f option. For example:
# zpool export tank
cannot unmount '/export/home/eric': Device busy
# zpool export -f tank
After this command is executed, the pool tank is no longer visible on the system.
If devices are unavailable at the time of export, the devices cannot be identified as cleanly exported. If one of these devices is later attached to a system without any of the working devices, it appears as “potentially active.”
If ZFS volumes are in use in the pool, the pool cannot be exported, even with the -f option. To export a pool with a ZFS volume, first ensure that all consumers of the volume are no longer active.
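For example, if a volume in the pool is in use as a swap device, remove the swap device before exporting the pool. The following is a minimal sketch, assuming a hypothetical volume named tank/swapvol:
# swap -d /dev/zvol/dsk/tank/swapvol
# zpool export tank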
For more information about ZFS volumes, see ZFS Volumes.
After the pool has been removed from the system (either through an explicit export or by forcefully removing the devices), you can attach the devices to the target system. ZFS can handle some situations in which only some of the devices are available, but a successful pool migration depends on the overall health of the devices. In addition, the devices do not necessarily have to be attached under the same device name. ZFS detects any moved or renamed devices, and adjusts the configuration appropriately. To discover available pools, run the zpool import command with no options. For example:
# zpool import
  pool: tank
    id: 11809215114195894163
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror-0  ONLINE
            c1t0d0  ONLINE
            c1t1d0  ONLINE
In this example, the pool tank is available to be imported on the target system. Each pool is identified by a name as well as a unique numeric identifier. If multiple pools with the same name are available to import, you can use the numeric identifier to distinguish between them.
Similar to the zpool status command output, the zpool import output includes a link to a knowledge article with the most up-to-date information regarding repair procedures for the problem that is preventing a pool from being imported. In this case, the user can force the pool to be imported. However, importing a pool that is currently in use by another system over a storage network can result in data corruption and panics as both systems attempt to write to the same storage. If some devices in the pool are not available but sufficient redundant data exists to provide a usable pool, the pool appears in the DEGRADED state. For example:
# zpool import
  pool: tank
    id: 4715259469716913940
 state: DEGRADED
status: One or more devices are unavailable.
action: The pool can be imported despite missing or damaged devices.
        The fault tolerance of the pool may be compromised if imported.
config:

        tank                       DEGRADED
          mirror-0                 DEGRADED
            c0t5000C500335E106Bd0  ONLINE
            c0t5000C500335FC3E7d0  UNAVAIL  cannot open

device details:

        c0t5000C500335FC3E7d0      UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.
In this example, one of the mirrored disks is damaged or missing, though you can still import the pool because the mirrored data is still accessible. If too many unavailable devices are present, the pool cannot be imported. In some cases, not enough devices are even present to determine the complete configuration; ZFS then cannot determine what other devices were part of the pool, though it reports as much information as possible about the situation. In the following example, two disks are missing from a RAID-Z virtual device, which means that sufficient redundant data is not available to reconstruct the pool:
# zpool import
  pool: mothership
    id: 3702878663042245922
 state: UNAVAIL
status: One or more devices are unavailable.
action: The pool cannot be imported due to unavailable devices or data.
config:

        mothership    UNAVAIL  insufficient replicas
          raidz1-0    UNAVAIL  insufficient replicas
            c8t0d0    UNAVAIL  cannot open
            c8t1d0    UNAVAIL  cannot open
            c8t2d0    ONLINE
            c8t3d0    ONLINE

device details:

        c8t0d0        UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.

        c8t1d0        UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.
By default, the zpool import command only searches devices within the /dev/dsk directory. If devices exist in another directory, or you are using pools backed by files, you must use the -d option to search alternate directories. For example:
# zpool create dozer mirror /file/a /file/b
# zpool export dozer
# zpool import -d /file
  pool: dozer
    id: 7318163511366751416
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        dozer        ONLINE
          mirror-0   ONLINE
            /file/a  ONLINE
            /file/b  ONLINE
# zpool import -d /file dozer
If devices exist in multiple directories, you can specify multiple -d options.
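For example, the following command searches both /file and /dev/dsk for devices that belong to the pool dozer (the directories here are illustrative):
# zpool import -d /file -d /dev/dsk dozer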
After a pool has been identified for import, you can import it by specifying the name of the pool or its numeric identifier as an argument to the zpool import command. For example:
# zpool import tank
If multiple available pools have the same name, you must specify which pool to import by using the numeric identifier. For example:
# zpool import
  pool: dozer
    id: 2704475622193776801
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        dozer     ONLINE
          c1t9d0  ONLINE

  pool: dozer
    id: 6223921996155991199
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        dozer     ONLINE
          c1t8d0  ONLINE
# zpool import dozer
cannot import 'dozer': more than one matching pool
import by numeric ID instead
# zpool import 6223921996155991199
If the pool name conflicts with an existing pool name, you can import the pool under a different name. For example:
# zpool import dozer zeepool
This command imports the exported pool dozer using the new name zeepool. The new pool name is persistent.
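For example, exporting the pool and importing it again (a brief sketch) brings it back under the name zeepool:
# zpool export zeepool
# zpool import zeepool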
If the pool was not cleanly exported, ZFS requires the -f flag to prevent users from accidentally importing a pool that is still in use on another system. For example:
# zpool import dozer
cannot import 'dozer': pool may be in use on another system
use '-f' to import anyway
# zpool import -f dozer
Note - Do not attempt to import a pool that is active on one system to another system. ZFS is not a native cluster, distributed, or parallel file system and cannot provide concurrent access from multiple, different hosts.
Pools can also be imported under an alternate root by using the -R option. For more information on alternate root pools, see Using ZFS Alternate Root Pools.
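For example, the following sketch imports the pool dozer under a hypothetical alternate root of /mnt, so that the pool's file systems are mounted relative to /mnt rather than /:
# zpool import -R /mnt dozer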
By default, a pool with a missing log device cannot be imported. You can use the zpool import -m command to force a pool to be imported with a missing log device. For example:
# zpool import dozer
  pool: dozer
    id: 16216589278751424645
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://support.oracle.com/msg/ZFS-8000-6X
config:

        dozer        UNAVAIL  missing device
          mirror-0   ONLINE
            c8t0d0   ONLINE
            c8t1d0   ONLINE

device details:

        missing-1    UNAVAIL  corrupted data
        status: ZFS detected errors on this device.
                The device has bad label or disk contents.

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
Import the pool with the missing log device. For example:
# zpool import -m dozer
# zpool status dozer
  pool: dozer
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: none requested
config:

        NAME                   STATE     READ WRITE CKSUM
        dozer                  DEGRADED     0     0     0
          mirror-0             ONLINE       0     0     0
            c8t0d0             ONLINE       0     0     0
            c8t1d0             ONLINE       0     0     0
        logs
          2189413556875979854  UNAVAIL      0     0     0

errors: No known data errors
After attaching the missing log device, run the zpool clear command to clear the pool errors.
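For example, continuing the preceding scenario:
# zpool clear dozer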
A similar recovery can be attempted with missing mirrored log devices; after attaching the missing log devices, run the zpool clear command to clear the pool errors.
You can import a pool in read-only mode. If a pool is so damaged that it cannot be accessed, this feature might enable you to recover the pool's data. For example:
# zpool import -o readonly=on tank
# zpool scrub tank
cannot scrub tank: pool is read-only
When a pool is imported in read-only mode, the following conditions apply:
All file systems and volumes are mounted in read-only mode.
Pool transaction processing is disabled. This also means that any pending synchronous writes in the intent log are not played until the pool is imported read-write.
Attempts to set a pool property during the read-only import are ignored.
A read-only pool can be set back to read-write mode by exporting and importing the pool. For example:
# zpool export tank
# zpool import tank
# zpool scrub tank
The following command imports the pool dpool by identifying one of the pool's specific devices, /dev/dsk/c2t3d0, in this example.
# zpool import -d /dev/dsk/c2t3d0s0 dpool
# zpool status dpool
  pool: dpool
 state: ONLINE
  scan: resilvered 952K in 0h0m with 0 errors on Fri Jun 29 16:22:06 2012
config:

        NAME        STATE     READ WRITE CKSUM
        dpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
Even though this pool consists of whole disks, the command must include the specific device's slice identifier.
You can use the zpool import -D command to recover a storage pool that has been destroyed. For example:
# zpool destroy tank
# zpool import -D
  pool: tank
    id: 5154272182900538157
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror-0  ONLINE
            c1t0d0  ONLINE
            c1t1d0  ONLINE
In this zpool import output, you can identify the tank pool as the destroyed pool because of the following state information:
state: ONLINE (DESTROYED)
To recover the destroyed pool, run the zpool import -D command again with the pool to be recovered. For example:
# zpool import -D tank
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
If one of the devices in the destroyed pool is unavailable, you might be able to recover the destroyed pool anyway by including the -f option. In this scenario, you would import the degraded pool and then attempt to fix the device failure. For example:
# zpool destroy dozer
# zpool import -D
  pool: dozer
    id: 4107023015970708695
 state: DEGRADED (DESTROYED)
status: One or more devices are unavailable.
action: The pool can be imported despite missing or damaged devices.
        The fault tolerance of the pool may be compromised if imported.
config:

        dozer         DEGRADED
          raidz2-0    DEGRADED
            c8t0d0    ONLINE
            c8t1d0    ONLINE
            c8t2d0    ONLINE
            c8t3d0    UNAVAIL  cannot open
            c8t4d0    ONLINE

device details:

        c8t3d0        UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.
# zpool import -Df dozer
# zpool status -x
  pool: dozer
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        dozer                    DEGRADED     0     0     0
          raidz2-0               DEGRADED     0     0     0
            c8t0d0               ONLINE       0     0     0
            c8t1d0               ONLINE       0     0     0
            c8t2d0               ONLINE       0     0     0
            4881130428504041127  UNAVAIL      0     0     0
            c8t4d0               ONLINE       0     0     0

errors: No known data errors
# zpool online dozer c8t4d0
# zpool status -x
all pools are healthy