Oracle Solaris 11.1 Administration: ZFS File Systems
Replication Features of a ZFS Storage Pool
ZFS provides data redundancy, as well as self-healing properties, in mirrored and RAID-Z configurations.
Mirrored Storage Pool Configuration

A mirrored storage pool configuration requires at least two disks, preferably on separate controllers. A mirrored configuration can use many disks, and you can create more than one mirror in each pool. Conceptually, a basic mirrored configuration would look similar to the following:
mirror c1t0d0 c2t0d0
Conceptually, a more complex mirrored configuration would look similar to the following:
mirror c1t0d0 c2t0d0 c3t0d0 mirror c4t0d0 c5t0d0 c6t0d0
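For illustration, the two-mirror layout above could be created with a single zpool create command. This is a sketch only; the pool name tank and the device names are placeholders:

# zpool create tank mirror c1t0d0 c2t0d0 c3t0d0 mirror c4t0d0 c5t0d0 c6t0d0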
For information about creating a mirrored storage pool, see Creating a Mirrored Storage Pool.
RAID-Z Storage Pool Configuration

In addition to a mirrored storage pool configuration, ZFS provides a RAID-Z configuration with single-, double-, or triple-parity fault tolerance. Single-parity RAID-Z (raidz or raidz1) is similar to RAID-5. Double-parity RAID-Z (raidz2) is similar to RAID-6.
For more information about triple-parity RAID-Z (raidz3), see the following blog:
http://blogs.oracle.com/ahl/entry/triple_parity_raid_z
All traditional RAID-5-like algorithms (RAID-4, RAID-6, RDP, and EVEN-ODD, for example) can experience a problem known as the RAID-5 write hole: if only part of a RAID-5 stripe is written and power is lost before all blocks have been written to disk, the parity remains unsynchronized with the data and is therefore useless forever (unless a subsequent full-stripe write overwrites it). In RAID-Z, ZFS uses variable-width RAID stripes so that all writes are full-stripe writes. This design is possible only because ZFS integrates file system and device management in such a way that the file system's metadata has enough information about the underlying data redundancy model to handle variable-width RAID stripes. RAID-Z is the world's first software-only solution to the RAID-5 write hole.
A RAID-Z configuration with N disks of size X and P parity disks can hold approximately (N-P)*X bytes and can withstand P devices failing before data integrity is compromised. You need at least two disks for a single-parity RAID-Z configuration, at least three disks for a double-parity RAID-Z configuration, and so on. For example, if you have three disks in a single-parity RAID-Z configuration, parity data occupies disk space equal to one of the three disks. No special hardware is required to create a RAID-Z configuration.
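As a worked example of the (N-P)*X estimate, assume a double-parity (raidz2) configuration of five 1-TB disks, so N=5, P=2, and X=1 TB:

(N-P)*X = (5-2)*1 TB = 3 TB of usable space

Any two of the five disks can fail before data integrity is compromised.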
Conceptually, a RAID-Z configuration with three disks would look similar to the following:
raidz c1t0d0 c2t0d0 c3t0d0
Conceptually, a more complex RAID-Z configuration would look similar to the following:
raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 raidz c8t0d0 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0
If you are creating a RAID-Z configuration with many disks, consider splitting the disks into multiple groupings. For example, a RAID-Z configuration with 14 disks is better split into two 7-disk groupings. RAID-Z configurations with single-digit groupings of disks should perform better.
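For example, the 14-disk configuration above could be created as two 7-disk RAID-Z groupings in a single command. This is a sketch only; the pool name tank and the device names are placeholders:

# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 \
  raidz c8t0d0 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0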
For information about creating a RAID-Z storage pool, see Creating a RAID-Z Storage Pool.
For more information about choosing between a mirrored configuration or a RAID-Z configuration based on performance and disk space considerations, see the following blog entry:
http://blogs.oracle.com/roch/entry/when_to_and_not_to
For additional information about RAID-Z storage pool recommendations, see Chapter 12, Recommended Oracle Solaris ZFS Practices.
ZFS Hybrid Storage Pool

The ZFS hybrid storage pool, available in Oracle's Sun Storage 7000 product series, is a special storage pool that combines DRAM, SSDs, and HDDs to improve performance and increase capacity while reducing power consumption. With this product's management interface, you can select the ZFS redundancy configuration of the storage pool and easily manage other configuration options.
For more information about this product, see the Sun Storage Unified Storage System Administration Guide.
Self-Healing Data in a Redundant Configuration

ZFS provides self-healing data in a mirrored or RAID-Z configuration.
When a bad data block is detected, not only does ZFS fetch the correct data from another redundant copy, but it also repairs the bad data by replacing it with the good copy.
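Self-healing occurs automatically whenever redundant data is read. You can also verify, and repair, every block in a pool on demand by scrubbing it. A sketch, assuming a pool named tank:

# zpool scrub tank
# zpool status -v tank

The zpool scrub command traverses all data in the pool and repairs bad blocks from good redundant copies; zpool status reports the scrub's progress and results.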
Dynamic Striping in a Storage Pool

ZFS dynamically stripes data across all top-level virtual devices. The decision about where to place data is made at write time, so no fixed-width stripes are created at allocation time.
When new virtual devices are added to a pool, ZFS gradually allocates data to the new device in order to maintain performance and disk space allocation policies. Each virtual device can also be a mirror or a RAID-Z device that contains other disk devices or files. This configuration gives you flexibility in controlling the fault characteristics of your pool. For example, you could create the following configurations out of four disks (creation commands are sketched after this list):
Four disks using dynamic striping
One four-way RAID-Z configuration
Two two-way mirrors using dynamic striping
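Conceptually, those three alternatives correspond to the following commands. Each is a sketch with a placeholder pool name tank and placeholder device names, and only one can be used for the same four disks:

# zpool create tank c1t0d0 c2t0d0 c3t0d0 c4t0d0
# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0
# zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0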
Although ZFS supports combining different types of virtual devices within the same pool, avoid this practice. For example, you can create a pool with a two-way mirror and a three-way RAID-Z configuration, but the pool's fault tolerance is only as good as its worst virtual device (RAID-Z, in this case). A best practice is to use top-level virtual devices of the same type, each with the same redundancy level.
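When the requested top-level virtual devices have mismatched replication levels, zpool create refuses the request unless you override it. A sketch of the mixed configuration discussed above (tank and the device names are again placeholders):

# zpool create tank mirror c1t0d0 c2t0d0 raidz c3t0d0 c4t0d0 c5t0d0

This command reports a mismatched replication level error and proceeds only if forced with the -f option.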