Components of a ZFS Storage Pool
The following sections provide detailed information about the storage pool components: disks and files.
The most basic element of a storage pool is physical storage. Physical storage can be any block device of at least 128 MB in size. Typically, this device is a hard drive that is visible to the system in the /dev/dsk directory.
A storage device can be a whole disk (c1t0d0) or an individual slice (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not require special formatting. ZFS formats the disk using an EFI label to contain a single, large slice. When used in this way, the partition table that is displayed by the format command appears similar to the following:
Current partition table (original):
Total disk sectors available: 143358287 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       68.36GB          143358320
  1 unassigned    wm                 0           0                    0
  2 unassigned    wm                 0           0                    0
  3 unassigned    wm                 0           0                    0
  4 unassigned    wm                 0           0                    0
  5 unassigned    wm                 0           0                    0
  6 unassigned    wm                 0           0                    0
  8   reserved    wm         143358321        8.00MB          143374704
When Oracle Solaris 11.1 is installed, an EFI (GPT) label is applied to the root pool disks on an x86 based system in most cases. The label looks similar to the following:
Current partition table (original):
Total disk sectors available: 27246525 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0  BIOS_boot    wm               256      256.00MB             524543
  1        usr    wm            524544       12.74GB           27246558
  2 unassigned    wm                 0           0                    0
  3 unassigned    wm                 0           0                    0
  4 unassigned    wm                 0           0                    0
  5 unassigned    wm                 0           0                    0
  6 unassigned    wm                 0           0                    0
  8   reserved    wm          27246559        8.00MB           27262942
In the above output, partition 0 (BIOS boot) contains required GPT boot information. Similar to partition 8, it requires no administration and should not be modified. The root file system is contained in partition 1.
On a SPARC based system with GPT-aware firmware that has been installed with Oracle Solaris 11.1, an EFI (GPT) disk label is also applied to the root pool disk. For example:
Current partition table (original):
Total disk sectors available: 143358320 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       68.36GB          143358320
  1 unassigned    wm                 0           0                    0
  2 unassigned    wm                 0           0                    0
  3 unassigned    wm                 0           0                    0
  4 unassigned    wm                 0           0                    0
  5 unassigned    wm                 0           0                    0
  6 unassigned    wm                 0           0                    0
  8   reserved    wm         143358321        8.00MB          143374704
Review the following considerations when using whole disks in your ZFS storage pools:
When using a whole disk, the disk is generally named by using the /dev/dsk/cNtNdN naming convention. Some third-party drivers use a different naming convention or place disks in a location other than the /dev/dsk directory. To use these disks, you must manually label the disk and provide a slice to ZFS.
On an x86 based system, the disk must have a valid Solaris fdisk partition. For more information about creating or changing a Solaris fdisk partition, see Setting Up Disks for ZFS File Systems (Task Map) in Oracle Solaris 11.1 Administration: Devices and File Systems.
ZFS applies an EFI label when you create a storage pool with whole disks. For more information about EFI labels, see EFI (GPT) Disk Label in Oracle Solaris 11.1 Administration: Devices and File Systems.
The Oracle Solaris 11.1 installer applies an EFI (GPT) label to the root pool disks on a SPARC based system with GPT-aware firmware and on an x86 based system, in most cases. For more information, see ZFS Root Pool Requirements.
Disks can be specified by using either the full path, such as /dev/dsk/c1t0d0, or a shorthand name that consists of the device name within the /dev/dsk directory, such as c1t0d0. For example, the following are valid disk names:
c1t0d0
/dev/dsk/c1t0d0
/dev/foo/disk
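For example, a pool might be created by specifying a whole disk with a command similar to the following (the pool name tank and the device name are illustrative):

# zpool create tank c1t0d0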
Disks can be labeled with a legacy Solaris VTOC (SMI) label when you create a storage pool with a disk slice, but using disk slices for a pool is not recommended because management of disk slices is more difficult.
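For reference, a slice-based pool could be created with a command similar to the following (the device name is illustrative), although whole disks remain the recommended configuration:

# zpool create tank c1t0d0s0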
On a SPARC based system, a 72-GB disk has 68 GB of usable space located in slice 0 as shown in the following format output:
# format
.
.
.
Specify disk (enter its number): 4
selecting c1t1d0
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  1 unassigned    wm       0                0          (0/0/0)            0
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0          (0/0/0)            0
  4 unassigned    wm       0                0          (0/0/0)            0
  5 unassigned    wm       0                0          (0/0/0)            0
  6 unassigned    wm       0                0          (0/0/0)            0
  7 unassigned    wm       0                0          (0/0/0)            0
On an x86 based system, a 72-GB disk has 68 GB of usable disk space located in slice 0, as shown in the following format output. A small amount of boot information is contained in slice 8. Slice 8 requires no administration and cannot be changed.
# format
.
.
.
selecting c1t0d0
partition> p
Current partition table (original):
Total disk cylinders available: 49779 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 - 49778       68.36GB    (49778/0/0) 143360640
  1 unassigned    wu       0                0          (0/0/0)            0
  2     backup    wm       0 - 49778       68.36GB    (49779/0/0) 143363520
  3 unassigned    wu       0                0          (0/0/0)            0
  4 unassigned    wu       0                0          (0/0/0)            0
  5 unassigned    wu       0                0          (0/0/0)            0
  6 unassigned    wu       0                0          (0/0/0)            0
  7 unassigned    wu       0                0          (0/0/0)            0
  8       boot    wu       0 -     0        1.41MB    (1/0/0)          2880
  9 unassigned    wu       0                0          (0/0/0)            0
An fdisk partition also exists on an x86 based system. An fdisk partition is represented by a /dev/dsk/cN[tN]dNpN device name and acts as a container for the disk's available slices. Do not use a cN[tN]dNpN device for a ZFS storage pool component because this configuration is neither tested nor supported.
ZFS also allows you to use files as virtual devices in your storage pool. This feature is intended primarily for testing and simple experimentation, not for production use.
If you create a ZFS pool backed by files on a UFS file system, then you are implicitly relying on UFS to guarantee correctness and synchronous semantics.
If you create a ZFS pool backed by files or volumes that are created on another ZFS pool, then the system might deadlock or panic.
However, files can be quite useful when you are first trying out ZFS or experimenting with more complicated configurations when insufficient physical devices are present. All files must be specified as complete paths and must be at least 64 MB in size.
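For example, a file-backed pool for experimentation might be created with commands similar to the following (the directory, file path, and pool name are illustrative):

# mkdir -p /export/zfs-files
# mkfile 100m /export/zfs-files/file1
# zpool create filepool /export/zfs-files/file1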
Review the following considerations when creating and managing ZFS storage pools.
Using whole physical disks is the easiest way to create ZFS storage pools. ZFS configurations become progressively more complex, from management, reliability, and performance perspectives, when you build pools from disk slices, LUNs in hardware RAID arrays, or volumes presented by software-based volume managers. The following considerations might help you determine how to configure ZFS with other hardware or software storage solutions:
If you construct a ZFS configuration on top of LUNs from hardware RAID arrays, you need to understand the relationship between ZFS redundancy features and the redundancy features offered by the array. Certain configurations might provide adequate redundancy and performance, but other configurations might not.
You can construct logical devices for ZFS using volumes presented by software-based volume managers. However, these configurations are not recommended. Although ZFS functions properly on such devices, less-than-optimal performance might be the result.
For additional information about storage pool recommendations, see Chapter 12, Recommended Oracle Solaris ZFS Practices.
Disks are identified both by their path and by their device ID, if available. On systems where device ID information is available, this identification method allows devices to be reconfigured without updating ZFS. Because device ID generation and management can vary by system, export the pool before moving devices, such as moving a disk from one controller to another. A system event, such as a firmware update or other hardware change, might change the device IDs in your ZFS storage pool, which can cause the devices to become unavailable.
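For example, before moving a disk from one controller to another, you might export the pool and then import it after the hardware change (the pool name tank is illustrative):

# zpool export tank
# zpool import tank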