Querying ZFS Storage Pool Status
The zpool list command provides several ways to request information regarding pool status. The information available generally falls into three categories: basic usage information, I/O statistics, and health status. All three types of storage pool information are covered in this section.
You can use the zpool list command to display basic information about pools.
With no arguments, the zpool list command displays the following information for all pools on the system:
# zpool list
NAME    SIZE  ALLOC  FREE   CAP  HEALTH  ALTROOT
tank   80.0G  22.3G  47.7G  28%  ONLINE  -
dozer   1.2T   384G   816G  32%  ONLINE  -
This command output displays the following information:
NAME: The name of the pool.
SIZE: The total size of the pool, equal to the sum of the sizes of all top-level virtual devices.
ALLOC: The amount of physical space allocated to all datasets and internal metadata. Note that this amount differs from the amount of disk space as reported at the file system level.
For more information about determining available file system space, see ZFS Disk Space Accounting.
FREE: The amount of unallocated space in the pool.
CAP: The amount of disk space used, expressed as a percentage of the total disk space.
HEALTH: The current health status of the pool.
For more information about pool health, see Determining the Health Status of ZFS Storage Pools.
ALTROOT: The alternate root of the pool, if one exists.
For more information about alternate root pools, see Using ZFS Alternate Root Pools.
You can also gather statistics for a specific pool by specifying the pool name. For example:
# zpool list tank
NAME    SIZE  ALLOC  FREE   CAP  HEALTH  ALTROOT
tank   80.0G  22.3G  47.7G  28%  ONLINE  -
You can use the zpool list interval and count options to gather statistics over a period of time. In addition, you can display a time stamp by using the -T option. For example:
# zpool list -T d 3 2
Tue Nov 2 10:36:11 MDT 2010
NAME    SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
pool   33.8G  83.5K  33.7G   0%  1.00x  ONLINE  -
rpool  33.8G  12.2G  21.5G  36%  1.00x  ONLINE  -
Tue Nov 2 10:36:14 MDT 2010
pool   33.8G  83.5K  33.7G   0%  1.00x  ONLINE  -
rpool  33.8G  12.2G  21.5G  36%  1.00x  ONLINE  -
You can use the zpool status -l option to display information about the physical location of pool devices. Reviewing the physical location information is helpful when you need to physically remove or replace a disk.
In addition, you can use the fmadm add-alias command to include a disk alias name that helps you identify the physical location of disks in your environment. For example:
# fmadm add-alias SUN-Storage-J4400.1002QCQ015 Lab10Rack5...
# zpool status -l tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug 3 16:00:35 2012
config:

        NAME                                         STATE     READ WRITE CKSUM
        tank                                         ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_02/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_20/disk  ONLINE       0     0     0
          mirror-1                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_22/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_14/disk  ONLINE       0     0     0
          mirror-2                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_10/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_16/disk  ONLINE       0     0     0
          mirror-3                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_01/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_21/disk  ONLINE       0     0     0
          mirror-4                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_23/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_15/disk  ONLINE       0     0     0
          mirror-5                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_09/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_04/disk  ONLINE       0     0     0
          mirror-6                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_08/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_05/disk  ONLINE       0     0     0
          mirror-7                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_07/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_11/disk  ONLINE       0     0     0
          mirror-8                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_06/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_19/disk  ONLINE       0     0     0
          mirror-9                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_00/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_13/disk  ONLINE       0     0     0
          mirror-10                                  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_03/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_18/disk  ONLINE       0     0     0
        spares
          /dev/chassis/Lab10Rack5.../DISK_17/disk    AVAIL
          /dev/chassis/Lab10Rack5.../DISK_12/disk    AVAIL

errors: No known data errors
Specific statistics can be requested by using the -o option. This option provides custom reports or a quick way to list pertinent information. For example, to list only the name and size of each pool, you use the following syntax:
# zpool list -o name,size
NAME    SIZE
tank   80.0G
dozer   1.2T
The column names correspond to the properties that are listed in Displaying Information About All Storage Pools or a Specific Pool.
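For example, a wider custom report might also include the allocated, capacity, and health properties (property names shown here follow the columns in the earlier example output; confirm against the zpool(1M) man page for your release if a property is not recognized):
# zpool list -o name,size,allocated,capacity,health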
The default output for the zpool list command is designed for readability and is not easy to use as part of a shell script. To aid programmatic uses of the command, the -H option can be used to suppress the column headings and separate fields by tabs, rather than by spaces. For example, to request a list of all pool names on the system, you would use the following syntax:
# zpool list -Ho name
tank
dozer
Here is another example:
# zpool list -H -o name,size
tank    80.0G
dozer   1.2T
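The tab-separated output is easy to consume in a shell script. The following is a minimal sketch only, assuming the capacity property is reported as a percentage such as 28%; the 80 percent threshold is an illustration, not a recommendation from this guide:
#!/bin/sh
# Sketch: warn about pools that are more than 80 percent full.
zpool list -H -o name,capacity | while read name cap
do
        pct=${cap%\%}                   # strip the trailing % sign
        if [ "$pct" -gt 80 ]; then
                echo "pool $name is $cap full"
        fi
done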
ZFS automatically logs successful zfs and zpool commands that modify pool state information. This information can be displayed by using the zpool history command.
For example, the following syntax displays the command output for the root pool:
# zpool history
History for 'rpool':
2012-04-06.14:02:55 zpool create -f rpool c3t0d0s0
2012-04-06.14:02:56 zfs create -p -o mountpoint=/export rpool/export
2012-04-06.14:02:58 zfs set mountpoint=/export rpool/export
2012-04-06.14:02:58 zfs create -p rpool/export/home
2012-04-06.14:03:03 zfs create -p -V 2048m rpool/swap
2012-04-06.14:03:08 zfs set primarycache=metadata rpool/swap
2012-04-06.14:03:09 zfs create -p -V 4094m rpool/dump
2012-04-06.14:26:47 zpool set bootfs=rpool/ROOT/s11u1 rpool
2012-04-06.14:31:15 zfs set primarycache=metadata rpool/swap
2012-04-06.14:31:46 zfs create -o canmount=noauto -o mountpoint=/var/share rpool/VARSHARE
2012-04-06.15:22:33 zfs set primarycache=metadata rpool/swap
2012-04-06.16:42:48 zfs set primarycache=metadata rpool/swap
2012-04-09.16:17:24 zfs snapshot -r rpool/ROOT@yesterday
2012-04-09.16:17:54 zfs snapshot -r rpool/ROOT@now
You can use similar output on your system to identify the actual set of ZFS commands that were executed, which is helpful when you troubleshoot an error condition.
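For example, to narrow the output to commands that affected a particular dataset, you might pipe the history through grep (the dataset name here is only an illustration):
# zpool history rpool | grep rpool/swap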
The features of the history log are as follows:
The log cannot be disabled.
The log is saved persistently on disk, which means that the log is saved across system reboots.
The log is implemented as a ring buffer. The minimum size is 128 KB. The maximum size is 32 MB.
For smaller pools, the maximum size is capped at 1 percent of the pool size, where the size is determined at pool creation time.
The log requires no administration, which means that tuning the size of the log or changing the location of the log is unnecessary.
To identify the command history of a specific storage pool, use syntax similar to the following:
# zpool history tank
2012-01-25.16:35:32 zpool create -f tank mirror c3t1d0 c3t2d0 spare c3t3d0
2012-02-17.13:04:10 zfs create tank/test
2012-02-17.13:05:01 zfs snapshot -r tank/test@snap1
Use the -l option to display a long format that includes the user name, the host name, and the zone in which the operation was performed. For example:
# zpool history -l tank
History for 'tank':
2012-01-25.16:35:32 zpool create -f tank mirror c3t1d0 c3t2d0 spare c3t3d0 [user root on tardis:global]
2012-02-17.13:04:10 zfs create tank/test [user root on tardis:global]
2012-02-17.13:05:01 zfs snapshot -r tank/test@snap1 [user root on tardis:global]
Use the -i option to display internal event information that can be used for diagnostic purposes. For example:
# zpool history -i tank
History for 'tank':
2012-01-25.16:35:32 zpool create -f tank mirror c3t1d0 c3t2d0 spare c3t3d0
2012-01-25.16:35:32 [internal pool create txg:5] pool spa 33; zfs spa 33; zpl 5; uts tardis 5.11 11.1 sun4v
2012-02-17.13:04:10 zfs create tank/test
2012-02-17.13:04:10 [internal property set txg:66094] $share2=2 dataset = 34
2012-02-17.13:04:31 [internal snapshot txg:66095] dataset = 56
2012-02-17.13:05:01 zfs snapshot -r tank/test@snap1
2012-02-17.13:08:00 [internal user hold txg:66102] <.send-4736-1> temp = 1
...
To request I/O statistics for a pool or specific virtual devices, use the zpool iostat command. Similar to the iostat command, this command can display a static snapshot of all I/O activity, as well as updated statistics for every specified interval. The following statistics are reported:
alloc capacity: The amount of data currently stored in the pool or device. This amount differs from the amount of disk space available to actual file systems by a small margin due to internal implementation details.
For more information about the differences between pool space and dataset space, see ZFS Disk Space Accounting.
free capacity: The amount of disk space available in the pool or device. Like the alloc statistic, this amount differs from the amount of disk space available to datasets by a small margin.
read operations: The number of read I/O operations sent to the pool or device, including metadata requests.
write operations: The number of write I/O operations sent to the pool or device.
read bandwidth: The bandwidth of all read operations (including metadata), expressed as units per second.
write bandwidth: The bandwidth of all write operations, expressed as units per second.
With no options, the zpool iostat command displays the accumulated statistics since boot for all pools on the system. For example:
# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.05G  61.9G      0      0    786    107
tank        31.3G  36.7G      4      1   296K  86.1K
----------  -----  -----  -----  -----  -----  -----
Because these statistics are cumulative since boot, bandwidth might appear low if the pool is relatively idle. You can request a more accurate view of current bandwidth usage by specifying an interval. For example:
# zpool iostat tank 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        18.5G  49.5G      0    187      0  23.3M
tank        18.5G  49.5G      0    464      0  57.7M
tank        18.5G  49.5G      0    457      0  56.6M
tank        18.8G  49.2G      0    435      0  51.3M
In this example, the command displays usage statistics for the pool tank every two seconds until you press Control-C. Alternatively, you can specify an additional count argument, which causes the command to terminate after the specified number of iterations.
For example, zpool iostat 2 3 would print a summary every two seconds for three iterations, for a total of six seconds. If there is only a single pool, then the statistics are displayed on consecutive lines. If more than one pool exists, then an additional dashed line delineates each iteration to provide visual separation.
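For example, the following commands show the interval and count arguments in use; the second command redirects the samples to a file for later review (the file name is only an illustration):
# zpool iostat 2 3
# zpool iostat tank 2 3 > /var/tmp/tank.iostat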
In addition to pool-wide I/O statistics, the zpool iostat command can display I/O statistics for virtual devices. This command can be used to identify abnormally slow devices or to observe the distribution of I/O generated by ZFS. To request the complete virtual device layout as well as all I/O statistics, use the zpool iostat -v command. For example:
# zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.05G  61.9G      0      0    785    107
  mirror    6.05G  61.9G      0      0    785    107
    c1t0d0s0    -      -      0      0    578    109
    c1t1d0s0    -      -      0      0    595    109
----------  -----  -----  -----  -----  -----  -----
tank        36.5G  31.5G      4      1   295K   146K
  mirror    36.5G  31.5G    126     45  8.13M  4.01M
    c1t2d0      -      -      0      3   100K   386K
    c1t3d0      -      -      0      3   104K   386K
----------  -----  -----  -----  -----  -----  -----
Note two important points when viewing I/O statistics for virtual devices:
First, disk space usage statistics are only available for top-level virtual devices. The way in which disk space is allocated among mirror and RAID-Z virtual devices is particular to the implementation and not easily expressed as a single number.
Second, the numbers might not add up exactly as you would expect them to. In particular, operations across RAID-Z and mirrored devices will not be exactly equal. This difference is particularly noticeable immediately after a pool is created, as a significant amount of I/O is done directly to the disks as part of pool creation, which is not accounted for at the mirror level. Over time, these numbers gradually equalize. However, broken, unresponsive, or offline devices can affect this symmetry as well.
You can use the same set of options (interval and count) when examining virtual device statistics.
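For example, the following command (the pool name, interval, and count are illustrative) displays per-device statistics for the pool tank every 5 seconds for three iterations:
# zpool iostat -v tank 5 3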
You can also display physical location information about the pool's virtual devices. For example:
# zpool iostat -lv
                                                   capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
export                                          2.39T  2.14T     13     27  42.7K   300K
  mirror                                         490G   438G      2      5  8.53K  60.3K
    /dev/chassis/lab10rack15/SCSI_Device__2/disk    -      -      1      0  4.47K  60.3K
    /dev/chassis/lab10rack15/SCSI_Device__3/disk    -      -      1      0  4.45K  60.3K
  mirror                                         490G   438G      2      5  8.62K  59.9K
    /dev/chassis/lab10rack15/SCSI_Device__4/disk    -      -      1      0  4.52K  59.9K
    /dev/chassis/lab10rack15/SCSI_Device__5/disk    -      -      1      0  4.48K  59.9K
  mirror                                         490G   438G      2      5  8.60K  60.2K
    /dev/chassis/lab10rack15/SCSI_Device__6/disk    -      -      1      0  4.50K  60.2K
    /dev/chassis/lab10rack15/SCSI_Device__7/disk    -      -      1      0  4.49K  60.2K
  mirror                                         490G   438G      2      5  8.47K  60.1K
    /dev/chassis/lab10rack15/SCSI_Device__8/disk    -      -      1      0  4.42K  60.1K
    /dev/chassis/lab10rack15/SCSI_Device__9/disk    -      -      1      0  4.43K  60.1K
...
ZFS provides an integrated method of examining pool and device health. The health of a pool is determined from the state of all its devices. This state information is displayed by using the zpool status command. In addition, potential pool and device failures are reported by fmd, displayed on the system console, and logged in the /var/adm/messages file.
This section describes how to determine pool and device health. This chapter does not document how to repair or recover from unhealthy pools. For more information about troubleshooting and data recovery, see Chapter 10, Oracle Solaris ZFS Troubleshooting and Pool Recovery.
A pool's health status is described by one of four states:
DEGRADED: A pool with one or more failed devices, but the data is still available due to a redundant configuration.
ONLINE: A pool that has all devices operating normally.
SUSPENDED: A pool that is waiting for device connectivity to be restored. A SUSPENDED pool remains in the wait state until the device issue is resolved.
UNAVAIL: A pool with corrupted metadata, or one or more unavailable devices, and insufficient replicas to continue functioning.
Each pool device can fall into one of the following states:
DEGRADED: The virtual device has experienced a failure but can still function. This state is most common when a mirror or RAID-Z device has lost one or more constituent devices. The fault tolerance of the pool might be compromised, as a subsequent fault in another device might be unrecoverable.
OFFLINE: The device has been explicitly taken offline by the administrator.
ONLINE: The device or virtual device is in normal working order. Although some transient errors might still occur, the device is otherwise in working order.
REMOVED: The device was physically removed while the system was running. Device removal detection is hardware-dependent and might not be supported on all platforms.
UNAVAIL: The device or virtual device cannot be opened. In some cases, pools with UNAVAIL devices appear in DEGRADED mode. If a top-level virtual device is UNAVAIL, then nothing in the pool can be accessed.
The health of a pool is determined from the health of all its top-level virtual devices. If all virtual devices are ONLINE, then the pool is also ONLINE. If any one of the virtual devices is DEGRADED or UNAVAIL, then the pool is also DEGRADED. If a top-level virtual device is UNAVAIL or OFFLINE, then the pool is also UNAVAIL or SUSPENDED. A pool in the UNAVAIL or SUSPENDED state is completely inaccessible. No data can be recovered until the necessary devices are attached or repaired. A pool in the DEGRADED state continues to run, but you might not achieve the same level of data redundancy or data throughput as when the pool is in the ONLINE state.
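As a quick check, you can list each pool's overall state with the health property. The following minimal sketch (the script itself is illustrative and not part of the zpool utility) reports any pool whose state is not ONLINE:
#!/bin/sh
# Sketch: report pools whose overall state is not ONLINE.
zpool list -H -o name,health | while read name health
do
        [ "$health" != "ONLINE" ] && echo "pool $name is $health"
done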
The zpool status command also provides details about resilver and scrub operations.
Resilver in-progress report. For example:
scan: resilver in progress since Wed Jun 20 14:19:38 2012
    7.43G scanned out of 71.8G at 36.4M/s, 0h30m to go
    7.43G resilvered, 10.35% done
Scrub in-progress report. For example:
scan: scrub in progress since Wed Jun 20 14:56:52 2012
    529M scanned out of 71.8G at 48.1M/s, 0h25m to go
    0 repaired, 0.72% done
Resilver completion message. For example:
scan: resilvered 71.8G in 0h14m with 0 errors on Wed Jun 20 14:33:42 2012
Scrub completion message. For example:
scan: scrub repaired 0 in 0h11m with 0 errors on Wed Jun 20 15:08:23 2012
Ongoing scrub cancellation message. For example:
scan: scrub canceled on Wed Jun 20 16:04:40 2012
Scrub and resilver completion messages persist across system reboots.
You can quickly review pool health status by using the zpool status command as follows:
# zpool status -x
all pools are healthy
Specific pools can be examined by specifying a pool name in the command syntax. Any pool that is not in the ONLINE state should be investigated for potential problems, as described in the next section.
You can request a more detailed health summary status by using the -v option. For example:
# zpool status -v pond
  pool: pond
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jun 20 15:38:08 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        pond                       DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  UNAVAIL      0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0

device details:

        c0t5000C500335F907Fd0    UNAVAIL   cannot open
        status: ZFS detected errors on this device.
                The device was missing.
           see: http://support.oracle.com/msg/ZFS-8000-LR for recovery

errors: No known data errors
This output displays a complete description of why the pool is in its current state, including a readable description of the problem and a link to a knowledge article for more information. Each knowledge article provides up-to-date information about the best way to recover from your current problem. Using the detailed configuration information, you can determine which device is damaged and how to repair the pool.
In the preceding example, the UNAVAIL device should be replaced. After the device is replaced, use the zpool online command to bring the device online, if necessary. For example:
# zpool online pond c0t5000C500335F907Fd0
warning: device 'c0t5000C500335DC60Fd0' onlined, but remains in degraded state
# zpool status -x
all pools are healthy
This output indicates that the device remains in a degraded state until resilvering is complete.
If the autoreplace property is on, you might not have to online the replaced device.
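You can review or change this pool property with the zpool get and zpool set commands. For example:
# zpool get autoreplace pond
# zpool set autoreplace=on pond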
If a pool has an offline device, the command output identifies the problem pool. For example:
# zpool status -x
  pool: pond
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
config:

        NAME                       STATE     READ WRITE CKSUM
        pond                       DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  OFFLINE      0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0

errors: No known data errors
The READ and WRITE columns provide a count of I/O errors that occurred on the device, while the CKSUM column provides a count of uncorrectable checksum errors that occurred on the device. Both error counts indicate a potential device failure, and some corrective action is needed. If non-zero errors are reported for a top-level virtual device, portions of your data might have become inaccessible.
The errors: field identifies any known data errors.
In the preceding example output, the offline device is not causing data errors.
For more information about diagnosing and repairing UNAVAIL pools and data, see Chapter 10, Oracle Solaris ZFS Troubleshooting and Pool Recovery.
You can use the zpool status interval and count options to gather statistics over a period of time. In addition, you can display a time stamp by using the -T option. For example:
# zpool status -T d 3 2
Wed Jun 20 16:10:09 MDT 2012
  pool: pond
 state: ONLINE
  scan: resilvered 9.50K in 0h0m with 0 errors on Wed Jun 20 16:07:34 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        pond                       ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h11m with 0 errors on Wed Jun 20 15:08:23 2012
config:

        NAME                         STATE     READ WRITE CKSUM
        rpool                        ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000C500335BA8C3d0s0  ONLINE       0     0     0
            c0t5000C500335FC3E7d0s0  ONLINE       0     0     0

errors: No known data errors

Wed Jun 20 16:10:12 MDT 2012
  pool: pond
 state: ONLINE
  scan: resilvered 9.50K in 0h0m with 0 errors on Wed Jun 20 16:07:34 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        pond                       ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h11m with 0 errors on Wed Jun 20 15:08:23 2012
config:

        NAME                         STATE     READ WRITE CKSUM
        rpool                        ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000C500335BA8C3d0s0  ONLINE       0     0     0
            c0t5000C500335FC3E7d0s0  ONLINE       0     0     0

errors: No known data errors