The following sections describe how to identify and resolve problems with your ZFS file systems or storage pools.
You can use the following features to identify problems with your ZFS configuration:
Detailed ZFS storage pool information can be displayed by using the zpool status command.
Pool and device failures are reported through ZFS/FMA diagnostic messages.
Previous ZFS commands that modified pool state information can be displayed by using the zpool history command.
Most ZFS troubleshooting involves the zpool status command. This command analyzes the various failures in a system and identifies the most severe problem, presenting you with a suggested action and a link to a knowledge article for more information. Note that the command only identifies a single problem with a pool, though multiple problems can exist. For example, data corruption errors generally imply that one of the devices has failed, but replacing the failed device might not resolve all of the data corruption problems.
In addition, a ZFS diagnostic engine diagnoses and reports pool failures and device failures. Checksum, I/O, device, and pool errors associated with these failures are also reported. ZFS failures as reported by fmd are displayed on the console as well as in the system messages file. In most cases, the fmd message directs you to the zpool status command for further recovery instructions.
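For example, a fault message similar to the following might appear on the console when a device fails. The message ID, platform, host name, timestamps, and event ID shown here are illustrative placeholders; the actual values depend on the fault and the system:

SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Jun 20 13:09:55 MDT 2012
PLATFORM: ORCL,SPARC-T3-4, CSN: -, HOSTNAME: tardis
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: e13312e0-be0a-439b-d7d3-cddaefe2a926
DESC: A ZFS device failed. Refer to http://support.oracle.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.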
The basic recovery process is as follows:
If appropriate, use the zpool history command to identify the ZFS commands that preceded the error scenario. For example:
# zpool history tank
History for 'tank':
2010-07-15.12:06:50 zpool create tank mirror c0t1d0 c0t2d0 c0t3d0
2010-07-15.12:06:58 zfs create tank/eric
2010-07-15.12:07:01 zfs set checksum=off tank/eric
In this output, note that checksums are disabled for the tank/eric file system. This configuration is not recommended.
Identify the errors through the fmd messages that are displayed on the system console or in the /var/adm/messages file.
Find further repair instructions by using the zpool status -x command.
Repair the failures, which involves the following steps:
Replacing the unavailable or missing device and bringing it online.
Restoring the faulted configuration or corrupted data from a backup.
Verifying the recovery by using the zpool status -x command.
Backing up your restored configuration, if applicable.
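For example, the following sketch shows one possible recovery sequence for a hypothetical mirrored pool named tank in which disk c1t1d0 has failed and is being replaced by a new disk c2t1d0. Both device names are assumptions for illustration:

# zpool status -x
# zpool replace tank c1t1d0 c2t1d0
# zpool status -x

If the failure was transient rather than permanent, you can instead clear the device's error counters with zpool clear tank c1t1d0 and then verify the pool's health in the same way.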
This section describes how to interpret zpool status output in order to diagnose the type of failures that can occur. Although most of the work is performed automatically by the command, it is important to understand exactly what problems are being identified in order to diagnose the failure. Subsequent sections describe how to repair the various problems that you might encounter.
The easiest way to determine if any known problems exist on a system is to use the zpool status -x command. This command describes only pools that are exhibiting problems. If no unhealthy pools exist on the system, then the command displays the following:
# zpool status -x
all pools are healthy
Without the -x flag, the command displays the complete status for all pools (or the requested pool, if specified on the command line), even if the pools are otherwise healthy.
For more information about command-line options to the zpool status command, see Querying ZFS Storage Pool Status.
The complete zpool status output looks similar to the following:
# zpool status pond
  pool: pond
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jun 20 13:16:09 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        pond                       DEGRADED     0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 DEGRADED     0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  UNAVAIL      0     0     0

errors: No known data errors
This output is described in the following section.
The overall pool status section of the zpool status output contains the following fields, some of which are displayed only for pools exhibiting problems:

pool – Identifies the name of the pool.

state – Indicates the current health of the pool. This information refers only to the ability of the pool to provide the necessary replication level.

status – Describes what is wrong with the pool. This field is omitted if no errors are found.

action – A recommended action for repairing the errors. This field is omitted if no errors are found.

see – Refers to a knowledge article containing detailed repair information. Because online articles are updated more often than this guide, always reference them for the most up-to-date repair procedures. This field is omitted if no errors are found.

scan – Identifies the current status of a scrub operation: the date and time that the last scrub was completed, whether a scrub is in progress, or whether no scrub was requested.

errors – Identifies known data errors or the absence of known data errors.
The config field in the zpool status output describes the configuration of the devices in the pool, as well as their state and any errors generated from the devices. The state can be one of the following: ONLINE, DEGRADED, FAULTED, SUSPENDED, or UNAVAIL. If the state is anything but ONLINE, the fault tolerance of the pool has been compromised.
The second section of the configuration output displays error statistics. These errors are divided into three categories:
READ – I/O errors that occurred while issuing a read request
WRITE – I/O errors that occurred while issuing a write request
CKSUM – Checksum errors, meaning that the device returned corrupted data as the result of a read request
These errors can be used to determine if the damage is permanent. A small number of I/O errors might indicate a temporary outage, while a large number might indicate a permanent problem with the device. These errors do not necessarily correspond to data corruption as interpreted by applications. If a device is in a redundant configuration, the individual devices might show uncorrectable errors while no errors appear at the mirror or RAID-Z device level. In such cases, ZFS successfully retrieved the good data and attempted to heal the damaged data from existing replicas.
For more information about interpreting these errors, see Determining the Type of Device Failure.
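If you conclude from these statistics that a device's errors were transient, for example, the result of a cabling problem that has since been corrected, you can reset the error counters with the zpool clear command. The pool and device names below reuse the earlier example and are illustrative:

# zpool clear pond c0t5000C500335DC60Fd0

Running zpool clear with only the pool name clears the errors on every device in the pool.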
Finally, additional auxiliary information is displayed in the last column of the zpool status output. This information expands on the state field, aiding in the diagnosis of failures. If a device is UNAVAIL, this field indicates whether the device is inaccessible or whether the data on the device is corrupted. If the device is undergoing resilvering, this field displays the current progress.
For information about monitoring resilvering progress, see Viewing Resilvering Status.
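For example, a device that is being resilvered is flagged in the last column, similar to the following illustrative excerpt from the device configuration listing:

            c0t5000C500335DC60Fd0  DEGRADED     0     0     0  (resilvering)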
The scrub section of the zpool status output describes the current status of any explicit scrubbing operations. This information is distinct from whether any errors are detected on the system, though it can be used to determine the accuracy of the data corruption error reporting. If the last scrub ended recently, any existing data corruption has most likely already been discovered.
The following zpool status scrub status messages are provided:
Scrub in-progress report. For example:
scan: scrub in progress since Wed Jun 20 14:56:52 2012
    529M scanned out of 71.8G at 48.1M/s, 0h25m to go
    0 repaired, 0.72% done
Scrub completion message. For example:
scan: scrub repaired 0 in 0h11m with 0 errors on Wed Jun 20 15:08:23 2012
Ongoing scrub cancellation message. For example:
scan: scrub canceled on Wed Jun 20 16:04:40 2012
Scrub completion messages persist across system reboots.
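You can start an explicit scrub, or stop one that is in progress, with the zpool scrub command. The pool name below is illustrative:

# zpool scrub pond
# zpool scrub -s pond

The -s option stops the scrub that is currently in progress and produces the cancellation message shown above.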
For more information about data scrubbing and how to interpret this information, see Checking ZFS File System Integrity.
The zpool status command also shows whether any known errors are associated with the pool. These errors might have been found during data scrubbing or during normal operation. ZFS maintains a persistent log of all data errors associated with a pool. This log is rotated whenever a complete scrub of the system finishes.
Data corruption errors are always fatal. Their presence indicates that at least one application experienced an I/O error due to corrupt data within the pool. Device errors within a redundant pool do not result in data corruption and are not recorded as part of this log. By default, only the number of errors found is displayed. A complete list of errors and their specifics can be found by using the zpool status -v option. For example:
# zpool status -v tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://support.oracle.com/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0h0m with 2 errors on Fri Jun 29 16:58:58 2012
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       2     0     0
          c8t0d0    ONLINE       0     0     0
          c8t1d0    ONLINE       2     0     0

errors: Permanent errors have been detected in the following files:

        /tank/file.1
A similar message is also displayed by fmd on the system console and in the /var/adm/messages file. These messages can also be tracked by using the fmdump command.
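For example, running fmdump with no options lists the fault events that the fault manager has logged. The output below is an illustrative sketch; the time, UUID, and message ID depend on the actual events recorded on your system:

# fmdump
TIME                 UUID                                 SUNW-MSG-ID
Jun 29 16:58:58.9431 3745f745-371e-c2a9-b6f9-bfad8b41cdbb ZFS-8000-8A

Passing a specific UUID to fmdump -u uuid -v displays additional detail about that event.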
For more information about interpreting data corruption errors, see Identifying the Type of Data Corruption.
In addition to persistently tracking errors within the pool, ZFS also displays syslog messages when events of interest occur. The following scenarios generate notification events:
Device state transition – If a device becomes FAULTED, ZFS logs a message indicating that the fault tolerance of the pool might be compromised. A similar message is sent if the device is later brought online, restoring the pool to health.
Data corruption – If any data corruption is detected, ZFS logs a message describing when and where the corruption was detected. This message is only logged the first time it is detected. Subsequent accesses do not generate a message.
Pool failures and device failures – If a pool failure or a device failure occurs, the fault manager daemon reports these errors through syslog messages as well as the fmdump command.
If ZFS detects a device error and automatically recovers from it, no notification occurs. Such errors do not constitute a failure in the pool redundancy or in data integrity. Moreover, such errors are typically the result of a driver problem accompanied by its own set of error messages.
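To review the faults that the fault manager currently considers active, use the fmadm faulty command. After you have repaired a fault that is not cleared automatically, you can notify the fault manager with fmadm repaired, supplying the FMRI that fmadm faulty reports (shown here as a placeholder):

# fmadm faulty
# fmadm repaired <fmri-reported-by-fmadm-faulty>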