Oracle Solaris 11.1 Administration: ZFS File Systems
ZFS is designed to be robust and stable despite errors. Even so, software bugs or certain unexpected problems might cause the system to panic when a pool is accessed. As part of the boot process, each pool must be opened, which means that such failures can cause a system to enter a panic-reboot loop. To recover from this situation, ZFS must be informed not to look for any pools on startup.
ZFS maintains an internal cache of available pools and their configurations in /etc/zfs/zpool.cache. The location and contents of this file are private and are subject to change. If the system becomes unbootable, boot to the milestone none by using the -m milestone=none boot option. After the system is up, remount your root file system as writable and then rename or move the /etc/zfs/zpool.cache file to another location. These actions cause ZFS to forget that any pools exist on the system, preventing it from trying to access the unhealthy pool causing the problem. You can then proceed to a normal system state by issuing the svcadm milestone all command. You can use a similar process when booting from an alternate root to perform repairs.
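For example, the following sketch shows one possible recovery sequence on a SPARC system. The destination for the renamed cache file (/var/tmp/zpool.cache.broken) is only an illustration; any location outside /etc/zfs works. On an x86 system, the -m milestone=none option is added to the kernel boot arguments instead of the ok prompt.

ok boot -m milestone=none
   (log in as root and remount the root file system as writable, if necessary)
# mv /etc/zfs/zpool.cache /var/tmp/zpool.cache.broken
# svcadm milestone all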
After the system is up, you can attempt to import the pool by using the zpool import command. However, doing so will likely cause the same error that occurred during boot, because the command uses the same mechanism to access pools. If multiple pools exist on the system, do the following (a sample command sequence follows the list):
Rename or move the zpool.cache file to another location as discussed in the preceding text.
Determine which pool might have problems by using the fmdump -eV command to display the pools with reported fatal errors.
Import the pools one by one, skipping the pools that are having problems, as described in the fmdump output.
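For example, assuming a healthy pool named users and a damaged pool named tank (both names are illustrative), the sequence might look like the following. Running zpool import with no arguments lists the pools that are available for import, and the fmdump output identifies which of them reported fatal errors.

# zpool import
   (lists the pools that are available for import; users and tank in this example)
# fmdump -eV
   (review the error events; tank is the pool reporting fatal errors in this example)
# zpool import users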