The following ZFS file system features, not available in the Oracle Solaris 10 release, are available in Oracle Solaris 11:
ZFS file system encryption – You can encrypt a ZFS file system when it is created. For more information, see Chapter 9, Managing Security.
ZFS file system deduplication – For important information about determining whether your system environment can support ZFS data deduplication, see ZFS Data Deduplication Requirements.
ZFS file system sharing syntax changes – Includes both NFS and SMB file system sharing changes. For more information, see ZFS File System Sharing Changes.
ZFS man page change – The zfs.1m manual page has been revised so that core ZFS file system features remain in the zfs.1m page, but delegated administration, encryption, and share syntax and examples are covered in the zfs_allow.1m, zfs_encrypt.1m, and zfs_share.1m pages.
After the system is installed, review your ZFS storage pool and ZFS file system information.
Display ZFS storage pool information by using the zpool status command.
Display ZFS file system information by using the zfs list command. For example:
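The following is a minimal illustration, assuming the default root pool name, rpool; output varies by system, and a full zfs list listing for an example root pool appears later in this section.

# zpool status rpool
# zfs list -r rpool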
For a description of the root pool components, see Reviewing the Initial ZFS BE After an Installation.
The zpool list and zfs list commands are better than the legacy df and du commands for determining available pool and file system space. With the legacy commands, you cannot easily distinguish pool space from file system space, nor do the legacy commands account for space that is consumed by descendent file systems or snapshots.
For example, the following root pool (rpool) has 5.46 GB allocated and 68.5 GB free.
# zpool list rpool
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool   74G  5.46G  68.5G   7%  1.00x  ONLINE  -
If you compare the pool space accounting with the file system space accounting by reviewing the USED columns of your individual file systems, you can see that the pool space is accounted for. For example:
# zfs list -r rpool
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    5.41G  67.4G  74.5K  /rpool
rpool/ROOT               3.37G  67.4G    31K  legacy
rpool/ROOT/solaris       3.37G  67.4G  3.07G  /
rpool/ROOT/solaris/var    302M  67.4G   214M  /var
rpool/dump               1.01G  67.5G  1000M  -
rpool/export             97.5K  67.4G    32K  /rpool/export
rpool/export/home        65.5K  67.4G    32K  /rpool/export/home
rpool/export/home/admin  33.5K  67.4G  33.5K  /rpool/export/home/admin
rpool/swap               1.03G  67.5G  1.00G  -
The SIZE value that is reported by the zpool list command is generally the amount of physical disk space in the pool, but varies depending on the pool's redundancy level. See the examples below. The zfs list command lists the usable space that is available to file systems, which is disk space minus ZFS pool redundancy metadata overhead, if any.
Non-redundant storage pool – Created with one 136-GB disk, the zpool list command reports SIZE and initial FREE values as 136 GB. The initial AVAIL space reported by the zfs list command is 134 GB, due to a small amount of pool metadata overhead. For example:
# zpool create tank c0t6d0
# zpool list tank
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank   136G  95.5K  136G   0%  1.00x  ONLINE  -
# zfs list tank
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank   72K   134G    21K  /tank
Mirrored storage pool – Created with two 136-GB disks, the zpool list command reports SIZE as 136 GB and the initial FREE value as 136 GB. This reporting is referred to as the deflated space value. The initial AVAIL space reported by the zfs list command is 134 GB, due to a small amount of pool metadata overhead. For example:
# zpool create tank mirror c0t6d0 c0t7d0
# zpool list tank
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank   136G  95.5K  136G   0%  1.00x  ONLINE  -
# zfs list tank
NAME  USED  AVAIL  REFER  MOUNTPOINT
tank   72K   134G    21K  /tank
RAID-Z storage pool – Created with three 136-GB disks, the zpool list command reports SIZE as 408 GB and the initial FREE value as 408 GB. This reporting is referred to as the inflated disk space value, which includes redundancy overhead, such as parity information. The initial AVAIL space reported by the zfs list command is 133 GB, due to the pool redundancy overhead. The following example creates a RAIDZ-2 pool.
# zpool create tank raidz2 c0t6d0 c0t7d0 c0t8d0
# zpool list tank
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank   408G   286K  408G   0%  1.00x  ONLINE  -
# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  73.2K   133G  20.9K  /tank
Making ZFS file systems available is similar to Oracle Solaris 10 releases in the following ways:
A ZFS file system is mounted automatically when it is created and then remounted automatically when the system is booted.
You do not have to modify the /etc/vfstab file to mount a ZFS file system, unless you create a legacy mount for a ZFS file system. Mounting a ZFS file system automatically is recommended over using a legacy mount. If you do use a legacy mount, see the sketch after this list.
You do not have to modify the /etc/dfs/dfstab file to share file systems. For more information about sharing ZFS file systems, see ZFS File System Sharing Changes.
Similar to a UFS root, the swap device must have an entry in the /etc/vfstab file.
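If you do use a legacy mount for a ZFS file system, a minimal sketch follows. The dataset name rpool/data and mount point /data are hypothetical, and the /etc/vfstab entry uses the standard fields (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, mount options):

# zfs set mountpoint=legacy rpool/data
# vi /etc/vfstab
rpool/data   -   /data   zfs   -   yes   -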
File systems can be shared between Oracle Solaris 10 and Oracle Solaris 11 systems by using NFS sharing.
File systems can be shared between Oracle Solaris 11 systems by using NFS or SMB sharing.
ZFS storage pools can be exported from an Oracle Solaris 10 system and then imported on an Oracle Solaris 11 system.
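For the pool migration case, a minimal sketch follows, assuming a pool named tank whose disks are accessible from both systems:

(on the Oracle Solaris 10 system)
# zpool export tank

(on the Oracle Solaris 11 system)
# zpool import tank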
In Oracle Solaris 10, you could set the sharenfs or sharesmb property to create and publish a ZFS file system share, or you could use the legacy share command.
In Oracle Solaris 11, you create a ZFS file system share and then publish the share as follows:
Create an NFS or SMB share of a ZFS file system by using the zfs set share command.
# zfs create rpool/fs1
# zfs set share=name=fs1,path=/rpool/fs1,prot=nfs rpool/fs1
name=fs1,path=/rpool/fs1,prot=nfs
Publish the NFS or SMB share by setting the sharenfs or sharesmb property to on.
# zfs set sharenfs=on rpool/fs1
# cat /etc/dfs/sharetab
/rpool/fs1      fs1     nfs     sec=sys,rw
The primary new sharing differences are as follows:
Sharing a file system is a two-step process: creating a share by using the zfs set share command, then publishing the share by setting the sharenfs or sharesmb property.
The zfs set share command replaces the sharemgr interface for sharing ZFS file systems.
The sharemgr interface is no longer available. The legacy share command and the sharenfs property are still available. See the examples below.
The /etc/dfs/dfstab file still exists but modifications are ignored. SMF manages ZFS or UFS share information so that file systems are shared automatically when the system is rebooted, similar to the way ZFS mount and share information is managed.
If you unpublish a share, you can republish it by using the share command or by using the share -a command to republish all shares (see the sketch after this list).
Descendent file systems do not inherit share properties. If a descendent file system is created with an inherited sharenfs property set to on, then a share is created for the new descendent file system.
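The following is one possible unpublish and republish sequence for the rpool/fs1 share created earlier; it is a sketch only, and the unshare and share commands are the legacy interfaces mentioned above:

# unshare /rpool/fs1
# share -a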
In Oracle Solaris 11.1, sharing ZFS file systems has improved with the following primary enhancements:
The share syntax is simplified. You can share a file system by setting the new share.nfs or share.smb property.
# zfs set share.nfs=on tank/home
Better inheritance of share properties to descendent file systems. In the preceding example, where the share.nfs property is set on the tank/home file system, the share.nfs property value is inherited by any descendent file systems.
# zfs create tank/home/userA
# zfs create tank/home/userB
You can also specify additional property values or modify existing property values on existing file system shares.
# zfs set share.nfs.nosuid=on tank/home/userA
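To review the resulting share.nfs values across the hierarchy, a quick check such as the following can be used; tank/home is the example file system from above, and the output varies:

# zfs get -r share.nfs tank/home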
These file sharing improvements are associated with pool version 34. For more information, see Sharing and Unsharing ZFS File Systems in Oracle Solaris 11.1 Administration: ZFS File Systems.
Legacy sharing syntax is still supported without having to modify the /etc/dfs/dfstab file. Legacy shares are managed by an SMF service.
Use the share command to share a file system.
For example, to share a ZFS file system:
# share -F nfs /tank/zfsfs
# cat /etc/dfs/sharetab
/tank/zfsfs     -       nfs     rw
The above syntax is identical to sharing a UFS file system:
# share -F nfs /ufsfs
# cat /etc/dfs/sharetab
/ufsfs  -       nfs     rw
/tank/zfsfs     -       nfs     rw
You can create a file system with the sharenfs property enabled, as in previous releases. The Oracle Solaris 11 behavior is that a default share is created for the file system.
# zfs create -o sharenfs=on rpool/data
# cat /etc/dfs/sharetab
/rpool/data     rpool_data      nfs     sec=sys,rw
The above file system shares are published immediately.
Review the share transition issues in this section.
Upgrading your system – Because of share property changes in this release, ZFS shares will be incorrect if you boot back to an older BE. Non-ZFS shares are unaffected. If you plan to boot back to an older BE, save a copy of the existing share configuration prior to the pkg update operation so that you can restore the share configuration on the ZFS datasets (a hypothetical sequence appears after this list).
In the older BE, use the sharemgr show -vp command to list all shares and their configuration.
Use the zfs get sharenfs filesystem command and the zfs get sharesmb filesystem command to get the values of the sharing properties.
If you boot back to an older BE, reset the sharenfs and sharesmb properties to their original values.
Legacy unsharing behavior – Using the unshare -a command or unshareall command unpublishes a share, but does not update the SMF shares repository. If you try to re-share the existing share, the shares repository is checked for conflicts, and an error is displayed.
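As noted for the upgrade case above, a hypothetical sequence for recording the share configuration before the pkg update and restoring it after booting back follows; the dataset name rpool/export/home and the output file are examples only:

(in the older BE, before the update)
# sharemgr show -vp > /var/tmp/shares.before
# zfs get sharenfs rpool/export/home
# zfs get sharesmb rpool/export/home

(after booting back to the older BE)
# zfs set sharenfs=on rpool/export/home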
In Oracle Solaris 11, you can use the deduplication (dedup) property to remove redundant data from your ZFS file systems. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored, and common components are shared between files. For example:
# zfs set dedup=on tank/home
Do not enable the dedup property on file systems that reside on production systems until you perform the following steps to determine if your system can support data deduplication.
Determine if your data would benefit from deduplication space savings. If your data is not dedup-able, there is no point in enabling dedup. Running the following command is very memory intensive:
# zdb -S tank
Simulated DDT histogram:

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    2.27M    239G    188G    194G    2.27M    239G    188G    194G
     2     327K   34.3G   27.8G   28.1G     698K   73.3G   59.2G   59.9G
     4    30.1K   2.91G   2.10G   2.11G     152K   14.9G   10.6G   10.6G
     8    7.73K    691M    529M    529M    74.5K   6.25G   4.79G   4.80G
    16      673   43.7M   25.8M   25.9M    13.1K    822M    492M    494M
    32      197   12.3M   7.02M   7.03M    7.66K    480M    269M    270M
    64       47   1.27M    626K    626K    3.86K    103M   51.2M   51.2M
   128       22    908K    250K    251K    3.71K    150M   40.3M   40.3M
   256        7    302K     48K   53.7K    2.27K   88.6M   17.3M   19.5M
   512        4    131K   7.50K   7.75K    2.74K    102M   5.62M   5.79M
    2K        1      2K      2K      2K    3.23K   6.47M   6.47M   6.47M
    8K        1    128K      5K      5K    13.9K   1.74G   69.5M   69.5M
 Total    2.63M    277G    218G    225G    3.22M    337G    263G    270G

dedup = 1.20, compress = 1.28, copies = 1.03, dedup * compress / copies = 1.50
If the estimated dedup ratio is greater than 2, then you might see dedup space savings.
In this example, the dedup ratio (dedup = 1.20) is less than 2, so enabling dedup is not recommended.
Make sure your system has enough memory to support dedup.
Each in-core dedup table entry is approximately 320 bytes.
Multiply the number of allocated blocks times 320. For example:
in-core DDT size = 2.63M x 320 = 841.60M
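Then compare the estimate with the physical memory on the system. One way to check the installed memory on Oracle Solaris is:

# prtconf | grep Memory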
Dedup performance is best when the deduplication table fits into memory. If the dedup table has to be written to disk, then performance will decrease. If you enable deduplication on your file systems without sufficient memory resources, system performance might degrade during file system related operations. For example, removing a large dedup-enabled file system without sufficient memory resources might impact system performance.
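If you do enable deduplication, you can watch the ratio that is actually achieved over time by checking the DEDUP column of zpool list output or the pool's dedupratio property; tank is the hypothetical pool from the earlier examples:

# zpool get dedupratio tank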