Using ZFS on a Solaris System With Zones Installed
The following sections describe how to use ZFS on a system with Oracle Solaris zones:

Adding ZFS File Systems to a Non-Global Zone
Delegating Datasets to a Non-Global Zone
Adding ZFS Volumes to a Non-Global Zone
Using ZFS Storage Pools Within a Zone
Managing ZFS Properties Within a Zone
Understanding the zoned Property
Copying Zones to Other Systems
Keep the following points in mind when associating ZFS datasets with zones:
You can add a ZFS file system or a clone to a non-global zone with or without delegating administrative control.
You can add a ZFS volume as a device to non-global zones.
You cannot associate ZFS snapshots with zones at this time.
In the following sections, a ZFS dataset refers to a file system or a clone.
Adding a dataset allows the non-global zone to share disk space with the global zone, though the zone administrator cannot control properties or create new file systems in the underlying file system hierarchy. This operation is identical to adding any other type of file system to a zone and should be used when the primary purpose is solely to share common disk space.
ZFS also allows datasets to be delegated to a non-global zone, giving complete control over the dataset and all its children to the zone administrator. The zone administrator can create and destroy file systems or clones within that dataset, as well as modify properties of the datasets. The zone administrator cannot affect datasets that have not been added to the zone, and cannot exceed any top-level quotas set on the delegated dataset.
Consider the following when working with ZFS on a system with Oracle Solaris zones installed:
A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy.
When both a source zonepath and a target zonepath reside on ZFS and are in the same pool, the zoneadm clone command automatically uses ZFS clone to clone a zone. The zoneadm clone command creates a ZFS snapshot of the source zonepath and sets up the target zonepath. You cannot use the zfs clone command to clone a zone. For more information, see Part II, Oracle Solaris Zones, in Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
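For example, assuming a second zone has already been configured with a zonepath in the same pool as the zion zone's, a global zone administrator could clone it as follows (zion2 is a hypothetical zone name used for illustration):

global# zoneadm -z zion2 clone zion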
Adding ZFS File Systems to a Non-Global Zone

You can add a ZFS file system as a generic file system when the goal is solely to share space with the global zone. A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy. For example, if the tank/zone/zion file system will be added to a non-global zone, set the mountpoint property in the global zone as follows:
# zfs set mountpoint=legacy tank/zone/zion
You can add a ZFS file system to a non-global zone by using the zonecfg command's add fs subcommand.
In the following example, a ZFS file system is added to a non-global zone by a global zone administrator from the global zone:
# zonecfg -z zion
zonecfg:zion> add fs
zonecfg:zion:fs> set type=zfs
zonecfg:zion:fs> set special=tank/zone/zion
zonecfg:zion:fs> set dir=/opt/data
zonecfg:zion:fs> end
This syntax adds the ZFS file system tank/zone/zion to the already configured zion zone, mounted at /opt/data within the zone. The mountpoint property of the file system must be set to legacy, and the file system cannot already be mounted in another location. The zone administrator can create and destroy files within the file system. The file system cannot be remounted in a different location, nor can the zone administrator change properties on the file system, such as atime, readonly, compression, and so on. The global zone administrator is responsible for setting and controlling properties of the file system.
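For example, the global zone administrator might enable compression on the added file system from the global zone (compression is an illustrative property choice; any such change must be made in the global zone):

global# zfs set compression=on tank/zone/zion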
For more information about the zonecfg command and about configuring resource types with zonecfg, see Part II, Oracle Solaris Zones, in Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
Delegating Datasets to a Non-Global Zone

To meet the primary goal of delegating the administration of storage to a zone, ZFS supports adding datasets to a non-global zone through the use of the zonecfg command's add dataset subcommand.
In the following example, a ZFS file system is delegated to a non-global zone by a global zone administrator from the global zone:
# zonecfg -z zion
zonecfg:zion> add dataset
zonecfg:zion:dataset> set name=tank/zone/zion
zonecfg:zion:dataset> set alias=tank
zonecfg:zion:dataset> end
Unlike adding a file system, this syntax causes the ZFS file system tank/zone/zion to be visible within the already configured zion zone. Within the zion zone, this file system is not accessible as tank/zone/zion, but as a virtual pool named tank. The delegated file system alias provides a view of the original pool to the zone as a virtual pool. The alias property specifies the name of the virtual pool. If no alias is specified, a default alias matching the last component of the file system name is used; in the above example, the default alias would have been zion.
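For example, after the zone boots, the zone administrator might confirm that the virtual pool is visible with the following commands (illustrative; output not shown):

zion# zpool list
zion# zfs list -r tank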
Within delegated datasets, the zone administrator can set file system properties, as well as create descendent file systems. In addition, the zone administrator can create snapshots and clones, and otherwise control the entire file system hierarchy. If ZFS volumes are created within delegated file systems, it is possible for them to conflict with ZFS volumes that are added as device resources. For more information, see the next section and dev(7FS).
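For example, a zone administrator might manage the delegated hierarchy as follows (tank/apps and the snapshot name are hypothetical):

zion# zfs create tank/apps
zion# zfs set compression=on tank/apps
zion# zfs snapshot tank/apps@backup
zion# zfs clone tank/apps@backup tank/apps-clone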
Adding ZFS Volumes to a Non-Global Zone

You can add or create a ZFS volume in a non-global zone, or add access to a volume's data in a non-global zone, in the following ways:
In a non-global zone, a privileged zone administrator can create a ZFS volume as a descendent of a previously delegated file system. For example:
# zfs create -V 2g tank/zone/zion/vol1
The above syntax means that the zone administrator can manage the volume's properties and data in the non-global zone.
In a global zone, use the zonecfg add dataset subcommand and specify a ZFS volume to be added to a non-global zone. For example:
# zonecfg -z zion
zonecfg:zion> add dataset
zonecfg:zion:dataset> set name=tank/volumes/vol1
zonecfg:zion:dataset> end
The above syntax means that the zone administrator can manage the volume's properties and data in the non-global zone.
In a global zone, use the zonecfg add device subcommand and specify a ZFS volume whose data can be accessed in a non-global zone. For example:
# zonecfg -z zion
zonecfg:zion> add device
zonecfg:zion:device> set match=/dev/zvol/dsk/tank/volumes/vol2
zonecfg:zion:device> end
The above syntax means that only the volume data can be accessed in the non-global zone.
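For example, the zone administrator might place a legacy UFS file system on such a volume from within the zone (a sketch using the vol2 device added above; the device paths are assumed to be visible in the zone):

zion# newfs /dev/zvol/rdsk/tank/volumes/vol2
zion# mount /dev/zvol/dsk/tank/volumes/vol2 /mnt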
Using ZFS Storage Pools Within a Zone

ZFS storage pools cannot be created or modified within a zone. The delegated administration model centralizes control of physical storage devices within the global zone and delegates control of virtual storage to non-global zones. Although a pool-level dataset can be added to a zone, any command that modifies the physical characteristics of the pool, such as creating, adding, or removing devices, is not allowed from within a zone. Even if physical devices are added to a zone by using the zonecfg command's add device subcommand, or if files are used, the zpool command does not allow the creation of any new pools within the zone.
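For example, an attempt to create a new pool from within the zone is rejected, even over a device that was added with the add device subcommand (c9t1d0 is a hypothetical device name; the exact error text can vary):

zion# zpool create vpool c9t1d0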
Managing ZFS Properties Within a Zone

After a dataset is delegated to a zone, the zone administrator can control specific dataset properties. All ancestors of a delegated dataset are visible as read-only datasets, while the dataset itself is writable, as are all of its descendents. For example, consider the following configuration:
global# zfs list -Ho name
tank
tank/home
tank/data
tank/data/matrix
tank/data/zion
tank/data/zion/home
If tank/data/zion were added to a zone with the default zion alias, each dataset would have the following properties.

Dataset                 Visible    Writable    Immutable Properties
tank                    No         -           -
tank/home               No         -           -
tank/data               No         -           -
tank/data/matrix        No         -           -
tank/data/zion          Yes        Yes         share.nfs, zoned, quota, reservation
tank/data/zion/home     Yes        Yes         share.nfs, zoned
Note that every parent of tank/data/zion is invisible and all descendents are writable. The zone administrator cannot change the zoned property because doing so would expose a security risk, as described in the next section.
Privileged users in the zone can change any other settable property, except for quota and reservation properties. This behavior allows the global zone administrator to control the disk space consumption of all datasets used by the non-global zone.
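For example, the global zone administrator might cap the disk space consumed by the delegated dataset (the 10-GB value is illustrative):

global# zfs set quota=10g tank/data/zion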
In addition, the share.nfs and mountpoint properties cannot be changed by the global zone administrator after a dataset has been delegated to a non-global zone.
Understanding the zoned Property

When a dataset is delegated to a non-global zone, the dataset must be specially marked so that certain properties are not interpreted within the context of the global zone. After a dataset has been delegated to a non-global zone and is under the control of a zone administrator, its contents can no longer be trusted. As with any file system, there might be setuid binaries, symbolic links, or otherwise questionable contents that might adversely affect the security of the global zone. In addition, the mountpoint property cannot be interpreted in the context of the global zone. Otherwise, the zone administrator could affect the global zone's namespace. To address the latter, ZFS uses the zoned property to indicate that a dataset has been delegated to a non-global zone at one point in time.
The zoned property is a boolean value that is automatically turned on when a zone containing a ZFS dataset is first booted. A zone administrator does not need to manually turn on this property. If the zoned property is set, the dataset cannot be mounted or shared in the global zone. In the following example, tank/zone/zion has been delegated to a zone, while tank/zone/global has not:
# zfs list -o name,zoned,mountpoint -r tank/zone
NAME              ZONED  MOUNTPOINT
tank/zone/global  off    /tank/zone/global
tank/zone/zion    on     /tank/zone/zion
# zfs mount
tank/zone/global  /tank/zone/global
tank/zone/zion    /export/zone/zion/root/tank/zone/zion
Note the difference between the mountpoint property and the directory where the tank/zone/zion dataset is currently mounted. The mountpoint property reflects the property as it is stored on disk, not where the dataset is currently mounted on the system.
When a dataset is removed from a zone or a zone is destroyed, the zoned property is not automatically cleared. This behavior is due to the inherent security risks associated with these tasks. Because an untrusted user has had complete access to the dataset and its descendents, the mountpoint property might be set to bad values, or setuid binaries might exist on the file systems.
To prevent accidental security risks, the zoned property must be manually cleared by the global zone administrator if you want to reuse the dataset in any way. Before setting the zoned property to off, ensure that the mountpoint property for the dataset and all its descendents is set to reasonable values and that no setuid binaries exist, or turn off the setuid property.
After you have verified that no security vulnerabilities are left, the zoned property can be turned off by using the zfs set or zfs inherit command. If the zoned property is turned off while a dataset is in use within a zone, the system might behave in unpredictable ways. Only change the property if you are sure the dataset is no longer in use by a non-global zone.
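For example, for the tank/zone/zion dataset shown earlier, either of the following commands clears the property (run only after the verification described above):

global# zfs set zoned=off tank/zone/zion
global# zfs inherit zoned tank/zone/zion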
Copying Zones to Other Systems

When you need to migrate one or more zones to another system, consider using the zfs send and zfs receive commands. Depending on the scenario, it might be best to use a replication stream or a recursive stream.
The examples in this section describe how to copy zone data between systems. Additional steps are required to transfer each zone's configuration and attach each zone to the new system. For more information, see Part II, Oracle Solaris Zones, in Oracle Solaris 11.1 Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource Management.
If all zones on one system need to move to another system, consider using a replication stream because it preserves snapshots and clones. Snapshots and clones are used extensively by the pkg update, beadm create, and zoneadm clone commands.
In the following example, sysA's zones are installed in the rpool/zones file system and need to be copied to the tank/zones file system on sysB. The following commands create a snapshot and copy the data to sysB by using a replication stream:
sysA# zfs snapshot -r rpool/zones@send-to-sysB
sysA# zfs send -R rpool/zones@send-to-sysB | ssh sysB zfs receive -d tank
In the following example, one of several zones is copied from sysC to sysD. Assume that the ssh command is not available, but an NFS server instance is. The following commands generate a recursive zfs send stream without worrying about whether the zone is a clone of another zone:
sysC# zfs snapshot -r rpool/zones/zone1@send-to-nfs
sysC# zfs send -rc rpool/zones/zone1@send-to-nfs > /net/nfssrv/export/scratch/zone1.zfs
sysD# zfs create tank/zones
sysD# zfs receive -d tank/zones < /net/nfssrv/export/scratch/zone1.zfs