This section describes the important features that are included in the NFS service.
Version 2 was the first version of the NFS protocol in wide use. Version 2 continues to be available on a large variety of platforms. All Oracle Solaris releases support version 2 of the NFS protocol.
Unlike the NFS version 2 protocol, the NFS version 3 protocol can handle files that are larger than 2 Gbytes. The previous limitation has been removed. See NFS Large File Support.
The NFS version 3 protocol enables safe asynchronous writes on the server, which improve performance by allowing the server to cache client write requests in memory. The client does not need to wait for the server to commit the changes to disk, so the response time is faster. Also, the server can batch the requests, which improves the response time on the server.
Many Solaris NFS version 3 operations return the file attributes, which are stored in the local cache. Because the cache is updated more often, the need to do a separate operation to update this data arises less often. Therefore, the number of RPC calls to the server is reduced, improving performance.
The process for verifying file access permissions has been improved. Version 2 generated a “write error” message or a “read error” message if users tried to copy a remote file without the appropriate permissions. In version 3, the permissions are checked before the file is opened, so the error is reported as an “open error.”
The NFS version 3 protocol removed the 8-Kbyte transfer size limit. Clients and servers can negotiate whatever transfer size they support, rather than conforming to the 8-Kbyte limit that version 2 imposed. Note that in earlier Solaris implementations, the protocol defaulted to a 32-Kbyte transfer size. Starting in the Solaris 10 release, restrictions on wire transfer sizes are relaxed. The transfer size is based on the capabilities of the underlying transport.
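For example, if the negotiated defaults are not appropriate for a particular network, you can still request specific transfer sizes with the rsize and wsize mount options. The server name bee and the paths in this sketch are placeholders.

    # mount -o vers=3,rsize=32768,wsize=32768 bee:/export/share /mnt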
NFS version 4 has features that are not available in the previous versions.
The NFS version 4 protocol represents the user ID and the group ID as strings. nfsmapid is used by the client and the server to do the following:
To map these version 4 ID strings to local numeric IDs
To map the local numeric IDs to version 4 ID strings
For more information, refer to nfsmapid Daemon.
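For the mapping to work, the client and the server must use the same NFSv4 domain. On an Oracle Solaris 11.1 system you would typically check or set this domain through the SMF-backed NFS configuration, as sketched here with the placeholder domain example.com:

    # sharectl get -p nfsmapid_domain nfs
    # sharectl set -p nfsmapid_domain=example.com nfs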
Note that in NFS version 4, the ID mapper, nfsmapid, is used to map user or group IDs in ACL entries on a server to user or group IDs in ACL entries on a client. The reverse is also true. For more information, see ACLs and nfsmapid in NFS Version 4.
With NFS version 4, when you unshare a file system, all the state for any open files or file locks in that file system is destroyed. In NFS version 3 the server maintained any locks that the clients had obtained before the file system was unshared. For more information, refer to Unsharing and Resharing a File System in NFS Version 4.
NFS version 4 servers use a pseudo file system to provide clients with access to exported objects on the server. Prior to NFS version 4 a pseudo file system did not exist. For more information, refer to File-System Namespace in NFS Version 4.
In NFS version 2 and version 3 the server returned persistent file handles. NFS version 4 supports volatile file handles. For more information, refer to Volatile File Handles in NFS Version 4.
Delegation, a technique by which the server delegates the management of a file to a client, is supported on both the client and the server. For example, the server could grant either a read delegation or a write delegation to a client. For more information, refer to Delegation in NFS Version 4.
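For example, if you need to check whether the server grants delegations, or to turn delegation off, the server_delegation property of the SMF-backed NFS configuration controls this behavior. The commands below are a sketch of that procedure:

    # sharectl get -p server_delegation nfs
    # sharectl set -p server_delegation=off nfs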
NFS version 4 does not support the LIPKEY/SPKM security flavor.
Also, NFS version 4 does not use the following daemons:
lockd
nfslogd
statd
For a complete list of the features in NFS version 4, refer to Features in NFS Version 4.
For procedural information that is related to using NFS version 4, refer to Setting Up NFS Services.
The SMF repository includes parameters to control the NFS protocols that are used by both the client and the server. For example, you can use parameters to manage version negotiation. For more information, refer to mountd Daemon for the client parameters, nfsd Daemon for the server parameters, or the nfs(4) man page.
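For example, you might limit a server to NFS version 3 or require a client to negotiate at least version 4 by setting the version properties with sharectl. The values shown here are illustrative:

    # sharectl get nfs
    # sharectl set -p server_versmax=3 nfs
    # sharectl set -p client_versmin=4 nfs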
Access control list (ACL) support was added in the Solaris 2.5 release. An ACL provides a finer-grained mechanism to set file access permissions than is available through standard UNIX file permissions. NFS ACL support provides a method of changing and viewing ACL entries from an Oracle Solaris NFS client to an Oracle Solaris NFS server.
The NFS version 2 and version 3 implementations support the old POSIX-draft style ACLs. POSIX-draft ACLs are natively supported by UFS. See Using Access Control Lists to Protect UFS Files in Oracle Solaris 11.1 Administration: Security Services for more information about UFS ACLs.
The NFS Version 4 protocol supports the new NFSv4 style ACLs. NFSv4 ACLs are natively supported by ZFS. For full-featured NFSv4 ACL functionality, ZFS must be used as the underlying file system on the NFSv4 server. The NFSv4 ACLs have a rich set of inheritance properties, as well as a set of permission bits beyond the standard read, write, and execute. See Chapter 7, Using ACLs and Attributes to Protect Oracle Solaris ZFS Files, in Oracle Solaris 11.1 Administration: ZFS File Systems for an overview of the new ACLs. For more information about support for ACLs in NFS version 4, see ACLs and nfsmapid in NFS Version 4.
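For example, on a ZFS file system you can view NFSv4-style ACL entries with ls -V and modify them with chmod. The user name otto and the file name file.1 in this sketch are placeholders:

    # ls -V file.1
    # chmod A+user:otto:read_data/write_data:allow file.1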
The default transport protocol for the NFS protocol was changed to the Transmission Control Protocol (TCP) in the Solaris 2.5 release. TCP helps performance on slow networks and wide area networks. TCP also provides congestion control and error recovery. NFS over TCP works with version 2, version 3, and version 4. Prior to the Solaris 2.5 release, the default NFS protocol was User Datagram Protocol (UDP).
Note - If InfiniBand hardware is available on the system, the default transport changes from TCP to the Remote Direct Memory Access (RDMA) protocol. For more information, see NFS Over RDMA. Note, however, that if you use the proto=tcp mount option, NFS mounts are forced to use TCP only.
Starting in the Solaris 10 release, the NFS client no longer uses an excessive number of UDP ports. Previously, NFS transfers over UDP used a separate UDP port for each outstanding request. Now, by default, the NFS client uses only one UDP reserved port. However, this support is configurable. If the use of more simultaneous ports would increase system performance through increased scalability, then the system can be configured to use more ports. This capability also mirrors the NFS over TCP support, which has had this kind of configurability since its inception. For more information, refer to the Oracle Solaris 11.1 Tunable Parameters Reference Manual.
Note - NFS version 4 does not use UDP. If you mount a file system with the proto=udp option, then NFS version 3 is used instead of version 4.
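For example, you can force a particular transport with the proto mount option and then confirm the negotiated version and transport with nfsstat -m. The server name bee and the mount points are placeholders; because version 4 does not run over UDP, the second mount falls back to version 3:

    # mount -o proto=tcp bee:/export/share /mnt
    # mount -o proto=udp bee:/export/share /mnt2
    # nfsstat -m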
If InfiniBand hardware is available on the system, the default transport changes from TCP to the Remote Direct Memory Access (RDMA) protocol. The RDMA protocol is a technology for memory-to-memory transfer of data over high-speed networks. Specifically, RDMA provides remote data transfer directly to and from memory without CPU intervention. To provide this capability, RDMA combines the interconnect I/O technology of InfiniBand with the Oracle Solaris Operating System. For more information, refer to NFS Over RDMA.
The network lock manager provides UNIX record locking for any files being shared over NFS. The locking mechanism allows clients to synchronize their I/O requests with each other, ensuring data integrity.
Note - The Network Lock Manager is used only for NFS version 2 and version 3 mounts. File locking is built into the NFS version 4 protocol.
The Solaris 2.6 implementation of the NFS version 3 protocol was changed to correctly manipulate files that were larger than 2 Gbytes. The NFS version 2 protocol could not handle files that were larger than 2 Gbytes.
Dynamic failover of read-only file systems was added in the Solaris 2.6 release. Failover provides a high level of availability for read-only resources that are already replicated, such as man pages, other documentation, and shared binaries. Failover can occur anytime after the file system is mounted. Manual mounts can now list multiple replicas, much like the automounter in previous releases. The automounter has not changed, except that failover need not wait until the file system is remounted. See How to Use Client-Side Failover and Client-Side Failover for more information.
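For example, a manual mount of a replicated read-only resource can list several server:path pairs separated by commas. If the current server becomes unavailable, the client fails over to the next replica in the list. The server names bee and wasp are placeholders:

    # mount -o ro bee:/export/man,wasp:/export/man /usr/share/man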
The NFS service supports Kerberos V5 clients. The mount and share commands have been altered to support NFS version 3 mounts that use Kerberos V5 authentication. Also, the share command was changed to enable multiple authentication flavors for different clients. See RPCSEC_GSS Security Flavor for more information about changes that involve security flavors. See Configuring Kerberos NFS Servers in Oracle Solaris 11.1 Administration: Security Services for information about Kerberos V5 authentication.
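For example, a server might share a file system with Kerberos V5 authentication, and a client might request that flavor explicitly at mount time. The path and server name in this sketch are placeholders:

    # share -F nfs -o sec=krb5,rw /export/home
    # mount -o sec=krb5 bee:/export/home /mnt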
The Solaris 2.6 release also included the ability to make a file system on the Internet accessible through firewalls. This capability was provided by using an extension to the NFS protocol. One of the advantages of using the WebNFS protocol for Internet access is its reliability. The service is built as an extension of the NFS version 3 and version 2 protocols. Additionally, the WebNFS implementation provides the ability to share these files without the administrative overhead of an anonymous FTP site. See Security Negotiation for the WebNFS Service for a description of more changes that are related to the WebNFS service. See WebNFS Administration Tasks for more task information.
Note - The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.
A security flavor, called RPCSEC_GSS, is supported in the Solaris 7 release. This flavor uses the standard GSS-API interfaces to provide authentication, integrity, and privacy, as well as enabling support of multiple security mechanisms. See Kerberos Support for the NFS Service for more information about support of Kerberos V5 authentication. See Developer’s Guide to Oracle Solaris 11 Security for more information about GSS-API.
The Solaris 7 release includes extensions to the mount command and automountd command. The extensions enable the mount request to use the public file handle instead of the MOUNT protocol. The public file handle is the same access method that the WebNFS service uses. By circumventing the MOUNT protocol, the mount can occur through a firewall. Additionally, because fewer transactions need to occur between the server and the client, the mount should occur faster.
The extensions also enable NFS URLs to be used instead of the standard path name. Also, you can use the public option with the mount command and the automounter maps to force the use of the public file handle. See WebNFS Support for more information about changes to the WebNFS service.
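For example, a mount request can combine an NFS URL with the public option so that the public file handle is used instead of the MOUNT protocol. The server name bee and the path are placeholders:

    # mount -F nfs -o public nfs://bee/export/share /mnt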
In the Solaris 8 release, a new protocol was added to enable a WebNFS client to negotiate a security mechanism with an NFS server. This protocol provides the ability to use secure transactions when using the WebNFS service. See How WebNFS Security Negotiation Works for more information.
In the Solaris 8 release, NFS server logging enables an NFS server to provide a record of file operations that have been performed on its file systems. The record includes information about which file was accessed, when the file was accessed, and who accessed the file. You can specify the location of the logs that contain this information through a set of configuration options. You can also use these options to select the operations that should be logged. This feature is particularly useful for sites that make anonymous FTP archives available to NFS and WebNFS clients. See How to Enable NFS Server Logging for more information.
Note - NFS version 4 does not support server logging.
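For example, for a version 2 or version 3 share, logging can be enabled by associating the share with a tag that is defined in /etc/nfs/nfslog.conf. The global tag used here is the default tag, and /export/ftp is a placeholder path:

    # share -F nfs -o ro,log=global /export/ftp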
Autofs works with file systems that are specified in the local namespace. This information can be maintained in NIS or local files.
A fully multithreaded version of automountd is included. This enhancement makes autofs more reliable and enables concurrent servicing of multiple mounts, which prevents the service from hanging if a server is unavailable.
The automountd daemon provides better on-demand mounting. Previous releases would mount an entire set of file systems if the file systems were hierarchically related. Now, only the top file system is mounted. Other file systems that are related to this mount point are mounted when needed.
The autofs service supports browsability of indirect maps. This support enables a user to see which directories could be mounted, without having to actually mount each file system. A -nobrowse option has been added to the autofs maps so that large file systems, such as /net and /home, are not automatically browsable. Also, you can turn off autofs browsability on each client by using the -n option with automount. See Disabling Autofs Browsability for more information.
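For example, the default auto_master map applies the nobrowse option to the /home and /net entries so that their potential mount points are not listed until the file systems are actually mounted. The entries shown here follow the default map format:

    /home    auto_home    -nobrowse
    /net     -hosts       -nosuid,nobrowse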