zfs_vdev_max_pending

Description

This parameter controls the maximum number of concurrent I/Os pending to each device.

Data Type

Integer

Default

10

Range

0 to MAXINT

Dynamic?

Yes

Validation

No

When to Change
In a storage array where LUNs are built from a large number of disk drives, the ZFS queue can become a limiting factor on read IOPS. This behavior is one of the underlying reasons for the best practice of presenting as many LUNs as there are backing spindles to the ZFS storage pool. That is, if you create LUNs from a 10-disk-wide array-level RAID group, then using 5 to 10 LUNs to build a storage pool allows ZFS to manage a deep enough I/O queue without the need to set this specific tunable.
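For example, a pool built from several LUNs carved out of the same RAID group might be created with a command of the following form. This is a sketch only; the pool name and device names are hypothetical placeholders.

    # zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0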
However, when no separate intent log is in use and the pool is made of JBOD disks, using a small zfs_vdev_max_pending value, such as 10, can improve synchronous write latency because those writes compete for the disk resource. Using separate intent log devices can alleviate the need to tune this parameter for workloads that are synchronous-write intensive, because those synchronous writes no longer compete with a deep queue of non-synchronous writes.
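For example, a separate intent log device could be added to an existing pool as follows. The pool name and device name are hypothetical placeholders.

    # zpool add tank log c2t0d0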
Tuning this parameter is not expected to be effective for NVRAM-based storage arrays when volumes are made of a small number of spindles. However, when ZFS is presented with a volume made of a large number (greater than 10) of spindles, this parameter can limit the read throughput obtained on the volume. The reason is that with a maximum of 10 or 35 queued I/Os per LUN, the queue can translate into less than 1 I/O per storage spindle, which is not enough for individual disks to deliver their IOPS. This issue would appear in iostat output as an actv queue approaching the value of zfs_vdev_max_pending.
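For example, you might sample per-device statistics with iostat and watch the actv column. The output below is hypothetical and illustrates an active queue pinned at the default zfs_vdev_max_pending value of 10 while the device is 100% busy:

    # iostat -xnz 5
                        extended device statistics
        r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
      850.0    0.0 6800.0    0.0  0.0 10.0    0.0   11.8   0 100 c1t0d0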
Device drivers can also limit the number of outstanding I/Os per LUN. If you are using LUNs on storage arrays that can handle large numbers of concurrent IOPS, then the device driver constraints can limit concurrency. Consult the configuration for the drivers your system uses. For example, the limit for the QLogic ISP2200, ISP2300, and SP212 family Fibre Channel HBA (qlc) driver is described as the execution-throttle parameter in /kernel/drv/qlc.conf.
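For example, an entry of the following form in /kernel/drv/qlc.conf raises the driver's execution throttle. The value 64 is illustrative only; consult your HBA and storage array documentation for supported values.

    execution-throttle=64;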
Commitment Level

Unstable
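The following is a minimal sketch of how this parameter could be adjusted, using standard Oracle Solaris mechanisms; the value 20 is illustrative only. To set the value persistently, add a line to /etc/system and reboot:

    * Limit concurrent I/Os pending to each ZFS device (illustrative value).
    set zfs:zfs_vdev_max_pending = 20

Because the parameter is dynamic, it can also be changed on a running system with mdb:

    # echo zfs_vdev_max_pending/W0t20 | mdb -kw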