The DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure provides an easy way to measure storage performance, including maximum bandwidth, random IOPS, and latency. The procedure generates I/O through the database stack against actual database files. The test is read-only, so it is safe to run on any existing database. It is also a good tool for directly comparing the performance of two storage systems, because the CALIBRATE_IO results do not depend on non-storage factors such as memory size or the number of CPU cores.
To measure storage performance with CALIBRATE_IO:
- Create or load a database on the corresponding ASM disk group.
- Make sure the total size of the database files is larger than 5 GB per disk. If needed, create an additional large tablespace and datafile.
- Customize the first parameter in the SQL code below with the number of disks in your storage setup. Keep the second parameter (maximum latency) at its minimum allowed value of 10 milliseconds.
- Connect to the database with sqlplus and run the customized SQL code.
- Wait for CALIBRATE_IO to complete. This may take 10 to 30 minutes.
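To verify the datafile-size prerequisite above, you can check the current total and, if needed, add a large tablespace. The query below uses the standard DBA_DATA_FILES view; the tablespace name, disk group, and size in the CREATE statement are illustrative assumptions to be adjusted for your setup:

```sql
-- Check total size of all datafiles, in GB (run as a DBA user)
SELECT SUM(bytes)/1024/1024/1024 AS total_gb
FROM   dba_data_files;

-- Hypothetical example: add a large bigfile tablespace on ASM disk group +DATA
-- (tablespace name, disk group, and size are assumptions, not requirements)
CREATE BIGFILE TABLESPACE calibrate_ts
  DATAFILE '+DATA' SIZE 200G;
```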
Example of running CALIBRATE_IO on a 2-node cluster with Amazon EC2 i3.16xlarge instances and eight NVMe SSDs per node:
SET SERVEROUTPUT ON;

DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO (16, 10, iops, mbps, lat);
  DBMS_OUTPUT.PUT_LINE ('Max_IOPS = ' || iops);
  DBMS_OUTPUT.PUT_LINE ('Latency  = ' || lat);
  DBMS_OUTPUT.PUT_LINE ('Max_MB/s = ' || mbps);
END;
/

Max_IOPS = 1375694
Latency  = 0
Max_MB/s = 27338

PL/SQL procedure successfully completed.