EMC Unity Storage Performance testing with Oracle ASM and SLOB

I’ve been testing the new EMC Unity 600F all-flash storage array with an Oracle database to determine the impact of storage compression on Oracle I/O performance.  To do this I have been using Kevin Closson’s SLOB tool.

SLOB is an excellent tool for testing I/O performance as seen by Oracle.  Unlike Swingbench, which mimics a real application and therefore spends much of its execution cycle on server CPU, SLOB concentrates exclusively on generating I/O from the Oracle stack.

Conversely, don’t expect SLOB to generate meaningful data to test ranking or sorting operations inside of Oracle.  SLOB generates entirely synthetic data that is meaningless from an application standpoint.

This post covers using SLOB to test I/O performance, and what I learned from testing against the Unity 600F all-flash array.

SLOB installs against an Oracle database.  It does not install against a pluggable database.

Create a SLOB data set large enough to yield a meaningful test.  You need enough data to exceed not only the SGA’s buffer cache, but the array’s cache too.  There is little point in a data set that fits in cache; all you prove at that point is that intra-memory transfers are really fast.

Before launching the SLOB install process, there are some preparation steps that should be completed:

  • Turn off archivelog mode in the database.  There is no point in generating gigabytes of archivelogs while setting up your test.
  • Create the SLOBDATA tablespace.  I recommend using a BIGFILE tablespace allowing up to 32TB of data.
  • Switch the database to use a BIGFILE temporary tablespace.  SLOB does not generate temporary data on the scale Swingbench does, but some is still generated.
  • Resize the redo log files to at least 1GB.  The default 50MB redo logs will cause excessive log file switches, which will slow the data generation.
  • Set the parameter filesystemio_options to SETALL to facilitate parallel queuing of redo writes.
  • Set the DB_WRITER_PROCESSES parameter to at least the CPU count divided by 4 to maximize write performance during the load.  If your server can handle a greater write load then increase the value of DB_WRITER_PROCESSES.
  • Set the LOAD_PARALLEL_DEGREE in the slob.conf file to twice the number of CPUs as seen by Oracle.  This will allow SLOB to generate data in parallel.
  • Create a public database directory object pointing at /tmp.
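Assuming a SQL*Plus session as SYSDBA, the preparation steps above translate to something like the following sketch.  The db_writer_processes value and the +DATA redo location are assumptions for this environment; adjust them for your server:

```sql
-- Values here are illustrative; adjust for your server.
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;  -- e.g. 16 CPUs / 4

-- Archivelog mode is toggled with the database mounted:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- Add 1GB redo groups, then drop the 50MB defaults once they go INACTIVE:
ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+DATA') SIZE 1G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('+DATA') SIZE 1G;
-- ALTER SYSTEM SWITCH LOGFILE;   (repeat until groups 1-3 are INACTIVE)
-- ALTER DATABASE DROP LOGFILE GROUP 1;   (then groups 2 and 3)
```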

Download and unzip the SLOB code.  Once unzipped, change to the wait_kit directory and issue the make command:

[oracle@oel6burn wait_kit]$ make
rm -fr *.o mywait trigger create_sem
cc -c -o mywait.o mywait.c
cc -o mywait mywait.o
cc -c -o trigger.o trigger.c
cc -o trigger trigger.o
cc -c -o create_sem.o create_sem.c
cc -o create_sem create_sem.o
cp mywait trigger create_sem ../
rm -fr *.o
[oracle@oel6burn wait_kit]$

Connect to the database and create the SLOBDATA tablespace.  If you size this to accommodate the entire SLOB data set it will save time auto-extending the datafile during the load.

SQL> create bigfile tablespace slobdata datafile '+DATA' size 128G autoextend on;

Tablespace created.

Switch the database temporary tablespace to use a BIGFILE tempfile.

SQL> create bigfile temporary tablespace bigtemp;

Tablespace created.

SQL> alter database default temporary tablespace bigtemp;

Database altered.

SQL> drop tablespace temp;

Tablespace dropped.

SQL> create bigfile temporary tablespace temp;

Tablespace created.

SQL> alter database default temporary tablespace temp;

Database altered.

SQL> drop tablespace bigtemp;

Set up a public directory pointing at the local /tmp directory:

SQL> create directory mydir as '/tmp';

Directory created.

SQL> grant read,write on directory mydir to public;

Grant succeeded.

Once you have prepared the database, launch the setup script.  In this example we are creating 128 SLOB schemas, each with 8GB of data.  The size of each schema is controlled by the SCALE parameter in the slob.conf file.  128 schemas of 8GB each will yield 1TB of test data, enough to exceed the size of the SGA and the cache of the Unity array.

Unlike Swingbench, SLOB does not generate significant amounts of additional index data.

[oracle@oel6burn SLOB]$ ./setup.sh SLOBDATA 128
NOTIFY : 2017.03.04-20:31:11 :
NOTIFY : 2017.03.04-20:31:11 : Begin SLOB setup.
NOTIFY : 2017.03.04-20:31:11 : Load parameters from slob.conf:

SCALE: 8G (1048576 blocks)

Connect strings to be used:

NOTIFY : 2017.03.04-20:31:11 : Testing connectivity to the instance to validate slob.conf settings
NOTIFY : 2017.03.04-20:31:11 : Testing Admin connect using "/ as sysdba"
NOTIFY : 2017.03.04-20:31:11 : Successful test connection: "sqlplus -L / as sysdba"

output removed to aid clarity

NOTIFY  : 2017.03.04-20:40:17 : NOTE: No errors detected but if ./slob_data_load_summary.txt shows errors then
NOTIFY  : 2017.03.04-20:40:17 : examine /home/oracle/SLOB/cr_tab_and_load.out

NOTIFY  : 2017.03.04-20:40:17 : SLOB setup complete. Total setup time:  (546 seconds)

If we get a clean build, we are ready to test the I/O to the array.

If you are comparing two or more competing storage solutions, make sure you design tests that are consistent across the solutions.  Use the same slob.conf files and the same servers.  Vendors often propose a hyper-converged solution, where your workload moves to state-of-the-art servers significantly faster than your existing production machines; if the servers are not consistent across tests, you have not tested the storage layer appropriately.

In my test I want to understand the impact of using Unity compression on my Oracle workload, both on performance, and on the capacity consumed by my Oracle data.

The Unity 600F all-flash array offers the ability to compress data.  This feature is only available on LUNs and Datastores located on all-flash tiers.  Compression is not available for data that resides on, or may be moved by automatic tiering to, spinning disk.

Compression can be enabled on existing LUNs or Datastores, but only newly written or updated data will be compressed.  Similarly, turning compression off on a LUN or Datastore only affects data written or updated after the change.

My entire database resides on an ASM diskgroup comprising four 300GB ASM disks on all-flash, non-compressed LUNs.  This establishes the baseline.

Since I expect any impact of compression to become more pronounced with a heavier write load, I will test with a 100% read (0% update), 70/30 read/write (30% update) and a 50/50 read/write (50% update) workload.  Each test will run for 300 seconds.  Each test will be repeated 3 times to try to eliminate any anomalies that might occur during any test.

The relevant part of my slob.conf looks as follows:
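Reconstructed from the values described in this post (the LOAD_PARALLEL_DEGREE value here is an assumption):

```
UPDATE_PCT=0             # varied per test: 0, 30 or 50
RUN_TIME=300             # each test runs for 300 seconds
SCALE=8G                 # 8GB of data per schema
LOAD_PARALLEL_DEGREE=8   # assumed: twice the CPUs seen by Oracle
```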


To run SLOB, we use the runit.sh script in the SLOB directory.  In this example I am going to use all of the 128 schemas I created, and specify one thread per schema:

./runit.sh -s 128 -t 1
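A run of the full test matrix can be scripted around runit.sh.  The sketch below is a minimal example assuming a default SLOB directory layout; it works on a sample copy of slob.conf so it never touches the real file, and the actual runit.sh call is left commented out:

```shell
#!/bin/sh
# Sketch of a driver for one nine-test round: 3 update mixes x 3 repeats.
CONF=slob.conf.sketch
printf 'UPDATE_PCT=0\nRUN_TIME=300\nSCALE=8G\n' > "$CONF"   # sample config

for pct in 0 30 50; do
  # set the read/write mix for this group of tests
  sed -i "s/^UPDATE_PCT=.*/UPDATE_PCT=${pct}/" "$CONF"
  for run in 1 2 3; do
    echo "test: UPDATE_PCT=${pct} run ${run}"
    # ./runit.sh -s 128 -t 1                     # launch the real test
    # cp awr.txt "awr_pct${pct}_run${run}.txt"   # keep each AWR report
  done
done
```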

Be careful not to overload the CPU and memory capabilities of your server.  If the test server is overloaded and starts to thrash, you are no longer testing Oracle I/O but Linux’s ability to handle an oversized processing load.

SLOB generates an AWR snap before the test starts, and another when the test completes. It then generates an AWR report between the two collections and writes it as awr.txt in the SLOB directory.

We can inspect this AWR report to see the I/O observed by Oracle.

The following shell one-liner pulls the Tablespace IO Stats section from the AWR report.

[oracle@unirac01 SLOB]$ fn=awr.txt; tail -n +$(grep -n "Tablespace IO Stats" $fn | cut -f1 -d":") $fn | head -n 25
Tablespace IO Stats DB/Inst: SLOB/slob1 Snaps: 359-360
-> ordered by IOs (Reads + Writes) desc

                          Av    Av      Av    1-bk  Av 1-bk % Opt        Writes
Tablespace      Reads  Rds/s Rd(ms) Blks/Rd  Rds/s   Rd(ms) Reads Writes  avg/s
---------- ---------- ------ ------ ------- ------ -------- ----- ------ ------
SLOBDATA   12,124,290 39,796    1.6     1.0 39,796      1.6    .0     93      0 
SYSAUX            162      1    2.6     1.0      1      2.5    .0    310      1
UNDOTBS1          153      1   17.6     1.0      1     17.6    .0    223      1
SYSTEM             75      0    7.3     1.0      0      7.3    .0     17      0 
UNDOTBS2            4      0   17.5     1.0      0     17.5    .0      4      0

From the above output, we can see AWR recorded 39,796 reads per second at an average of 1.6ms per read, and essentially zero writes per second against the SLOBDATA tablespace.
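As a sanity check, that read rate can be converted into bandwidth, assuming the database's default 8KB block size:

```shell
# 39,796 single-block (8KB) reads per second, expressed in MB/s
reads_per_sec=39796
block_bytes=8192
echo "$(( reads_per_sec * block_bytes / 1024 / 1024 )) MB/s"   # roughly 310 MB/s
```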

This second one-liner pulls more detailed I/O statistics from the AWR report:

[oracle@unirac01 SLOB]$ fn=awr_1.txt; tail -n +$(grep -n "physical read IO requests" $fn | cut -f1 -d":") $fn | head -n 35 | grep physical
physical read IO requests                24,564,203       80,744.1   1,023,508.5
physical read bytes                 201,232,334,848  661,463,251.8 8.3846806E+09
physical read total IO requests          24,565,836       80,749.4   1,023,576.5
physical read total bytes           201,259,089,920  661,551,197.4 8.3857954E+09
physical read total multi block                   4            0.0           0.2
physical reads                           24,564,494       80,745.0   1,023,520.6
physical reads cache                     24,564,491       80,745.0   1,023,520.5
physical reads cache prefetch            23,298,621       76,584.0     970,775.9
physical reads direct                             3            0.0           0.1
physical reads direct (lob)                       3            0.0           0.1
physical reads direct temporary                   0            0.0           0.0
physical reads prefetch warmup                    0            0.0           0.0
physical write IO requests                      428            1.4          17.8
physical write bytes                      9,330,688       30,670.6     388,778.7
physical write total IO requests              1,084            3.6          45.2
physical write total bytes               23,367,680       76,811.0     973,653.3
physical write total multi block                 32            0.1           1.3
physical writes                               1,139            3.7          47.5
physical writes direct                          477            1.6          19.9
physical writes direct (lob)                      9            0.0           0.4
physical writes direct temporary                299            1.0          12.5
physical writes from cache                      662            2.2          27.6
physical writes non checkpoint                1,111            3.7          46.3

The following shell one-liner pulls the average CPU idle from the vmstat.out file generated by SLOB during its execution:

[oracle@unirac01 SLOB]$ tot=0; cnt=0 ; for i in $( awk '{ print $15; }' vmstat.out | grep -oE '[0-9]+' ) ; do let tot=$tot+$i ; ((cnt++)) ; done ; let fin=$tot/$cnt; echo "CPU idle average "$fin
CPU idle average 6

If the average CPU idle drops below 4% it is possible your results are degraded by the server being overloaded.
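The same average can be computed in a single awk pass, assuming vmstat's default column layout (idle is column 15, and the first two lines are headers).  Here it is fed a two-sample stand-in for vmstat.out:

```shell
# Average the "id" column; in a real run, replace the here-document
# with:  awk '...' vmstat.out
awk 'NR > 2 { sum += $15; n++ } END { printf "CPU idle average %d\n", sum / n }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 4  1      0 123456    789 101112    0    0 39000   900 9000 8000 80 12  5  3  0
 6  1      0 123456    789 101112    0    0 41000   950 9100 8100 79 11  7  3  0
EOF
```

With these two stand-in samples it prints "CPU idle average 6".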

Since my VMs are quite small, I decided to create a two-node RAC database and drive I/O from both nodes.

When configuring SLOB in a RAC database, SLOB offers the ability to direct specific users to specific nodes, preventing the same schemas being accessed from more than one node.  This is important for I/O testing: if the same schemas are accessed on all nodes, Oracle block requests may be served through cache fusion and bypass physical I/O altogether.

I set up two entries in my TNSNAMES.ORA file – slob1 and slob2.

SLOB lets me direct I/O by using the SQLNET_SERVICE_MAX directive in the slob.conf file:

# Settings for SQL*Net connectivity:
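The directives under that header, reconstructed from the behavior described in this post (slob is the assumed alias base):

```
SQLNET_SERVICE_BASE=slob   # connections go to TNS aliases slob1, slob2, ...
SQLNET_SERVICE_MAX=2       # ...round-robin, wrapping after alias slob2
```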

In this example, when I launch runit.sh, SLOB will send connection 1 to TNS alias slob1, and connection 2 to TNS alias slob2 in a round-robin until all 128 users are connected.

This functionality allows me to easily scale my SLOB test across as many RAC nodes as I want to use for the I/O test.  This can be important for hyper-performance arrays such as the DSSD D5 which can drive far more I/O than a single server can consume.

So I will run a total of 18 tests: three tests each against uncompressed LUNs at 0%, 30% and 50% updates.  I will then use ASM to add four compressed LUNs to the diskgroup and remove the four uncompressed LUNs, allowing ASM to move the blocks from the uncompressed disks to the compressed disks.

Then the nine tests will be repeated.
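The swap can be done online with a single ALTER DISKGROUP, so one rebalance pass moves the data.  This is a sketch with hypothetical device and disk names:

```sql
-- Hypothetical names; one statement means one rebalance pass.
ALTER DISKGROUP data
  ADD  DISK '/dev/mapper/unity_comp1', '/dev/mapper/unity_comp2',
            '/dev/mapper/unity_comp3', '/dev/mapper/unity_comp4'
  DROP DISK DATA_0000, DATA_0001, DATA_0002, DATA_0003
  REBALANCE POWER 8;

-- Wait until the rebalance completes (no rows returned) before testing:
SELECT operation, state, est_minutes FROM v$asm_operation;
```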

And this is what I observed with my 2-node RAC.

test  n1 reads/s  n1 rd(ms)  n2 reads/s  n2 rd(ms)  n1 writes/s  n2 writes/s  comp  update%
   1      39,731        1.6      39,732        1.6            0            0    no        0
   2      39,876        1.6      39,875        1.6            0            0    no        0
   3      39,709        1.6      39,689        1.6            0            0    no        0
   4      28,462        2.2      28,457        2.2        8,405        8,412    no       30
   5      28,483        2.2      28,524        2.2        8,424        8,430    no       30
   6      28,329        2.2      28,314        2.2        8,386        8,380    no       30
   7      24,880        2.5      24,879        2.5       11,730       11,795    no       50
   8      23,909        2.4      24,437        2.5       11,306       11,542    no       50
   9      22,917        1.9      25,591        2.4       10,840       12,031    no       50
  10      28,063        2.2      28,078        2.2            0            0   yes        0
  11      28,642        2.2      28,669        2.2            0            0   yes        0
  12      28,743        2.2      28,787        2.2            0            0   yes        0
  13      21,542        2.9      21,568        2.9        6,394        6,400   yes       30
  14      21,074        2.9      21,093        2.9        6,282        6,312   yes       30
  15      21,443        2.9      21,376        2.9        6,369        6,405   yes       30
  16      18,854        3.3      18,888        3.3        8,877        8,854   yes       50
  17      18,517        3.3      18,467        3.3        8,711        8,744   yes       50
  18      18,729        3.3      18,624        3.3        8,767        8,808   yes       50

(n1/n2 = RAC node 1 and node 2; reads and writes are per-second averages from AWR.)

The first three tests show about 40K read IOPs with about 1.6ms latency.

Tests 4 thru 6 added a 30% update load.  I observed about 28K read IOPs per node, and about 8,400 write IOPs.  Latency increased to 2.2ms.

Tests 7 thru 9 added a 50% update load.  Read IOPs per node are about 24K with about 2.5ms of latency.

Then I move the data to compressed LUNs.

Tests 10 thru 12 mimic tests 1 thru 3, with no updates but now on compressed LUNs.  Read IOPs are about 28K, compared to almost 40K uncompressed, with latency at 2.2ms, compared to 1.6ms previously.

Tests 13 thru 15 add a 30% update load.  Read IOPs now measure around 21K per node, with around 6,300 write IOPs.  Latency increases to 2.9ms.

Finally, tests 16 thru 18 add a 50% update load.  Read IOPs are now around 18.5K and write IOPs around 8,800.  Latency increases to 3.3ms.

From these tests we can see that Unity compression increases latency on Oracle ASM disks. But how does it affect capacity?
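Putting rough numbers on that from the read-only averages above (node 1: about 39,772 reads/s uncompressed versus about 28,483 compressed):

```shell
# Percentage drop in read IOPs after moving to compressed LUNs
awk 'BEGIN { u = 39772; c = 28483; printf "read IOPs drop: %.0f%%\n", (u - c) / u * 100 }'
# prints: read IOPs drop: 28%
```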

Checking the Unisphere interface for the consistency group of the compressed LUNs, we see 1.1TB of allocated space is consuming 386.2GB.


Clicking on the LUN tab we can see a measured compression ratio of 4.5:1.


Checking the Unisphere interface during these tests, we see the processors reaching almost 60% utilization during the 50/50 read/write tests on compressed LUNs, so the array still has some headroom and can handle more load.



SLOB is a great tool for testing Oracle I/O performance.

Plan your performance tests carefully to ensure that you really are testing I/O and not Oracle cache performance or the performance of your servers.

Create and test enough data to exceed the database and array caches.

Repeat tests several times to identify anomalies in your tests.  Single tests prove nothing.

Use AWR to see what Oracle observed as I/O performance.  I/O performance measured at other points in the stack may not be representative of what Oracle sees.

You may not be able to achieve the hero numbers vendors publish.  Remember, many vendors use a 4K block size, 100% read workload to achieve the high IOPs numbers they publish.  That is not representative of a real-world workload.

Unity all-flash compression allows for impressive space savings, up to 4.5:1.

Unity compression does impact I/O performance, especially in write-heavy environments.  That makes it a good fit for ASM disk groups such as the FRA, where lower performance may be acceptable, but it should be used with caution where I/O performance is the highest priority.

