Oracle Database Performance Testing the Easy Way

We have developed a tool for testing Oracle database performance. The tool connects to a database, executes a set of SELECT statements, and records their execution times. It can also execute queries in parallel through a pool of multiple connections, thus simulating a real-life workload on the database. Performance test runs can be scheduled to run regularly, allowing for continuous monitoring of database performance, which is more relevant than ever in the new era of self-tuning cloud databases.

The Database Performance Testing Tool is written in Python. It can be run in any environment that has a recent Python 3 installation and an Oracle database client available. If the database to be tested is in the cloud and uses wallet-based authentication, the wallet must be set up in that environment. In addition, the Database Performance Testing Tool requires its own dedicated metadata schema, where the test scenarios are defined and the test results stored.
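Before a first test run, it is worth sanity-checking the environment with a short connectivity probe. The sketch below assumes the python-oracledb driver and a cloud wallet; the schema name, DSN alias, and wallet paths are placeholders, not part of the Tool itself.

import oracledb

# A minimal connectivity probe, assuming python-oracledb and a cloud wallet;
# the user, DSN alias, and wallet paths below are placeholders.
conn = oracledb.connect(
    user="dpt_meta",                       # placeholder metadata schema owner
    password="...",
    dsn="mydb_high",                       # TNS alias from the wallet's tnsnames.ora
    config_dir="/opt/oracle/wallet",       # directory holding tnsnames.ora
    wallet_location="/opt/oracle/wallet",  # wallet files for mTLS authentication
    wallet_password="...",
)
with conn.cursor() as cursor:
    cursor.execute("SELECT banner FROM v$version")
    print(cursor.fetchone()[0])
conn.close()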

To set up a database performance test environment, we first need to create a dedicated metadata schema and the tables in it. (The tool comes with DDL scripts for metadata schema setup.) The metadata tables can be split into two groups:

Test input metadata tables:

  • DPT_DATABASE: This table holds connection details for all the databases to be performance-tested. We may want to run bespoke performance tests for different databases, or to compare database performance before and after a migration.
  • DPT_SCRIPT: This table is where we add our performance test SQL SELECT statements. In addition to the SELECT statement itself, we can also record which BI dashboard and report the test script is based on, in case of a BI performance test.
  • DPT_SCRIPT_BASELINE: Optionally, we can specify an execution duration baseline that comes from an external source. A good example of this is Oracle OBIEE Usage Tracking data for BI dashboards and reports. We can import Usage Tracking data into this table, which allows us to compare the results from the Database Performance Testing Tool with Usage Tracking.
  • DPT_SCENARIO: When the Database Performance Testing Tool runs, it runs a particular Test Scenario. A Test Scenario has an ID and a descriptive name, such as "Quarterly BI HR Reports Performance Test." The purpose of a Test Scenario is to define the testing content: a set of SELECT statements representing a particular aspect of performance testing.
  • DPT_SCENARIO_ITEM: This is a many-to-many bridge table between the DPT_SCENARIO and DPT_SCRIPT tables, as sketched below.
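The Tool comes with its own DDL scripts, so the following is purely illustrative: a sketch of what the scenario and bridge tables might look like. Any column beyond those described above is an assumption.

import oracledb

conn = oracledb.connect(user="dpt_meta", password="...", dsn="mydb_high")
with conn.cursor() as cursor:
    # Illustrative shape only; the Tool's own DDL scripts are authoritative.
    cursor.execute("""
        CREATE TABLE dpt_scenario (
            scenario_id   NUMBER PRIMARY KEY,
            scenario_name VARCHAR2(200) NOT NULL
        )""")
    # Assumes DPT_SCRIPT has already been created; the composite primary key
    # lets a script appear in a scenario only once.
    cursor.execute("""
        CREATE TABLE dpt_scenario_item (
            scenario_id NUMBER REFERENCES dpt_scenario,
            script_id   NUMBER REFERENCES dpt_script,
            PRIMARY KEY (scenario_id, script_id)
        )""")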

Test output metadata tables:

  • DPT_SCENARIO_RUN: This table contains one record per Database Performance Test run. It includes the start and finish timestamps of the test run, the number of scripts executed, a success flag (set to true if all test items executed successfully), an indication of whether the test scripts were run sequentially or in parallel, and, in the case of a parallel run, the number of connections used for the test.
  • DPT_SCENARIO_ITEM_RUN: This table contains one record per executed test script. It includes the script execution start and finish timestamps, a success flag, and an error message in case of a failure. A sketch of how such a record might be produced follows this list.
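To make the output tables concrete, here is a minimal sketch of how a single test script might be timed and recorded. The column names in the INSERT are assumptions based on the descriptions above, not the Tool's actual DDL.

from datetime import datetime

def run_script(conn, scenario_run_id, script_id, sql_text):
    # Time one test script and record the outcome; a failing script is
    # captured as an error record rather than stopping the whole test run.
    start = datetime.now()
    success, error_msg = 1, None
    try:
        with conn.cursor() as cursor:
            cursor.execute(sql_text)
            cursor.fetchall()  # fetch all rows so timing covers the full result set
    except Exception as exc:
        success, error_msg = 0, str(exc)[:4000]
    finish = datetime.now()
    with conn.cursor() as cursor:
        cursor.execute(
            """INSERT INTO dpt_scenario_item_run
               (scenario_run_id, script_id, start_ts, finish_ts, success, error_msg)
               VALUES (:1, :2, :3, :4, :5, :6)""",
            [scenario_run_id, script_id, start, finish, success, error_msg],
        )
    conn.commit()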

Please note that the duration of a Test Scenario run will match the total duration of its items only in the case of sequential test execution. In the case of parallel execution, the duration of the Test Scenario run should be shorter, since item executions overlap.
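The parallel mode could be sketched along the following lines, assuming python-oracledb's connection pool and a thread pool of workers; the pool size and credentials are placeholders. With 5 connections, the scenario's wall-clock duration approaches that of the slowest items rather than the sum of all of them.

import concurrent.futures
import oracledb

# A sketch of parallel execution: each worker borrows its own connection
# from the pool, so up to 5 scripts run concurrently.
pool = oracledb.create_pool(user="dpt_meta", password="...", dsn="mydb_high",
                            min=5, max=5, increment=1)

def run_one(sql_text):
    with pool.acquire() as conn:
        with conn.cursor() as cursor:
            cursor.execute(sql_text)
            cursor.fetchall()

scripts: list[str] = []  # populate from DPT_SCRIPT for the chosen scenario

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    executor.map(run_one, scripts)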

Let us try it out. We want to run Test Scenario ID=1 on Database ID=1 and execute each test script 3 times. The following shell command achieves that:

python .\DBPerformanceTester.py 1 1 -i 3

The result looks like this in the console:

In the Test Scenario Item Run table, we get the following records:

Notice that one of the scripts failed with a syntax error, but the test carried on. The execution times here are excellent because the scripts were run on a very small dataset.

For testing purposes, a PL/SQL function was written that creates an artificial delay for a given number of seconds and can be called from a SELECT statement. A Test Scenario was created with 4 Test Scripts that run for 1, 3, 5 and 7 seconds respectively. This time we want our test to be a stress test with 5 connections in the connection pool, and we want each item to be executed 7 times. We also want to compare the results against Baseline ID=33.
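Such a delay function could look something like the sketch below, assuming Oracle 18c or later, where DBMS_SESSION.SLEEP is available (older releases would use DBMS_LOCK.SLEEP instead); the function name is a placeholder.

import oracledb

conn = oracledb.connect(user="dpt_meta", password="...", dsn="mydb_high")
with conn.cursor() as cursor:
    # One possible delay function; DBMS_SESSION.SLEEP pauses the session
    # without consuming CPU. The name dpt_sleep is a placeholder.
    cursor.execute("""
        CREATE OR REPLACE FUNCTION dpt_sleep(p_seconds IN NUMBER) RETURN NUMBER IS
        BEGIN
            DBMS_SESSION.SLEEP(p_seconds);
            RETURN p_seconds;
        END;""")
    # A test script then needs nothing more than:
    cursor.execute("SELECT dpt_sleep(3) FROM dual")
    print(cursor.fetchone()[0])  # prints 3 after a delay of roughly 3 seconds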

We run the following command:

python .\DBPerformanceTester.py 2 1 -s 5 -i 7 -b 33

The result looks like this in the console:

Notice the different output format because of the parallel connections used. And because of the longer test script execution times, we can now see that the script executions overlap.

This is the result in the Test Scenario Item Run table:
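For reference, the command-line interface used in the two examples above could be reconstructed roughly as follows; the long option names and defaults are assumptions inferred from the example commands, not the Tool's documented interface.

import argparse

# A hedged reconstruction of the CLI seen in the examples above.
parser = argparse.ArgumentParser(prog="DBPerformanceTester.py")
parser.add_argument("scenario_id", type=int, help="DPT_SCENARIO ID to run")
parser.add_argument("database_id", type=int, help="DPT_DATABASE ID to test against")
parser.add_argument("-i", "--iterations", type=int, default=1,
                    help="number of times each test script is executed")
parser.add_argument("-s", "--stress", type=int, default=1,
                    help="connection pool size for a parallel (stress) run")
parser.add_argument("-b", "--baseline", type=int,
                    help="DPT_SCRIPT_BASELINE ID to compare results against")
args = parser.parse_args()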

Now, based on the output above, we can create a BI report that charts DB performance over time, alerts us to any errors encountered, and allows us to explore the causes of DB performance issues. (Please note that analysis and reports are not part of the Tool.) We can schedule the test to run daily at a certain time, for example right before business hours. (For scheduling we use our preferred scheduler; scheduling is not part of the Tool itself.)
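As a starting point for such a report, a query like the following over the output table could feed a daily trend chart. This lives outside the Tool; the column names follow the descriptions above, and the CAST to DATE (whole-second precision) is an assumption about the column types that keeps the duration arithmetic simple.

import oracledb

conn = oracledb.connect(user="dpt_meta", password="...", dsn="mydb_high")
# Average per-script duration per day; sub-second precision is sacrificed,
# which is an acceptable trade-off for a daily trend chart.
sql = """
    SELECT TRUNC(CAST(start_ts AS DATE)) AS run_day,
           script_id,
           AVG((CAST(finish_ts AS DATE) - CAST(start_ts AS DATE)) * 86400) AS avg_seconds
    FROM   dpt_scenario_item_run
    WHERE  success = 1
    GROUP  BY TRUNC(CAST(start_ts AS DATE)), script_id
    ORDER  BY run_day, script_id
"""
with conn.cursor() as cursor:
    cursor.execute(sql)
    for run_day, script_id, avg_seconds in cursor:
        print(run_day.date(), script_id, round(avg_seconds, 2))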

This Database Performance Testing Tool is a simple yet effective means of monitoring database query performance, and scheduled runs make it easy to track performance trends over time.