IBM DB2
IBM DB2 Recipe
- Create a pressure valve with WLM
- Ensure that the version of the DB2 driver matches the DB2 backend version. Note that higher level JDBC drivers are compatible with lower level DB2 servers.
- Before DB2 driver version 4.26.17, set -Ddb2.jcc.override.timerLevelForQueryTimeOut=2
- On AIX, use MALLOCOPTIONS=buckets,multiheap
- Gather db2pd -stack all during issue times.
DB2 JCC Driver
General JCC Recommendations
Ensure that the version of the DB2 driver matches the DB2 backend version. Note that higher level JDBC drivers are compatible with lower level DB2 servers.
Read Timeout
Set a read timeout with blockingReadConnectionTimeout (defaults to unlimited):
blockingReadConnectionTimeout: The amount of time in seconds before a connection socket read times out. This property applies only to IBM Data Server Driver for JDBC and SQLJ type 4 connectivity, and affects all requests that are sent to the data source after a connection is successfully established. The default is 0. A value of 0 means that there is no timeout.
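For reference, the following is a minimal sketch of passing this property on a standalone type 4 connection through java.util.Properties; the URL, credentials, and timeout value are hypothetical, and in an application server the property is normally set as a data source custom property instead:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class Db2ReadTimeoutExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "db2user");         // hypothetical credentials
        props.setProperty("password", "db2password");
        // Fail socket reads that take longer than 300 seconds instead of blocking forever
        props.setProperty("blockingReadConnectionTimeout", "300");
        // Hypothetical type 4 URL (host, port, and database name are placeholders)
        String url = "jdbc:db2://db2host:50000/SAMPLE";
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected with a 300 second read timeout");
        }
    }
}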
timerLevelForQueryTimeOut
For both type 2 and type 4 drivers, before version 4.26.17, the default timerLevelForQueryTimeOut of QUERYTIMEOUT_STATEMENT_LEVEL (1) creates a timer object for each statement execution when there is a non-zero timeout, and this may have a large performance impact. The alternative is to create a single timer object for each connection with -Ddb2.jcc.override.timerLevelForQueryTimeOut=2. In one case, this improved performance by 65%. Changing the default means holding the timer and related memory for longer for each connection in a connection pool, but this is an acceptable cost for most customers for the improved performance.
The main symptom of this is that thread dumps will show many threads with stack tops in java/lang/Thread.startImpl called from com/ibm/db2 code. For example:
3XMTHREADINFO "WebContainer : 433" J9VMThread:0x000000003765AA00, j9thread_t:0x000001004F74D8C0, java/lang/Thread:0x0000000591475000, state:R, prio=5
3XMJAVALTHREAD (java/lang/Thread getId:0x4FC, isDaemon:true)
3XMTHREADINFO1 (native thread ID:0x62E0445, native priority:0x5, native policy:UNKNOWN, vmstate:CW, vm thread flags:0x00000001)
3XMCPUTIME CPU usage total: 221.008744000 secs, user: 205.519645000 secs, system: 15.489099000 secs, current category="Application"
3XMHEAPALLOC Heap bytes allocated since last GC cycle=131072 (0x20000)
3XMTHREADINFO3 Java callstack:
4XESTACKTRACE at java/lang/Thread.startImpl(Native Method)
4XESTACKTRACE at java/lang/Thread.start(Thread.java:948(Compiled Code))
5XESTACKTRACE (entered lock: java/lang/Thread$ThreadLock@0x0000000692FA3730, entry count: 1)
5XESTACKTRACE (entered lock: java/util/TimerThread@0x0000000692FA3698, entry count: 1)
4XESTACKTRACE at java/util/Timer.<init>(Timer.java:187(Compiled Code))
4XESTACKTRACE at java/util/Timer.<init>(Timer.java:157(Compiled Code))
4XESTACKTRACE at com/ibm/db2/jcc/am/wo.a(wo.java:5151(Compiled Code))
4XESTACKTRACE at com/ibm/db2/jcc/am/wo.ec(wo.java:5275(Compiled Code))
4XESTACKTRACE at com/ibm/db2/jcc/am/xo.b(xo.java:4191(Compiled Code))
4XESTACKTRACE at com/ibm/db2/jcc/am/xo.jc(xo.java:760(Compiled Code))
4XESTACKTRACE at com/ibm/db2/jcc/t4/j.jc(j.java:134(Compiled Code))
4XESTACKTRACE at com/ibm/db2/jcc/am/xo.executeQuery(xo.java:725(Compiled Code))
5XESTACKTRACE (entered lock: com/ibm/db2/jcc/t4/b@0x000000063157C588, entry count: 1)
Starting with DB2 driver version 4.26.17, the default has been changed to -Ddb2.jcc.override.timerLevelForQueryTimeOut=2.
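For context, the timer in question is only created when a statement runs with a non-zero query timeout, for example through Statement.setQueryTimeout. The following is a minimal, hypothetical sketch of such a call (the table and connection details are assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryTimeoutExample {
    // conn is an already-established DB2 connection, for example from a connection pool
    static void runWithTimeout(Connection conn) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement("SELECT ID, SYMBOL FROM MYTABLE")) {
            // A non-zero query timeout is what causes the JCC driver to create a timer;
            // before 4.26.17 the default was one timer object per statement execution.
            ps.setQueryTimeout(30);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("SYMBOL"));
                }
            }
        }
    }
}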
Keep-alive
It appears that the DB2 JDBC driver enables TCP keepalive by default: https://www-01.ibm.com/support/docview.wss?uid=swg21231084
DB2 sets the TCP/IP keepalive setting on both the client and server by default.
DB2 on z/OS
Collect and archive SMF 100-102 records. During a sustained issue, gather a console dump of the four DB2 address spaces (DBM1, DIST, MSTR, IRLM).
Type 2 vs Type 4
Type 2 is native (with an "inability to offload [most] Type 2 work to zAAP" although some may be offloaded after APAR OA29015) and type 4 is mostly Java, therefore, type 4 may use zIIPs/zAAPs; however, type 2 may outperform type 4 with similar GCP usage (or lower in the case of DB2 on a remote LPAR) because of its use of cross-memory technology and because type 4's usage of TCP drives GCP. Additional details:
[Type 4] TCP communications may be done all within the same LPAR (as done in this study) [...] Summary of Results:
- The use of Type 4 resulted in more total processor usage than Type 2.
- The overall general processor (CP) usage showed Type 2 and Type 4 to be approximately equal.
- Performance enhancements in the JDBC Type 2 driver now make it equal or better than the Type 4.
Native Memory Usage
The type 2 DB2 driver mostly uses native memory below the 2GB bar (except LBF) and this usage may compete for JVM under-the-2GB-bar storage required for some data structures when using compressed references. Alternatively, the type 4 driver is pure Java and does not directly use native memory below the bar. Note that some DB2 client-side application functions may require the type 2 driver, although these functions are generally more obscure and not used by applications. In some cases, the type 4 driver may perform worse as it does more processing as part of the DRDA communication protocol and TCP/IP whereas a type 2 driver can use cross-memory calls if it's on the same LPAR as DB2.
Server
Display configuration: db2 get db cfg
DB2 Self-tuning: db2 autoconfigure apply db and dbm
Review the DB2 tuning (software & hardware) in the latest SPECjEnterprise results submitted by IBM: http://www.spec.org/jEnterprise2010/results/res2013q2/jEnterprise2010-20130402-00042.html
Located in the DB2 Control Center, [the DB2 configuration] advisor calculates and displays recommended values for the DB2 buffer pool size, the database, and the database manager configuration parameters, with the option of applying these values. See more information about the advisor in the online help facility within the Control Center.
When configuring the data source settings for the databases, confirm the DB2 MaxAppls setting is greater than the maximum number of connections for the data source. If you are planning to establish clones, set the MaxAppls value as the maximum number of connections multiplied by the number of clones. The same relationship applies to the session manager number of connections. The MaxAppls setting must be equal to or greater than the number of connections. If you are using the same database for session and data sources, set the MaxAppls value as the sum of the number of connection settings for the session manager and the data sources.
For example, MaxAppls = (number of connections set for the data source + number of connections in the session manager) multiplied by the number of clones.
After calculating the MaxAppls settings for the WebSphere Application Server database and each of the application databases, verify that the MaxAgents setting for DB2 is equal to or greater than the sum of all of the MaxAppls values. For example, MaxAgents = sum of MaxAppls for all databases.
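As a worked illustration of the arithmetic above, here is a small sketch with hypothetical connection counts and clone count:

public class MaxApplsCalculation {
    public static void main(String[] args) {
        int dataSourceConnections = 50;      // hypothetical data source maximum connections
        int sessionManagerConnections = 30;  // hypothetical session manager connections
        int clones = 4;                      // hypothetical number of clones

        // MaxAppls = (data source connections + session manager connections) * clones
        int maxAppls = (dataSourceConnections + sessionManagerConnections) * clones;

        // MaxAgents = sum of MaxAppls for all databases (a single database in this sketch)
        int maxAgents = maxAppls;

        System.out.println("MaxAppls >= " + maxAppls + ", MaxAgents >= " + maxAgents);
    }
}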
For systems with multiple hard disk drives, you can gain large performance improvements by setting the log files for each database on a different hard drive from the database files.
How to view or set: At a DB2 command prompt, issue the command: db2 update db cfg for [database_name] using newlogpath [fully_qualified_path].
Recommended value: Use a separate high-speed drive, preferably performance enhanced through a redundant array of independent disk (RAID) configuration.
If lock escalations are causing performance concerns, you might need to increase the value of [maxlocks] or the locklist parameter... You can use the database system monitor to determine if lock escalations are occurring.
- Maintain current indexes on tables: utilize the DB2 Design Advisor (available from the DB2 Control Center, or command line) to help determine indexes that could improve performance.
- Update catalog statistics: DB2 uses these to optimize access to key tables. The easiest way to maintain statistics is via the DB2 Automatic Table Maintenance feature, which runs the RUNSTATS command in the background as required to ensure that the correct statistics are collected and maintained. By default, this feature is not enabled. It may be turned on from the DB2 Control Center.
- Set buffer pool size correctly: a buffer pool is an area of memory into which database pages are read, modified, and held during processing; accessing pages from the buffer pool is much faster than accessing pages from physical devices. To choose appropriate buffer pool size settings, monitor database container I/O activity, by using system tools or by using DB2 buffer pool snapshots. Be careful to avoid configuring large buffer pool size settings which lead to paging activity on the system.
Put frequently updated columns together and at the end of the row. This has an effect on update performance due to the following logging considerations: For fixed length row updates, DB2 logs from the first changed column to the last changed column. For variable length row updates, DB2 logs from the first changed byte to the end of the row. If the length of a variable length column changes, this will result in a change to the row header (which includes the row length), and thus the entire row will be logged.
Query Execution Times
To get per-query execution times, create a DB2 event monitor (note that on the create event monitor command, single quotes around the path are required):
$ mkdir $EVMONPATH
$ chmod 777 $EVMONPATH
$ db2 connect to <db_name> user <inst_user> using <password>
$ db2 "create event monitor $NAME for statements write to file '$EVMONPATH'"
$ db2 "set event monitor $NAME state 1"
To disable an event monitor:
$ db2 "set event monitor $NAME state 0"
To process event monitor data to a human readable form:
$ db2evmon -path $EVMONPATH > commands.out
To list all event monitors:
$ db2 "select * from SYSCAT.EVENTMONITORS"
To completely delete an event monitor:
$ db2 "drop event monitor $NAME"
Example of a single query execution from db2evmon output:
Statement Event ...
Text : select id,symbol from MYTABLE
Start Time: 02-09-2010 18:21:46.159875
Stop Time: 02-09-2010 18:21:46.164743
Exec Time: 0.004868 seconds...
Tablespaces
A tablespace is a physical storage object that provides a level of indirection between a database and the tables stored within the database. It is made up of a collection of containers into which database objects are stored. A container is an allocation of space to a table space. Depending on the table space type, the container can be a directory, device, or file.
System Managed Space (SMS): stores data in operating system files. SMS table spaces are an excellent choice for general-purpose use. They provide good performance with little administration cost.
Database Managed Space (DMS): with database-managed space (DMS) table spaces, the database manager controls the storage space.
DMS tablespaces usually perform better than SMS tablespaces because they are pre-allocated and do not have to spend time extending files when new rows are added. DMS tablespaces can be either raw devices or file system files. DMS tablespaces in raw device containers provide the best performance because double buffering does not occur. Double buffering, which occurs when data is buffered first at the database manager level and then at the file system level, might be an additional cost for file containers or SMS table spaces.
If you use SMS tablespaces, consider using the db2empfa command on your database. The db2empfa (Enable Multipage File Allocation) tool enables the use of multipage file allocation for a database. With multipage file allocation enabled for SMS table spaces, disk space is allocated one extent rather than one page at a time, improving INSERT throughput.
$ db2 "LIST TABLESPACES SHOW DETAIL"
Buffer Pools
There is no definitive answer to the question of how much memory you should dedicate to the buffer pool. Generally, more is better. A good rule of thumb would be to start with about 75% of your system's main memory devoted to buffer pool(s), but this rule is applicable only if the machine is a dedicated database server.
If your tablespaces have multiple page sizes, then you should create one buffer pool for each page size.
Buffpage is a database configuration parameter. A buffer pool is a memory storage area where database pages containing table rows or index entries are temporarily read and changed. Data is accessed much faster from memory than from disk.
How to view or set: To view the current value of buffpage for database x, issue the DB2 command get db cfg for x and look for the value BUFFPAGE. To set BUFFPAGE to a value of n, issue the DB2 command update db cfg for x using BUFFPAGE n and set NPAGES to -1 as follows:
db2 <-- go to DB2 command mode, otherwise the following "select" does not work as is
connect to x <-- (where x is the particular DB2 database name)
select * from syscat.bufferpools (and note the name of the default, perhaps: IBMDEFAULTBP) (if NPAGES is already -1, there is no need to issue the following command)
alter bufferpool IBMDEFAULTBP size -1
(re-issue the above "select" and NPAGES now equals -1)
You can collect a snapshot of the database while the application is running and calculate the buffer pool hit ratio as follows:
- Collect the snapshot: issue the update monitor switches using bufferpool on command. Make sure that bufferpool monitoring is on by issuing the get monitor switches command. Clear the monitor counters with the reset monitor all command.
- Run the application.
- Issue the get snapshot for all databases command before all applications disconnect from the database; otherwise, the statistics are lost.
- Issue the update monitor switches using bufferpool off command.
- Calculate the hit ratio by looking at the following database snapshot statistics: buffer pool data logical reads, buffer pool data physical reads, buffer pool index logical reads, and buffer pool index physical reads.
Default value: 250
Recommended value: Continue increasing the value until the snapshot shows a satisfactory hit rate.
The buffer pool hit ratio indicates the percentage of time that the database manager did not need to load a page from disk to service a page request. That is, the page is already in the buffer pool. The greater the buffer pool hit ratio, the lower the frequency of disk input and output. Calculate the buffer pool hit ratio as follows:
P = buffer pool data physical reads + buffer pool index physical reads
L = buffer pool data logical reads + buffer pool index logical reads
Hit ratio = (1 - (P/L)) * 100%
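For example, a small sketch of the same calculation with hypothetical snapshot counters:

public class BufferPoolHitRatio {
    public static void main(String[] args) {
        // Hypothetical values read from a database snapshot
        long dataPhysicalReads = 1200;
        long indexPhysicalReads = 300;
        long dataLogicalReads = 95000;
        long indexLogicalReads = 40000;

        double p = dataPhysicalReads + indexPhysicalReads; // total physical reads
        double l = dataLogicalReads + indexLogicalReads;   // total logical reads

        double hitRatio = (1 - (p / l)) * 100;
        System.out.printf("Buffer pool hit ratio = %.2f%%%n", hitRatio); // about 98.89%
    }
}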
Indexing
An index is a set of keys, each pointing to a row or rows in a table. An index serves to ensure uniqueness, as in the case of a primary key, and to allow more efficient access to rows in a table by creating a direct path to the data through pointers. The SQL optimizer automatically chooses the most efficient way to access data in tables. The optimizer takes indexes into consideration when determining the fastest access path to data.
An index will impact disk storage usage, insert and delete processing, and database maintenance.
The intent of a clustering index is so that the sequence of key values closely corresponds to the sequence of rows stored in a table.
Create as few indexes as possible. Consider creating indexes with the ALLOW REVERSE SCANS option. Pay close attention to the order of the columns in the index. Don't create redundant indexes. Use the DB2 Explain facilities to determine the actual usage of the indexes.
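For illustration only, a minimal sketch of creating such an index through JDBC; the table, column, and index names are hypothetical, and the column order shown is just an example of putting the most frequently filtered column first:

import java.sql.Connection;
import java.sql.Statement;

public class CreateIndexExample {
    // conn is an already-established DB2 connection
    static void createSymbolIndex(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "CREATE INDEX MYTABLE_SYMBOL_IX ON MYTABLE (SYMBOL, ID) ALLOW REVERSE SCANS");
        }
    }
}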
Logging
One of the main purposes of all database systems is to maintain the integrity of your data. All databases maintain log files that keep records of database changes. DB2 logging consists of a set of primary and secondary log files that contain log records that record all changes to a database. The database log is used to roll back changes for units of work that are not committed and to recover a database to a consistent state. DB2 provides two logging strategy choices.
Circular logging is the default log mode. With circular logging, the log records fill the log files and then overwrite the initial log records in the initial log file. The overwritten log records are not recoverable. This type of logging is typically not suited for a production application.
Log Retain logging is a setting where a log is archived when it fills with log records. New log files are made available for log records. Retaining log files enables roll-forward recovery. Roll-forward recovery reapplies changes to the database based on completed units of work (transactions) that are recorded in the log. You can specify that roll-forward recovery is to the end of the logs, or to a particular point in time before the end of the logs. Archived log files are never directly deleted by DB2, therefore, it is the applications' responsibility to maintain them; i.e. archive, purge, etc.
Placement of the log files needs to be optimized, not only for write performance, but also for read performance, because the database manager will need to read the log files during database recovery.
Increase the size of the database configuration Log Buffer parameter (logbufsz). This parameter specifies the amount of the database heap to use as a buffer for log records before writing these records to disk.
Buffering the log records results in more efficient log file I/O because the log records are written to disk less frequently and more log records are written each time.
Reorg
SQL statement performance can deteriorate after many updates, deletes or inserts.
Use the DB2 reorgchk update statistics on table all command to perform the runstats operation on all user and system tables for the database to which you are currently connected. Rebind packages using the bind command. If statistics are available, issue the db2 -v "select tbname, nleaf, nlevels, stats_time from sysibm.sysindexes" command on the DB2 CLP. If no statistic updates exist, nleaf and nlevels are -1, and stats_time has an empty entry (for example: "-"). If the runstats command was previously run, the timestamp from the completion of the runstats operation also displays under stats_time. If you think the time shown for the previous runstats operation is too old, run the runstats command again.
Runstats
The DB2 optimizer uses information and statistics in the DB2 catalog in order to determine the best access to the database based on the query provided. Statistical information is collected for specific tables and indexes in the local database when you execute the RUNSTATS utility. When significant numbers of table rows are added or removed, or if data in columns for which you collect statistics is updated, execute RUNSTATS again to update the statistics.
After running RUNSTATS on your database tables, rebind your applications to take advantage of the new statistics. This ensures that the best access plan is used for your SQL statements. To clear the contents of the SQL cache, use the FLUSH PACKAGE CACHE SQL statement.
Explain
Explain allows you to capture information about the access plan chosen by the optimizer as well as performance information that helps you tune queries. Before you can capture explain information, you need to create the relational tables in which the optimizer stores the explain information and set the special registers that determine what kind of explain information is captured.
- db2 -tf EXPLAIN.DDL (located in the sqllib/misc directory) creates the explain tables.
- db2exfmt: this command line tool is used to display explain information in preformatted output.
- db2expln and dynexpln: these command line tools are used to see the access plan information available for one or more packages of static SQL statements. db2expln shows the actual implementation of the chosen access plan. It does not show optimizer information. The dynexpln tool, which uses db2expln within it, provides a quick way to explain dynamic SQL statements that contain no parameter markers. This use of db2expln from within dynexpln is done by transforming the input SQL statement into a static statement within a pseudo-package. When this occurs, the information may not always be completely accurate. If complete accuracy is desired, use the explain facility. The db2expln tool does provide a relatively compact and English-like overview of what operations will occur at run time by examining the actual access plan generated.
Isolation Levels
An isolation level determines how data is locked or isolated from other processes while the data is being accessed. The isolation level will be in effect for the duration of the unit of work. DB2 supports the following isolation levels, listed in order of most restrictive to least restrictive:
- Repeatable Read - An isolation level that locks all the rows in an application that are referenced within a transaction. When a program uses repeatable read protection, rows referenced by the program cannot be changed by other programs until the program ends the current transaction.
- Read Stability - An isolation level that locks only the rows that an application retrieves within a transaction. Read stability ensures that any qualifying row that is read during a transaction is not changed by other application processes until the transaction is completed, and that any row changed by another application process is not read until the change is committed by that process.
- Cursor Stability - An isolation level that locks any row accessed by a transaction of an application while the cursor is positioned on the row. The lock remains in effect until the next row is fetched or the transaction is terminated. If any data is changed in a row, the lock is held until the change is committed to the database.
- Uncommitted Read - An isolation level that allows an application to access uncommitted changes of other transactions. The application does not lock other applications out of the row that it is reading, unless the other application attempts to drop or alter the table. Sometimes referred to as "dirty reads".
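From a JDBC application, these levels are typically selected through java.sql.Connection isolation constants. The mapping shown in the comments below is the commonly cited correspondence for the DB2 JCC driver, but verify it against the documentation for your driver level:

import java.sql.Connection;

public class IsolationLevelExample {
    // conn is an already-established DB2 connection
    static void useCursorStability(Connection conn) throws Exception {
        // Commonly cited JDBC-to-DB2 mapping (verify for your driver level):
        //   TRANSACTION_SERIALIZABLE     -> Repeatable Read (RR)
        //   TRANSACTION_REPEATABLE_READ  -> Read Stability (RS)
        //   TRANSACTION_READ_COMMITTED   -> Cursor Stability (CS)
        //   TRANSACTION_READ_UNCOMMITTED -> Uncommitted Read (UR)
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}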
Lock Timeouts
To view the current value of the lock timeout property for database xxxxxx, issue the DB2 command get db cfg for xxxxxx and look for the value LOCKTIMEOUT. To set LOCKTIMEOUT to a value of n, issue the DB2 command update db cfg for xxxxxx using LOCKTIMEOUT n, where xxxxxx is the name of the application database and n is a value between 0 and 30 000 inclusive.
Default value: -1, meaning lock timeout detection is turned off. In this situation, an application waits for a lock if one is not available at the time of the request, until either the lock is granted or a deadlock occurs.
Recommended value: If your database access pattern tends toward a majority of writes, set this value so that it gives you early warning when a timeout occurs. A setting of 30 seconds suits this purpose. If your pattern tends toward a majority of reads, either accept the default lock timeout value, or set the property to a value greater than 30 seconds.
If lock escalations occur frequently, increase the value of either locklist or maxlocks, or both.
Query Tuning
Use the OPTIMIZE FOR n ROWS clause to give priority to retrieving the first n rows in the full result set.
Use the FETCH FIRST n ROWS ONLY clause to retrieve only a specified number of rows. Take advantage of row blocking by specifying the FOR READ ONLY or FOR FETCH ONLY clause, the OPTIMIZE FOR n ROWS clause, or by declaring your cursor as SCROLLing. This will improve performance and, in addition, improve concurrency because exclusive locks are never held on the rows retrieved.
Consider the use of APPEND MODE
Insert multiple rows with one INSERT statement
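As an illustrative sketch combining these clauses in a read-only query (the table name and row counts are hypothetical; verify the clause order and availability for your DB2 version):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FirstRowsQueryExample {
    // conn is an already-established DB2 connection
    static void fetchFirstRows(Connection conn) throws Exception {
        String sql = "SELECT ID, SYMBOL FROM MYTABLE "
                   + "FETCH FIRST 20 ROWS ONLY "   // retrieve at most 20 rows
                   + "FOR READ ONLY "              // enables row blocking; no exclusive locks
                   + "OPTIMIZE FOR 20 ROWS";       // prioritize returning the first rows quickly
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString("SYMBOL"));
            }
        }
    }
}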
Disk
A database that would have taken 36 * 1 GB drives a number of years ago can now be placed on one disk. This highlights the database I/O problem: if each 1 GB disk drive can do 80 I/O operations per second, the 36 drives together can do 36 * 80 = 2880 I/O operations per second, but a single 36 GB drive with a seek time of 7 ms can do only about 140 I/O operations per second. While increased disk drive capacity is good news, the smaller number of disks cannot deliver the same I/O throughput.
WLM Pressure Valve
Enable WLM for the correct USER and application name (replace $initialconcurrency):
CREATE SERVICE CLASS SC1;
CREATE WORKLOAD wl1 SESSION_USER('MYUSER') APPLNAME('db2jcc_application') SERVICE CLASS SC1;
GRANT USAGE ON WORKLOAD wl1 TO PUBLIC;
CREATE THRESHOLD "MYDB_ABNORMAL_WORKLOAD_CONCURRENCY" FOR SERVICE CLASS SC1 ENFORCEMENT DATABASE WHEN CONCURRENTDBCOORDACTIVITIES > $initialconcurrency COLLECT ACTIVITY DATA CONTINUE;
When needed, dynamically increase or reduce pressure by changing CONCURRENTDBCOORDACTIVITIES (replace $newvalue):
ALTER THRESHOLD "MYDB_ABNORMAL_WORKLOAD_CONCURRENCY" WHEN CONCURRENTDBCOORDACTIVITIES > $newvalue CONTINUE;
DB2 Configuration
Number of asynchronous page cleaners (NUM_IOCLEANERS) - This parameter controls the number of page cleaners that write changed pages from the buffer pool to disk. You may want to increase this to the number of physical disk drive devices you have. The default is 1.
Enable intra-partition parallelism (INTRA_PARALLEL) - if you have a multi-processor SMP system, setting this parameter to YES may improve performance. The default is NO.
To optimize for INSERT speed at the possible expense of faster table growth, set the DB2MAXFSCRSEARCH registry variable to a small number. To optimize for space reuse at the possible expense of INSERT speed, set DB2MAXFSCRSEARCH to a larger number.
Snapshots
Collecting performance data introduces overhead on the operation of the database. DB2 provides monitor switches to control which information is collected. You can turn these switches on by using the following DB2 commands:
- UPDATE MONITOR SWITCHES USING BUFFERPOOL ON ;
- UPDATE MONITOR SWITCHES USING LOCK ON ;
- UPDATE MONITOR SWITCHES USING SORT ON ;
- UPDATE MONITOR SWITCHES USING STATEMENT ON ;
- UPDATE MONITOR SWITCHES USING TABLE ON ;
- UPDATE MONITOR SWITCHES USING UOW ON ;
You can access the data that the database manager maintains either by taking a snapshot or by using an event monitor.
Use the GET SNAPSHOT command to collect status information and format the output for your use. Some of the most useful options are:
- GET SNAPSHOT FOR DATABASE - Provides general statistics for one or more active databases on the current database partition.
- GET SNAPSHOT FOR APPLICATIONS - Provides information about one or more active applications that are connected to a database on the current database partition.
- GET SNAPSHOT FOR DATABASE MANAGER - Provides statistics for the active database manager instance.
- GET SNAPSHOT FOR LOCKS - Provides information about every lock held by one or more applications connected to a specified database.
- GET SNAPSHOT FOR BUFFERPOOLS - Provides information about buffer pool activity for the specified database.
- GET SNAPSHOT FOR DYNAMIC SQL - Returns a point-in-time picture of the contents of the SQL statement cache for the database.
db2batch
A benchmark tool called db2batch is provided in the sqllib/bin subdirectory of your DB2 installation. This tool can read SQL statements from either a flat file or standard input, dynamically describe and prepare the statements, and return an answer set.
IBM DB2 for z/OS
"First, ensure that your DB2 logs are large enough, are allocated on the fastest volumes you have, and make sure they have optimal CI sizes.
Next, ensure that you have tuned your bufferpools so that the most often-read data is in memory as much as possible. Use ESTOR and hyperpools.
You may want to consider pre-formatting tables that are going to be heavily used. This avoids formatting at runtime.
Ensuring DB2 Tracing Under the DB2 for z/OS Universal Driver is Turned Off:
If the db2.jcc.propertiesFile JVM property has been defined to specify a DB2 JCC properties file to the WebSphere Application Server for z/OS, ensure that the following trace statement in the file is commented out if it is specified:
# jcc.override.traceFile=<file name>
If any of the DB2 Universal JDBC Driver datasources your applications are using are defined with a nonzero traceLevel custom property, use the WebSphere Application Server for z/OS Administrative console to set the traceLevel to zero.
Be sure to define indexes on all your object primary keys. Failure to do so will result in costly tablespace scans.
Ensure that, once your tables are sufficiently populated, you do a re-org to compact the tables. Running RUNSTATS will ensure that the DB2 catalog statistics about table and column sizes and accesses are most current so that the best access patterns are chosen by the optimizer.
Enable dynamic statement caching in DB2. To do this, modify your ZPARMS to say CACHEDYN(YES) MAXKEEPD(16K). Depending on the application, this can make a very significant improvement in DB2 performance. Specifically, it can help JDBC and LDAP queries.
Increase DB2 checkpoint interval settings to a large value. To do this, modify your ZPARMS to include CHKFREQ=xxxxx, where xxxxx is set at a high value when doing benchmarks (e.g. CHKFREQ=16000000). On production systems there are other valid reasons to keep checkpoint frequencies lower, however."
DB2 on Linux on Power
Write I/O to the Transaction Log
Before a transaction is committed, it must be written to the transaction log. This can become a primary bottleneck, which can be lessened by isolating the transaction logs onto their own disks.
Data Compression
If there is available CPU and I/O is the bottleneck, consider data compression with the DB2 Storage Optimization feature.
alter table <table_name> compress yes
alter index <index_name> compress yes
reorg table <table_name> RESETDICTIONARY
reorg indexes all for table <table_name>
runstats on table <table_name> with distribution and detailed indexes all allow read access
db2pd
db2pd -stack all gathers stack traces of DB2: https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.admin.trb.doc/doc/c0054595.html
The files are written to DIAGPATH (db2 get dbm cfg | grep DIAGPATH).
Alternatively, specify the dumpdir and timeout options to change the output directory:
db2pd -stack all dumpdir=~/db2stacks/ timeout=30