A long transaction is a transaction that has not completed by the time it spans the percentage of total logical-log space specified by the LTXHWM configuration parameter. As soon as the server recognizes that a transaction has crossed its high-water mark, it forces the transaction to roll back. Because the rollback also consumes logical-log space, it is theoretically possible for the transaction to consume all available log space before the rollback completes. If that happens, the server hangs; it cannot switch to the next log file because that file contains records from an open transaction. At this point, the only way to free the server is to shut it down and restart it with DYNAMIC_LOGS set to 1, 2, or 3.
To prevent long-transaction hangs, take the following precautions:
The DYNAMIC_LOGS configuration parameter allows the server to dynamically expand the logical log file. When combined with proper configuration of the long transaction high water marks, this feature eliminates long transaction hangs. The following sections describe the possible values.
When the DYNAMIC_LOGS configuration parameter is set to 3, each time the server switches to the next log file, it checks whether the next active log contains records from an open transaction. If an open transaction is found, the server automatically adds a log file between the current and next log files. It then triggers an alarm and writes a message to the log.
The server automatically determines the dbspace for the log file using the following order of precedence:
The server searches for an extent large enough for the new log file, using a size computed by averaging the coserver's largest and smallest log files. If contiguous space of this size cannot be found using the order of precedence above, the size is halved and the server searches again. The minimum log file size is 200 kilobytes.
If the server cannot find 200 kilobytes of contiguous space, an alarm is triggered requesting that a chunk or dbspace be added.
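The size search described above can be sketched as follows. This is an illustrative model, not server internals; the function name and the list of candidate free extents are assumptions made for the example.

```python
# Illustrative sketch (not server internals): choose a size for a new log
# file by averaging the largest and smallest existing log files, then
# halve the request until a contiguous extent fits, down to a 200 KB floor.

MIN_LOG_KB = 200  # minimum log file size, per the documentation

def choose_log_file_size(log_sizes_kb, free_extents_kb):
    """Return a size (KB) that fits some contiguous free extent, or None."""
    target = (max(log_sizes_kb) + min(log_sizes_kb)) // 2
    while target >= MIN_LOG_KB:
        if any(extent >= target for extent in free_extents_kb):
            return target
        target //= 2  # halve and search again
    # fall back to the 200 KB minimum if any extent can still hold it
    if any(extent >= MIN_LOG_KB for extent in free_extents_kb):
        return MIN_LOG_KB
    return None  # would trigger the "add a chunk or dbspace" alarm

print(choose_log_file_size([10000, 2000], [1500, 3000]))  # prints 3000
```

Here the initial request of 6000 KB (the average of 10000 and 2000) does not fit either free extent, so it is halved to 3000 KB, which fits the second extent.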
When the DYNAMIC_LOGS configuration parameter is set to 2, log behavior is the same as in fully automatic mode, except that log files are not automatically added to non-critical dbspaces. In this mode, the server uses all available space in existing critical dbspaces first. If no space is available in critical dbspaces, the server blocks until a log file is added manually or sufficient space is added to a critical dbspace.
This prevents non-critical dbspaces from being accidentally converted to critical dbspaces.
With DYNAMIC_LOGS set to 1, when the server detects that a new logical log file is needed, an alarm is triggered and an error message is written to the log requesting a new log file. The server waits until a DBA adds a new log file manually.
The new log file must be positioned in the log list between the current and next log file. This is done using the INLINE option of the onutil CREATE LOG command. For more information on using this command to add logical logs, see the Administrator's Reference.
When this configuration parameter is in manual mode, dynamic logs can still be automated using a custom alarm program. For more information on using custom alarm programs, see the Administrator's Reference.
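A custom alarm program for manual mode might look like the following sketch. The onutil CREATE LOG command with the INLINE option is the documented way to add the log; everything else here, including the event-class number, the dbspace name, and the exact command-line form, is a hypothetical assumption for illustration. Check the Administrator's Reference for the real alarm interface and onutil syntax.

```python
# Hypothetical alarm-program sketch: when the server raises the
# "log file required" event, build an onutil command that adds a log
# inline. The event-class ID, dbspace name, and command-line form are
# assumptions, NOT documented syntax.

def handle_alarm(event_class, dbspace="logdbs", size_kb=10000):
    LOG_REQUIRED = 27  # assumed event class for "log file required"
    if event_class != LOG_REQUIRED:
        return None  # ignore unrelated events
    # The INLINE option positions the new log between the current and
    # next log files, as required to resolve a long-transaction wait.
    return f'onutil -c "CREATE LOG IN {dbspace} SIZE {size_kb} INLINE"'

print(handle_alarm(27))
```

A real alarm program would execute the returned command (for example, via a shell) rather than merely constructing it, and would log the outcome.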
With DYNAMIC_LOGS set to 0, when the server detects that a new logical log file is needed, an alarm is triggered, but the server does not add a log file automatically or wait for one to be added manually.
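Taken together, the four modes are selected with a single ONCONFIG entry. The fragment below is illustrative; the comments summarize the behavior described in the preceding sections.

```
DYNAMIC_LOGS 3   # 0 = dormant: alarm only; no log added, no wait
                 # 1 = manual: alarm, then wait for the DBA to add a log
                 # 2 = automatic, but critical dbspaces only
                 # 3 = fully automatic: server adds log files as needed
```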
Several factors influence how fast the logical log fills. It is difficult to know exactly which factor is the most important for a given instance of the database server, so you need to use your own judgment to estimate how quickly your logical log fills and how to prevent long-transaction conditions. Consider these factors:
A smaller logical log fills faster than a larger logical log. If you need to make the logical log larger, you can add another logical-log file, as explained in Adding a Logical-Log File or Logslice.
The more logical-log records written to the logical log, the faster it fills. If databases that your database server manages use transaction logging, transactions against those databases fill the logical log faster than transactions against databases without transaction logging.
When you use logging tables (STANDARD or OPERATIONAL), the logical log fills faster than when you use nonlogging tables. For more information, refer to Logging and Nonlogging Tables.
As explained in Unbuffered Transaction Logging, databases that use unbuffered transaction logging fill the logical log faster than databases that use buffered transaction logging.
The sizes of the logical-log records vary, depending on both the processing operation and the database server environment. In general, the longer the data rows, the larger the logical-log records. The logical log contains images of rows that have been inserted, updated, or deleted. Also, updates can use up to twice as much space as inserts and deletes because they might contain both before-images and after-images. Inserts store only the after-image and deletes store only the before-image.
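As a rough illustration of why updates can consume up to twice the log space of inserts or deletes, consider the following sketch. The fixed header overhead and the one-image-per-operation layout are simplifying assumptions, not the server's actual record format.

```python
# Simplified model of logical-log record sizes: inserts log an
# after-image, deletes a before-image, and updates may log both.
# The 40-byte header and exact layout are illustrative assumptions.

def log_record_size(operation, row_len, header=40):
    """Estimate the log space (bytes) consumed by one row operation."""
    images = 2 if operation == "update" else 1  # before- AND after-image
    return header + images * row_len

print(log_record_size("insert", 500))  # prints 540
print(log_record_size("update", 500))  # prints 1040
```

The longer the rows, the more the update-to-insert ratio approaches two, which is why update-heavy workloads fill the logical log fastest.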
The frequency of rollbacks affects the rate at which the logical log fills. More rollbacks fill the logical log faster. The rollbacks themselves require logical-log file space although the rollback records are small. In addition, rollbacks increase the activity in the logical log.
Several factors influence when transactions close. Be aware of these factors so that you can prevent long-transaction problems:
The duration of a transaction might be beyond your control. For example, a client that does not write many logical-log records might cause a long transaction if the users permit transactions to remain open for long periods of time. (For example, a user who is running an interactive application might leave a terminal to go to lunch part of the way through a transaction.)
The larger the logical-log space, the longer a transaction can remain open without a long-transaction condition developing. However, a large logical log by itself does not ensure that long transactions do not develop. Application designers should consider the transaction-duration issue, and users should be aware that leaving transactions open can be detrimental.
The amount of CPU activity can affect the ability of the database server to complete the transaction. Repeated writes to the logical-log file increase the amount of CPU time that the database server needs to complete the transaction. Increased logical-log activity can also imply increased contention for logical-log locks and latches.
The database server alters processing at two critical points to manage the long-transaction condition. To tune both points, you can set values in the ONCONFIG file with the server off-line, or you can set them with the onutil SET command when the server is quiescent or on-line.
The first critical point is the long-transaction high-water mark (LTXHWM). When a single transaction spans a percentage of the total logical log space represented by LTXHWM, the database server directs the thread to begin to roll back the transaction. More than one transaction can be rolled back if more than one long transaction exists.
The transaction rollback itself generates logical-log records, however, and as other processes continue writing to the logical-log file, the logical log continues to fill.
The second critical point is the exclusive-access, long-transaction high-water mark (LTXEHWM). When the aborting long transaction has spanned LTXEHWM percent of the total log space, the database server dramatically reduces log-record generation. Most threads are denied access to the logical log. Only threads that are currently rolling back transactions (including the long transaction) and threads that are currently writing COMMIT records are allowed access to the logical log. Restricting access to the logical log preserves as much space as possible for rollback records that are being written by the user threads that are rolling back transactions.
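Both high-water marks translate into absolute thresholds as simple percentages of total logical-log space. For example, with the default values of 50 and 60 (the 2000 MB total log size below is an arbitrary assumption for illustration):

```python
# The high-water marks are percentages of total logical-log space.
# With the defaults LTXHWM=50 and LTXEHWM=60 and, say, 2000 MB of logs:

def hwm_thresholds(total_log_mb, ltxhwm=50, ltxehwm=60):
    """Return (rollback threshold, exclusive-access threshold) in MB."""
    rollback_at = total_log_mb * ltxhwm / 100    # rollback begins here
    exclusive_at = total_log_mb * ltxehwm / 100  # log access restricted here
    return rollback_at, exclusive_at

print(hwm_thresholds(2000))  # prints (1000.0, 1200.0)
```

In this example, a single transaction spanning 1000 MB of log space triggers its rollback, and the gap up to 1200 MB is the headroom reserved for writing the rollback records before most other threads are locked out of the log.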
If the long transactions cannot be rolled back before all logical-log space is consumed, the database server hangs. This situation can occur only if the Dynamic Log Files feature is in dormant mode (DYNAMIC_LOGS set to 0). If a long-transaction hang does occur, you must shut down the server, set the DYNAMIC_LOGS parameter to 1, 2, or 3, and restart the server. Once the server is on-line again, consider lowering your configured values for LTXHWM and LTXEHWM and running the server with the Dynamic Log Files feature turned on.
The default values for the configuration parameters LTXHWM and LTXEHWM are 50 and 60, respectively. These values should eliminate the risk of a long transaction having too little log space in which to roll back. Database server initialization emits a warning if your ONCONFIG file contains values greater than 50 and 60 for these parameters while the Dynamic Log Files feature is in dormant mode. To eliminate these warnings, reduce your high-water-mark parameters to 50 and 60, respectively, or set DYNAMIC_LOGS to 1, 2, or 3. If your log space is tuned so finely that your LTXHWM percentage represents exactly what your longest transaction requires, you will need to increase your log space to compensate for the difference between your current LTXHWM value and the recommended value of 50.
For more information on LTXHWM, LTXEHWM, and DYNAMIC_LOGS, see the chapter on configuration parameters in the IBM Informix: Administrator's Reference.