Checkpoints, logging, and page cleaning are necessary to maintain database consistency. A direct trade-off exists between the frequency of checkpoints or the size of the logical logs and the time that it takes to recover the database in the event of a failure. So a major consideration when you attempt to reduce the overhead for these activities is the delay that you can accept during recovery.
The following configuration parameters affect logging: LOGBUFF, PHYSBUFF, LOGFILES, LOGSIZE, DYNAMIC_LOGS, LTXHWM, and LTXEHWM.
The LOGBUFF and PHYSBUFF configuration parameters affect logging I/O activity because they specify the respective sizes of the logical-log and physical-log buffers that are in shared memory. The size of these buffers determines how quickly they fill and therefore how often they need to be flushed to disk.
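For example, onconfig entries of the following form set the two buffer sizes; the values shown here are hypothetical, and both parameters are specified in kilobytes:

LOGBUFF 64
PHYSBUFF 64

Larger buffers fill more slowly and therefore are flushed less often, at the cost of additional shared memory.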
The LOGFILES parameter specifies the number of logical-log files.
If all your logical-log files are the same size, you can calculate the total space allocated to the logical-log files as follows:
total logical log space = LOGFILES * LOGSIZE
If you add logical-log files that are not the size specified by LOGSIZE, you cannot use the LOGFILES * LOGSIZE expression to calculate the size of the logical log. Instead, you need to add the sizes for each individual log file on disk.
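For example, with hypothetical values of six log files of 10,000 KB each, a quick shell calculation confirms the total:

# Hypothetical values: 6 logical-log files, each 10000 KB (LOGSIZE)
LOGFILES=6
LOGSIZE=10000
echo "$((LOGFILES * LOGSIZE)) KB"    # 60000 KB of total logical-log space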
Use the onstat -l utility to monitor logical-log files.
Use the LOGSIZE parameter to set the size of each logical log file. It is difficult to predict how much logical-log space your database server system requires until it is fully in use.
Choose a log size based on how much logging activity occurs and the amount of risk in case of catastrophic failure. If you cannot afford to lose more than an hour's worth of data, create many small log files that each hold an hour's worth of transactions. Turn on continuous-log backup. Small logical-log files fill sooner, which means more frequent logical-log backups.
If your system is stable with high logging activity, choose larger logs to improve performance. Continuous-log backups occur less frequently with large log files. Also consider the maximum transaction rates and speed of the backup devices. Do not let the whole logical log fill. Turn on continuous-log backup and leave enough room in the logical logs to handle the longest transactions.
The backup process can hinder transaction processing that involves data located on the same disk as the logical-log files. If enough logical-log disk space is available, however, you can wait for periods of low user activity before you back up the logical-log files.
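If you use ontape as your backup tool, one way to turn on continuous-log backup is from the command line; this is a sketch and assumes that the log backup device (LTAPEDEV) is already configured:

# Back up each logical-log file as it fills; the command runs until you
# interrupt it, so start it from a dedicated session.
ontape -c

If you use ON-Bar with a storage manager, the equivalent continuous-backup command is onbar -b -l -C.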
You can use the following formula to obtain an initial estimate for LOGSIZE in kilobytes:
LOGSIZE = (connections * maxrows * rowsize) / 1024 / LOGFILES
To obtain a better estimate, execute the onstat -u command during peak activity periods. The last line of the onstat -u output contains the maximum number of concurrent connections.
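To make the formula concrete, here is a small shell sketch with hypothetical peak values; connections comes from the last line of onstat -u, while maxrows and rowsize (taken here as the largest number of rows updated in a single transaction and the average row size in bytes) come from knowledge of your application:

CONNECTIONS=200   # maximum concurrent connections (last line of onstat -u)
MAXROWS=500       # largest number of rows updated in a single transaction
ROWSIZE=512       # average row size in bytes
LOGFILES=10
echo "$((CONNECTIONS * MAXROWS * ROWSIZE / 1024 / LOGFILES)) KB per log file"   # 5000 KB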
You need to adjust the size of the logical log when your transactions include simple large objects or smart large objects, as the following sections describe.
You also can increase the amount of space devoted to the logical log by adding another logical-log file, as your IBM Informix: Dynamic Server Administrator's Guide explains.
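For example, assuming a hypothetical dbspace named logsdbs and a 10000 KB log size, you could add a logical-log file with a command of the following form:

# Add one 10000 KB logical-log file in the logsdbs dbspace (hypothetical values).
onparams -a -d logsdbs -s 10000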
To obtain better overall performance for applications that perform frequent updates of TEXT or BYTE data in blobspaces, reduce the size of the logical log. Blobpages cannot be reused until the logical log to which they are allocated is backed up. When TEXT or BYTE data activity is high, the performance impact of more frequent checkpoints is balanced by the higher availability of free blobpages.
When you use volatile blobpages in blobspaces, smaller logs can improve access to simple large objects that must be reused. Simple large objects cannot be reused until the log in which they are allocated is flushed to disk. In this case, you can justify the cost in performance because those smaller log files are backed up more frequently.
If you plan to log smart-large-object user data, you must ensure that the log size is considerably larger than the amount of data being written. Smart-large-object metadata is always logged even if the smart large objects are not logged.
Use the following guidelines when you log smart large objects:
The dynamic log file allocation feature prevents hangs caused by rollbacks of a long transaction because the database server does not run out of log space. Dynamic log allocation allows you to do the following actions:
The default value for the DYNAMIC_LOGS configuration parameter is 2, which means that the database server automatically allocates a new logical log file after the current log file when it detects that the next log file contains an open transaction. The database server automatically checks whether the log after the current log still contains an open transaction at the following times: immediately after it switches to a new log file; at the beginning of the transaction-cleanup phase of logical recovery; and after any switch to a new log file that occurs during transaction cleanup, because it is writing log records for the rollback at that point. Logical recovery happens at the end of fast recovery and at the end of a cold restore or roll forward. For more information on the phases of fast recovery, see your IBM Informix: Dynamic Server Administrator's Guide.
When you use the default value of 2 for DYNAMIC_LOGS, the database server determines the location and size of the new logical log for you:
If you want to control the location and size of the additional log file, set DYNAMIC_LOGS to 1. When the database server switches log files, it still checks whether the next active log contains an open transaction. If it does find an open transaction in the next log to be active, it issues alarm event 27, writes a warning to the online message log, and waits for you to add a new log file manually.
You can write a script that runs when alarm event 27 occurs and executes onparams -a -i with the location you want to use for the new log. The script can first run the onstat -d command to check for adequate space and then pass onparams a dbspace that has enough room. You must use the -i option to add the new log right after the current log file, as shown in the sketch below.
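The following is a minimal sketch of such an alarm script. It assumes the usual ALARMPROGRAM calling convention (severity, event class ID, class message, specific message as the first four arguments) and uses a hypothetical dbspace name (logsdbs) and log size; verify the argument order against the alarm program template shipped with your server before relying on it.

#!/bin/sh
# Hypothetical ALARMPROGRAM handler for event class 27 (log file required).
# Assumed argument order: $1 severity, $2 event class ID, $3 class message,
# $4 specific message.
if [ "${2:-0}" -eq 27 ]; then
    # Save current dbspace usage for review; a production script would parse
    # this output to choose a dbspace with enough free space.
    onstat -d > /tmp/onstat_d.$$ 2>&1
    # Add a 10000 KB log file in the (hypothetical) logsdbs dbspace;
    # the -i option inserts it immediately after the current log file.
    onparams -a -d logsdbs -s 10000 -i
fi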
If you set DYNAMIC_LOGS to 0, the database server still checks whether the next active log contains an open transaction when it switches log files. If it does find an open transaction in the next log to be active, it issues the following warning:
WARNING: The oldest logical log file (%d) contains records from an open transaction (0x%p), but the Dynamic Log Files feature is turned off.
With the dynamic log file feature, the long transaction high watermarks are no longer as critical as they were in versions prior to Version 9.3, because the database server does not run out of log space unless you use up the physical disk space available on the system. Therefore, in Version 9.4, LTXHWM and LTXEHWM are not in the onconfig.std file and default to the following values, depending on the value of the DYNAMIC_LOGS configuration parameter:
When you use the default value of 2 for DYNAMIC_LOGS or set DYNAMIC_LOGS to 1, the long transaction high watermark default values are:
LTXHWM 80
LTXEHWM 90
The default values for LTXHWM and LTXEHWM are higher than in previous versions of the database server because the database server adds logical logs without the need to restart. Because the database server does not run out of logs, other users can still access the log during the rollback of a long transaction.
If you set DYNAMIC_LOGS to 0, the default values are 50 for LTXHWM and 60 for LTXEHWM.
The LTXHWM parameter still indicates how full the logical log is when the database server starts to check for a possible long transaction and to roll it back. LTXEHWM still indicates the point at which the database server suspends new transaction activity to locate and roll back a long transaction. These events should be rare, but if they occur, they can indicate a serious problem within an application.
Under normal operations, use the default values for LTXHWM and LTXEHWM. However, you might want to change these defaults in two situations. If you want other transactions to continue update activity (which requires access to the log) while a long transaction is rolled back, increase the value of LTXEHWM to raise the point at which the long-transaction rollback gets exclusive access to the log. If you run scheduled transactions of unknown or very large size, such as logged loads, increase the value of LTXHWM so that the transaction has a chance to complete before it reaches the high watermark.