
Configuration Parameters That Affect Logging

Checkpoints, logging, and page cleaning are necessary to maintain database consistency. A direct trade-off exists between the frequency of checkpoints or the size of the logical logs and the time that it takes to recover the database in the event of a failure. Therefore, a major consideration when you attempt to reduce the overhead of these activities is the delay that you can accept during recovery.

The following configuration parameters affect logging:

LOGBUFF and PHYSBUFF

The LOGBUFF and PHYSBUFF configuration parameters affect logging I/O activity because they specify the respective sizes of the logical-log and physical-log buffers that are in shared memory. The size of these buffers determines how quickly they fill and therefore how often they need to be flushed to disk.
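As a rough illustration of that relationship, flush frequency scales inversely with buffer size. The write rate below is an assumed workload figure, not an Informix parameter:

```python
# Rough estimate of how often a log buffer flushes, assuming log records
# arrive at a steady rate. Both figures are illustrative, not measured.

def flushes_per_second(log_write_rate_kb: float, logbuff_kb: float) -> float:
    """Buffer flushes per second at a steady logging rate."""
    return log_write_rate_kb / logbuff_kb

# 512 KB/s of log records with a 64 KB buffer -> 8 flushes per second;
# doubling the buffer to 128 KB halves the flush frequency.
print(flushes_per_second(512, 64))   # 8.0
print(flushes_per_second(512, 128))  # 4.0
```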

LOGFILES

The LOGFILES parameter specifies the number of logical-log files.

Determining the Number of Logical Log Files

If all your logical-log files are the same size, you can calculate the total space allocated to the logical-log files as follows:

total logical log space = LOGFILES * LOGSIZE

If you add logical-log files that are not the size specified by LOGSIZE, you cannot use the LOGFILES * LOGSIZE expression to calculate the total logical-log space. Instead, sum the sizes of the individual log files on disk.
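The two cases can be sketched as follows; the file counts and sizes are illustrative:

```python
# Total logical-log space: the uniform case (LOGFILES * LOGSIZE) versus
# the mixed case, where the individual log file sizes must be summed.

def total_log_space_uniform(logfiles: int, logsize_kb: int) -> int:
    """All log files are the size specified by LOGSIZE."""
    return logfiles * logsize_kb

def total_log_space_mixed(log_sizes_kb) -> int:
    """Log files were added with sizes that differ from LOGSIZE."""
    return sum(log_sizes_kb)

print(total_log_space_uniform(6, 10000))             # 60000 (KB)
print(total_log_space_mixed([10000, 10000, 25000]))  # 45000 (KB)
```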

Use the onstat -l command to monitor logical-log files.

LOGSIZE

Use the LOGSIZE parameter to set the size of each logical log file. It is difficult to predict how much logical-log space your database server system requires until it is fully in use.

Choose a log size based on how much logging activity occurs and the amount of risk in case of catastrophic failure. If you cannot afford to lose more than an hour's worth of data, create many small log files that each hold an hour's worth of transactions. Turn on continuous-log backup. Small logical-log files fill sooner, which means more frequent logical-log backups.

If your system is stable with high logging activity, choose larger logs to improve performance. Continuous-log backups occur less frequently with large log files. Also consider the maximum transaction rates and speed of the backup devices. Do not let the whole logical log fill. Turn on continuous-log backup and leave enough room in the logical logs to handle the longest transactions.
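As a sketch of the hour's-worth sizing rule above, you can derive a log size from your transaction rate. The transaction rate and the average log bytes per transaction are assumed workload figures that you would measure on your own system; they are not Informix parameters:

```python
# Size a log file to hold roughly one hour of transactions, so that a
# continuous-log backup captures no more than an hour's worth of data.

def logsize_for_one_hour(txn_per_hour: int, log_bytes_per_txn: int) -> int:
    """Return a log size in KB that holds about one hour of log records."""
    return (txn_per_hour * log_bytes_per_txn) // 1024

# 20,000 transactions per hour, averaging 512 bytes of log records each.
print(logsize_for_one_hour(20000, 512))  # 10000 (KB)
```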

The backup process can hinder transaction processing that involves data located on the same disk as the logical-log files. If enough logical-log disk space is available, however, you can wait for periods of low user activity before you back up the logical-log files.

Estimating Logical-Log Size When Logging Dbspaces

You can use the following formula to obtain an initial estimate for LOGSIZE in kilobytes:

LOGSIZE = ((connections * maxrows * rowsize) / 1024) / LOGFILES 
connections
is the maximum number of connections for all network types specified in the sqlhosts file or registry by one or more NETTYPE parameters. If you configured more than one connection by setting multiple NETTYPE configuration parameters in your configuration file, sum the users fields for each NETTYPE parameter, and substitute this total for connections in the preceding formula.
maxrows
is the largest number of rows to be updated in a single transaction.
rowsize
is the average size of a row in bytes. You can calculate rowsize by adding up the length (from the syscolumns system catalog table) of the columns in a row.
1024
is a necessary divisor because you specify LOGSIZE in kilobytes.
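The estimate can be written out directly. The variables follow the definitions above; the values in the example are illustrative, not recommendations:

```python
# Initial LOGSIZE estimate in KB, following the formula in the text:
# LOGSIZE = ((connections * maxrows * rowsize) / 1024) / LOGFILES

def estimate_logsize_kb(connections: int, maxrows: int,
                        rowsize: int, logfiles: int) -> int:
    """connections: max connections across all NETTYPE entries.
    maxrows: largest number of rows updated in one transaction.
    rowsize: average row size in bytes (from syscolumns lengths).
    """
    return ((connections * maxrows * rowsize) // 1024) // logfiles

# 100 connections, 1,000-row transactions, 512-byte rows, 10 log files.
print(estimate_logsize_kb(100, 1000, 512, 10))  # 5000 (KB)
```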

To obtain a better estimate, execute the onstat -u command during peak activity periods. The last line of the onstat -u output contains the maximum number of concurrent connections.

You need to adjust the size of the logical log when your transactions include simple large objects or smart large objects, as the following sections describe.

You also can increase the amount of space devoted to the logical log by adding another logical-log file, as the IBM Informix Dynamic Server Administrator's Guide explains.

Estimating the Logical-Log Size When Logging Simple Large Objects

To obtain better overall performance for applications that perform frequent updates of TEXT or BYTE data in blobspaces, reduce the size of the logical log. Blobpages cannot be reused until the logical log to which they are allocated is backed up. When TEXT or BYTE data activity is high, the performance impact of more frequent checkpoints is balanced by the higher availability of free blobpages.

When you use volatile blobpages in blobspaces, smaller logs can improve access to simple large objects that must be reused. Simple large objects cannot be reused until the log in which they are allocated is flushed to disk. In this case, you can justify the cost in performance because those smaller log files are backed up more frequently.

Estimating Logical-Log Size When Logging Smart Large Objects (IDS)

If you plan to log smart-large-object user data, you must ensure that the log size is considerably larger than the amount of data being written. Smart-large-object metadata is always logged even if the smart large objects are not logged.

Use the following guidelines when you log smart large objects:

DYNAMIC_LOGS

The dynamic log file allocation feature prevents the hangs that rolling back a long transaction can otherwise cause, because the database server does not run out of log space. Dynamic log allocation allows you to do the following actions:

The default value for the DYNAMIC_LOGS configuration parameter is 2, which means that the database server automatically allocates a new logical log file after the current log file when it detects that the next log file contains an open transaction. The database server automatically checks if the log after the current log still contains an open transaction at the following times:

When you use the default value of 2 for DYNAMIC_LOGS, the database server determines the location and size of the new logical log for you:

If you want to control the location and size of the additional log file, set DYNAMIC_LOGS to 1. When the database server switches log files, it still checks if the next active log contains an open transaction. If it does find an open transaction in the next log to be active, it does the following actions:

You can write a script that runs when alarm event 27 occurs and executes onparams -a -i with the location you want to use for the new log. Your script can also run the onstat -d command to check for adequate space and then execute onparams -a -i with a location that has enough space. You must use the -i option so that the new log is added immediately after the current log file.
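A minimal sketch of such an alarm handler, in Python for illustration. The dbspace name and log size are placeholders; the -d and -s options follow standard onparams usage for naming the dbspace and size of the new log, and only the -a and -i options are confirmed by the text above:

```python
# Sketch of an alarm-program handler for event class 27 ("log file
# required"): add a new logical log right after the current one.
import subprocess
import sys

def build_add_log_cmd(dbspace: str, size_kb: int):
    """onparams -a adds a logical log; -i inserts it after the current
    log; -d and -s name the dbspace and size (placeholder values)."""
    return ["onparams", "-a", "-d", dbspace, "-s", str(size_kb), "-i"]

def handle_alarm(event_class: int, dbspace: str = "logdbs",
                 size_kb: int = 10000) -> None:
    if event_class == 27:  # act only on "log file required"
        subprocess.run(build_add_log_cmd(dbspace, size_kb), check=True)

if __name__ == "__main__" and len(sys.argv) > 1:
    handle_alarm(int(sys.argv[1]))
```

In practice the handler would first parse onstat -d output to pick a dbspace with enough free space, as the text suggests, before building the command.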

If you set DYNAMIC_LOGS to 0, the database server still checks whether the next active log contains an open transaction when it switches log files. If it does find an open transaction in the next log to be active, it issues the following warning:

WARNING: The oldest logical log file (%d) contains records
from an open transaction (0x%p), but the Dynamic Log 
Files feature is turned off.

LTXHWM and LTXEHWM

With the dynamic log file feature, the long-transaction high watermarks are no longer as critical as they were in versions earlier than Version 9.3, because the database server does not run out of log space unless you use up the physical disk space available on the system. Therefore, in Version 9.4, LTXHWM and LTXEHWM are not in the onconfig.std file and default to the following values, depending on the value of the DYNAMIC_LOGS configuration parameter:

The LTXHWM parameter still indicates how full the logical log is when the database server starts to check for a possible long transaction and to roll it back. LTXEHWM still indicates the point at which the database server suspends new transaction activity to locate and roll back a long transaction. These events should be rare, but if they occur, they can indicate a serious problem within an application.
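To see what the watermarks mean in absolute terms, you can convert the percentages to kilobytes of used log space. The total log space and the percentages below are illustrative values, not defaults stated in the text:

```python
# Convert a watermark percentage into the amount of used logical-log
# space (KB) at which the threshold trips. Figures are illustrative.

def watermark_kb(total_log_kb: int, watermark_pct: int) -> int:
    return total_log_kb * watermark_pct // 100

total = 60000  # total logical-log space in KB (illustrative)
print(watermark_kb(total, 80))  # e.g. LTXHWM=80: rollback starts at 48000 KB used
print(watermark_kb(total, 90))  # e.g. LTXEHWM=90: new activity suspends at 54000 KB
```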

Important:
It is recommended that you keep these default values for LTXHWM and LTXEHWM.

Under normal operations, use the default values for LTXHWM and LTXEHWM. However, you might want to change these default values for one of the following reasons:
