
Working With Available Hardware Resources

Three primary hardware components make up a database server system: memory, disk I/O, and CPU.

This section provides general information and suggestions about configuring and tuning these hardware components as background for the database server configuration guidelines.

Important:
Information in this section is not comprehensive; it is intended only to help you start to analyze your system for database server use. Although hardware decisions might be made for you by the Information Services (IS) department, this information can help you discuss hardware configuration issues with IS.

Memory Resources

System memory is difficult to tune, but it is relatively easy to find out whether it is a limiting resource. For information about determining whether memory is a limiting resource in a database server and monitoring large users of memory, see Monitoring Table and Fragment Use.

Additional memory does not help a compute-bound system. Database servers store accessed data in memory to avoid disk reads and writes. If you have an I/O or processing problem, adding memory does not improve performance.

The database server provides several features that help you make sure that memory is used efficiently by queries and transactions.

I/O Resources

For good I/O performance, plan for the workload that you expect and understand the difference between bandwidth and throughput. Increasing bandwidth maximizes the number of megabytes of data processed per second at the cost of the number of individual reads and writes (I/O operations) per second. Increasing throughput maximizes the number of disk reads and writes per second at the cost of megabytes per second.

DSS workloads are bandwidth intensive because queries typically scan large amounts of data sequentially, while Web and OLTP workloads are throughput intensive because transactions typically perform many small, random reads and writes.

Although you probably cannot provide peak performance for both kinds of database applications, analysis of client application patterns can help you decide on a reasonable compromise. Consider trade-offs carefully, however.
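The bandwidth-versus-throughput trade-off reduces to simple arithmetic: sustained bandwidth is roughly the I/O rate multiplied by the I/O size. The following sketch is illustrative only; the function name and workload numbers are hypothetical, not measurements from any particular system.

```python
def bandwidth_mb_per_sec(iops, io_size_kb):
    """Sustained bandwidth implied by a given I/O rate and I/O size."""
    return iops * io_size_kb / 1024.0

# OLTP-style workload: many small random I/Os (hypothetical numbers)
print(bandwidth_mb_per_sec(10_000, 8))   # 10,000 IOPS at 8 KB -> 78.125 MB/s

# DSS-style workload: fewer, larger sequential I/Os (hypothetical numbers)
print(bandwidth_mb_per_sec(600, 256))    # 600 IOPS at 256 KB -> 150.0 MB/s
```

As the numbers suggest, a disk subsystem tuned for a high I/O rate can still deliver less raw bandwidth than one tuned for large sequential transfers, which is why the two workload types favor different configurations.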

When you set up disks for Extended Parallel Server database servers, consider the following suggestions:

For information about monitoring and adjusting I/O, see Tuning I/O for Tables and Indexes.

Disk Arrays

Most large database systems use disk arrays that allow hardware mirroring and transparent failover for disk failures. Use of disk arrays means, however, that it is no longer possible to place a table at a specific offset on a specific physical disk or even to identify the physical disk where a table fragment resides.

To understand how to use RAID and fragmentation, consider the following RAID terms:

The database server cannot distinguish raw disk allocations on RAID virtual disks from allocations on traditional disks. The system administrator and database administrator must work together to ensure that logical volumes defined for use by the database server are properly aligned across the physical disk members.

The simplest method is to specify a volume size that is a multiple of the RAID block size and a starting address that is also a multiple of the block size.

Figure 1. Logical Volumes of Stripe Sets in a RAID System

For example, in the RAID system shown in Figure 1, all logical units (LUNs) or volumes are made up of blocks of the same size, so that the offset of each block in the volume is a multiple of the chunk size. To take advantage of I/O efficiency and fragment elimination, create chunks for database server storage spaces so that they match the RAID block size, and use the appropriate multiple of the block size as the offset into the logical volume. For information about creating chunks for storage spaces, see Planning Storage Spaces to Support Fragmentation Schemes.
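The alignment rule described above can be expressed as a simple check: both the starting offset and the size of a logical volume should be multiples of the RAID block size. The function names and the 64 KB block size in this sketch are hypothetical, chosen only to illustrate the arithmetic.

```python
def aligned(offset_kb, size_kb, raid_block_kb):
    """True if a volume's starting offset and size are both
    multiples of the RAID block (stripe-unit) size."""
    return offset_kb % raid_block_kb == 0 and size_kb % raid_block_kb == 0

def next_aligned_offset(offset_kb, raid_block_kb):
    """Round a starting offset up to the next RAID block boundary."""
    return -(-offset_kb // raid_block_kb) * raid_block_kb

# Hypothetical 64 KB RAID block size
print(aligned(128, 1024, 64))        # True: both are multiples of 64 KB
print(next_aligned_offset(100, 64))  # 128: next 64 KB boundary after 100 KB
```

A volume that fails this check spans block boundaries unevenly, so a single database server I/O can touch more physical disks than necessary.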

When placing table fragments on RAID devices, consider the logical unit level instead of the physical level. Place fragments of the same table on separate logical units, and also place fragments of tables that are commonly joined on separate logical units.

Optimizing I/O Resources

To maximize I/O throughput, try to achieve the following goals:

CPU Resources

Although you cannot tune a CPU, you can maximize the efficiency of CPUs in the following ways:

Optimal Number of CPUs per Coserver

Configure a database server in which CPUs are divided optimally among coservers.

The database server is designed to use coservers to encapsulate functionality for efficiency. If you have a large SMP system, benchmarks and customer experience generally suggest creating more than one coserver. For DSS applications, an optimal number of CPUs for each coserver is between four and eight. For an SMP system with 24 CPUs, for example, you will probably get the best performance if you create three or four coservers. However, for OLTP applications, you might create only one or two large coservers.
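Under the guideline above (four to eight CPUs per coserver for DSS workloads), the coserver count for a given SMP system can be estimated with simple division. The helper below is a hypothetical sketch for planning purposes, not a database server utility.

```python
def suggested_coservers(total_cpus, cpus_per_coserver=6):
    """Estimate a DSS coserver count by dividing the SMP system's CPUs
    into groups of roughly the recommended size (four to eight CPUs;
    6 is used here as a midpoint default)."""
    return max(1, round(total_cpus / cpus_per_coserver))

print(suggested_coservers(24))      # 4 coservers of 6 CPUs each
print(suggested_coservers(24, 8))   # 3 coservers of 8 CPUs each
```

For the 24-CPU example in the text, the estimate lands on three or four coservers, matching the guidance above; for OLTP, you would ignore this division and configure one or two large coservers instead.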

Optimizing CPU Use

Because optimization of other resources usually increases the CPU load, examine CPU use after other resource use is tuned. Focus on maximizing CPU use in the following ways:

Use the UNIX sar utility to report CPU utilization information. CPU utilization is the proportion of time that the CPU is busy in either user or system state; generally, the CPU is idle only when there is no work to perform or when all processes are waiting for I/O operations to complete. CPU utilization is the CPU busy time divided by the elapsed time. If the CPU is busy for 90 seconds of a 100-second interval, CPU utilization is 90 percent.
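The utilization formula in the paragraph above is just busy time divided by elapsed time, expressed as a percentage. A minimal sketch of the calculation (the function name is hypothetical):

```python
def cpu_utilization(busy_seconds, elapsed_seconds):
    """CPU utilization as defined above: busy time / elapsed time,
    expressed as a percentage."""
    return 100.0 * busy_seconds / elapsed_seconds

print(cpu_utilization(90, 100))  # 90.0 percent, matching the example above
```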

For information about monitoring compute resources, see your operating system documentation.
