
High Performance

The database server achieves high performance through the following mechanisms:

*   Dynamic shared-memory management
*   Direct disk access
*   Dynamic thread allocation
*   Fragmentation and parallelism

The following sections explain each of these mechanisms.

Dynamic Shared-Memory Management

All applications that use a single instance of a database server share data in the memory space of the database server. After one application reads data from a table, other applications can access whatever data is already in memory. This sharing of data in memory prevents redundant disk I/O and the corresponding degradation in performance that might otherwise occur.

Database server shared memory contains both data from the database and control information. Because the data needed by various applications is located in a single, shared portion of memory, all control information needed to manage access to that data can be located in the same place. The database server adds memory dynamically as needed, and as the administrator, you can also add segments to shared memory if necessary. For information about adding a segment to shared memory, refer to Managing Shared Memory.
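
As an illustration of the operating-system mechanism that this kind of sharing relies on, the following C sketch uses UNIX System V shared memory so that two processes see the same in-memory data without a second disk read. It is a minimal sketch of the facility only, not database server code; the page size and contents are hypothetical.

    /*
     * Minimal sketch of System V shared memory: two processes (parent
     * and child) attach the same segment, so a "page" placed in memory
     * by one is immediately visible to the other without disk I/O.
     * Illustration of the mechanism only; not database server code.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define SEG_SIZE 4096   /* one hypothetical buffer-pool page */

    int main(void)
    {
        /* Create a shared-memory segment. */
        int shmid = shmget(IPC_PRIVATE, SEG_SIZE, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }

        char *page = (char *) shmat(shmid, NULL, 0);
        if (page == (char *) -1) { perror("shmat"); return 1; }

        if (fork() == 0) {
            /* Child: plays the role of the process that reads the row
             * from disk once and places it in shared memory. */
            strcpy(page, "row 42: data read from disk once");
            shmdt(page);
            return 0;
        }

        wait(NULL);                    /* wait for the "reader" process */
        printf("other process sees: %s\n", page);   /* no second disk read */

        shmdt(page);
        shmctl(shmid, IPC_RMID, NULL); /* remove the segment */
        return 0;
    }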

Direct Disk Access

The database server uses direct, or unbuffered, disk access to improve the speed and reliability of disk I/O operations. When you assign disk space to the database server, you can bypass the file-buffering mechanism that the operating system provides. The database server itself manages the data transfers between disk and memory.

UNIX provides unbuffered disk access by means of character-special devices (also known as raw disk devices). For more information about character-special devices, refer to your UNIX documentation.
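
The following C sketch shows what unbuffered access through a character-special device looks like at the system-call level. The device path is hypothetical, the sector size is an assumption, and the sketch illustrates the mechanism only; it is not how the database server performs its I/O.

    /*
     * Minimal sketch of unbuffered I/O through a character-special
     * (raw) device. Raw devices typically require transfers aligned to
     * the sector size, hence posix_memalign(). The device path is
     * hypothetical; illustration only, not database server code.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define SECTOR   512
    #define IO_SIZE  (8 * SECTOR)    /* one hypothetical 4 KB page */

    int main(void)
    {
        /* Opening the raw device bypasses the operating-system
         * file-buffer cache; the caller manages its own buffering. */
        int fd = open("/dev/rdsk/c0t1d0s4", O_RDONLY);  /* hypothetical path */
        if (fd == -1) { perror("open raw device"); return 1; }

        void *buf;
        if (posix_memalign(&buf, SECTOR, IO_SIZE) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        /* Read one page-sized chunk from a sector-aligned offset. */
        ssize_t n = pread(fd, buf, IO_SIZE, 0);
        if (n == -1) { perror("pread"); return 1; }

        printf("read %zd bytes directly from the raw device\n", n);

        free(buf);
        close(fd);
        return 0;
    }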

When you store tables on raw disks or unbuffered files, the database server can manage the physical organization of data and minimize disk I/O. When you store tables in this manner, you can gain the following performance advantages:

*   The database server optimizes table access by keeping rows physically contiguous on disk.
*   The database server avoids operating-system I/O overhead by transferring data directly between disk and shared memory.

If performance is not a primary concern, you can configure the database server to store data in regular (buffered) operating-system files, which are also known as cooked files. When the database server uses cooked files, it manages the file contents, but the operating system manages the disk I/O.

For more information about how the database server uses disk space, see Data Storage.

Dynamic Thread Allocation

The database server supports multiple client applications using a relatively small number of processes called virtual processors. A virtual processor is a multithreaded process that can serve multiple clients and, where necessary, run multiple threads to work in parallel for a single query. In this way, the flexible database server architecture provides dynamic load balancing for both online transaction processing (OLTP) and decision-support applications.
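
The following C sketch, built on POSIX threads, illustrates the general pattern of one multithreaded process serving several client requests concurrently. The request structure and queries are hypothetical, and the database server's own virtual-processor and thread implementation differs in its details; this is a sketch of the idea only.

    /*
     * Minimal sketch of one multithreaded process serving several
     * client requests concurrently. Illustrates the general idea of a
     * virtual processor (one process, many threads of work), not the
     * database server's threading implementation. Names are hypothetical.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_CLIENTS 4

    struct request {
        int client_id;
        const char *query;
    };

    /* Each thread handles one client request within the same process. */
    static void *serve_client(void *arg)
    {
        struct request *req = arg;
        printf("thread serving client %d: %s\n", req->client_id, req->query);
        return NULL;
    }

    int main(void)
    {
        pthread_t workers[NUM_CLIENTS];
        struct request reqs[NUM_CLIENTS] = {
            {1, "SELECT ..."}, {2, "INSERT ..."},
            {3, "UPDATE ..."}, {4, "SELECT ..."},
        };

        /* One process runs a thread per request; a real thread pool
         * would multiplex a few threads across many clients. */
        for (int i = 0; i < NUM_CLIENTS; i++)
            pthread_create(&workers[i], NULL, serve_client, &reqs[i]);

        for (int i = 0; i < NUM_CLIENTS; i++)
            pthread_join(workers[i], NULL);

        return 0;
    }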

For a description of database server threads, refer to Virtual Processors and Threads.

Fragmentation and Parallelism

The database server uses table partitioning (also called fragmentation) to distribute tables intelligently across disks for better performance. For very large databases (VLDBs), the ability to fragment data is essential for managing the data efficiently.

The database server can allocate multiple threads to work in parallel on a single query. This feature is known as parallel database query (PDQ).

PDQ is most effective when you use it with fragmentation. For an overview of fragmentation and PDQ, refer to Table Fragmentation and PDQ.
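
The following C sketch illustrates why fragmentation and PDQ work well together: one thread scans each fragment and the partial results are then combined. Plain arrays stand in for fragments stored on separate disks; it is a conceptual sketch under those assumptions, not the database server's PDQ implementation.

    /*
     * Minimal sketch of fragmentation plus parallel query: the "table"
     * is split into fragments, one thread scans each fragment, and the
     * partial results are combined. Arrays stand in for fragments on
     * separate disks; illustration only, not database server code.
     */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_FRAGMENTS 4
    #define ROWS_PER_FRAG 1000

    struct fragment {
        int rows[ROWS_PER_FRAG];
        long partial_sum;          /* result of scanning this fragment */
    };

    /* Scan one fragment: here, sum a column over the rows it holds. */
    static void *scan_fragment(void *arg)
    {
        struct fragment *frag = arg;
        frag->partial_sum = 0;
        for (int i = 0; i < ROWS_PER_FRAG; i++)
            frag->partial_sum += frag->rows[i];
        return NULL;
    }

    int main(void)
    {
        static struct fragment table[NUM_FRAGMENTS];
        pthread_t scanners[NUM_FRAGMENTS];

        /* Populate the fragments with sample data (each row value = 1). */
        for (int f = 0; f < NUM_FRAGMENTS; f++)
            for (int i = 0; i < ROWS_PER_FRAG; i++)
                table[f].rows[i] = 1;

        /* One scan thread per fragment runs in parallel. */
        for (int f = 0; f < NUM_FRAGMENTS; f++)
            pthread_create(&scanners[f], NULL, scan_fragment, &table[f]);

        /* Combine the partial results into the final answer. */
        long total = 0;
        for (int f = 0; f < NUM_FRAGMENTS; f++) {
            pthread_join(scanners[f], NULL);
            total += table[f].partial_sum;
        }

        printf("parallel scan total: %ld\n", total);
        return 0;
    }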
