
Apache Module mod_proxy_balancer

Description: mod_proxy extension for load balancing
Status: Extension
Module Identifier: proxy_balancer_module
Source File: mod_proxy_balancer.c
Compatibility: Available in version 2.1 and later

Summary

This module requires the service of mod_proxy. It provides load balancing support for the HTTP, FTP and AJP13 protocols.

Thus, to enable load balancing, both mod_proxy and mod_proxy_balancer have to be present in the server.

No support for mod_proxy_balancer in IBM HTTP Server

mod_proxy_balancer is not a supported component of IBM HTTP Server; however, on select platforms this module is distributed with IHS in the modules/WebSphereCE/ subdirectory. This module is supported exclusively by WebSphere Community Edition support for WebSphere Community Edition customers, and by WebSphere Commerce when used as part of Social Commerce.

Warning

Do not enable proxying until you have secured your server. Open proxy servers are dangerous both to your network and to the Internet at large.

Directives

This module provides no directives.


Load balancer scheduler algorithm

At present, there are three load balancer scheduler algorithms available for use: Request Counting, Weighted Traffic Counting and, after IHS 7.0.0.9, Pending Request Counting. These are controlled via the lbmethod value of the Balancer definition. See the ProxyPass directive for more information.
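As a sketch (assuming a balancer named balancer://mycluster like the one defined in the next section), the scheduler can be selected with the lbmethod parameter on the ProxyPass line; valid values are byrequests (the default), bytraffic, and, after IHS 7.0.0.9, bybusyness:

ProxyPass /test balancer://mycluster/ lbmethod=bytraffic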


Example of a balancer configuration

Before we dive into the technical details, here's an example of how you might use mod_proxy_balancer to provide load balancing between two back-end servers:

<Proxy balancer://mycluster>
    BalancerMember http://192.168.1.50:80
    BalancerMember http://192.168.1.51:80
</Proxy>
ProxyPass /test balancer://mycluster/


Request Counting Algorithm

Enabled via lbmethod=byrequests, the idea behind this scheduler is that we distribute the requests among the various workers to ensure that each gets its configured share of the number of requests. It works as follows:

lbfactor is how much we expect this worker to work, or the worker's work quota. This is a normalized value representing its "share" of the amount of work to be done.

lbstatus is how urgently this worker needs to work to fulfill its quota of work.

The worker is a member of the load balancer, usually a remote host serving one of the supported protocols.

We distribute each worker's work quota to the worker, and then look at which of them needs to work most urgently (biggest lbstatus). This worker is then selected for work, and its lbstatus is reduced by the total work quota we distributed to all workers. Thus the sum of all lbstatus values does not change (*), and we distribute the requests as desired, as sketched in the pseudocode below.

(*) If some workers are disabled, the others will still be scheduled correctly.

total factor = 0
candidate    = none

for each non-disabled worker in workers
    worker lbstatus += worker lbfactor
    total factor    += worker lbfactor
    if candidate is none, or worker lbstatus > candidate lbstatus
        candidate = worker

candidate lbstatus -= total factor

If a balancer is configured as follows:

worker    a   b   c   d
lbfactor  25  25  25  25
lbstatus  0   0   0   0

And b gets disabled, the following schedule is produced:

worker    a    b   c    d
lbstatus  -50  0   25   25
lbstatus  -25  0   -25  50
lbstatus  0    0   0    0
(repeat)

That is, it schedules: a c d a c d a c d ... Please note that:

worker    a   b   c   d
lbfactor  25  25  25  25

Has the exact same behavior as:

worker    a  b  c  d
lbfactor  1  1  1  1

This is because all values of lbfactor are normalized with respect to the others. For:

worker    a  b  c
lbfactor  1  4  1

worker b will, on average, get 4 times the requests that a and c will.
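In an actual configuration, lbfactor is set through the loadfactor parameter of BalancerMember. A minimal sketch of the 1/4/1 split above, using hypothetical host names:

<Proxy balancer://mycluster>
    BalancerMember http://hostA:80 loadfactor=1
    BalancerMember http://hostB:80 loadfactor=4
    BalancerMember http://hostC:80 loadfactor=1
</Proxy>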

The following asymmetric configuration works as one would expect:

worker    a    b
lbfactor  70   30

lbstatus  -30  30
lbstatus  40   -40
lbstatus  10   -10
lbstatus  -20  20
lbstatus  -50  50
lbstatus  20   -20
lbstatus  -10  10
lbstatus  -40  40
lbstatus  30   -30
lbstatus  0    0
(repeat)

That is, after 10 schedules the pattern repeats, with a selected 7 times and b selected 3 times, interspersed.


Weighted Traffic Counting Algorithm

Enabled via lbmethod=bytraffic, the idea behind this scheduler is very similar to the Request Counting method, with the following changes:

lbfactor is how much traffic, in bytes, we want this worker to handle. This is also a normalized value representing its "share" of the amount of work to be done, but instead of simply counting the number of requests, we take into account the amount of traffic this worker has seen.

If a balancer is configured as follows:

worker    a  b  c
lbfactor  1  2  1

This means we want b to process twice the number of bytes that a or c does. It does not necessarily mean that b will handle twice as many requests, but it will process twice the I/O. Thus, the size of the request and the response are applied to the weighting and selection algorithm.
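A sketch of the balancer above (hypothetical host names), giving b twice the byte share and selecting the bytraffic scheduler:

<Proxy balancer://mycluster>
    BalancerMember http://hostA:80 loadfactor=1
    BalancerMember http://hostB:80 loadfactor=2
    BalancerMember http://hostC:80 loadfactor=1
</Proxy>
ProxyPass /test balancer://mycluster/ lbmethod=bytraffic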


Pending Request Counting Algorithm

Note

Support for lbmethod=bybusyness was added in IHS 7.0.0.9.

Enabled via lbmethod=bybusyness, this scheduler keeps track of how many requests each worker is assigned at present. A new request is automatically assigned to the worker with the lowest number of active requests. This is useful in the case of workers that queue incoming requests independently of Apache, to ensure that queue length stays even and a request is always given to the worker most likely to service it fastest.

In the case of multiple least-busy workers, the statistics (and weightings) used by the Request Counting method are used to break the tie. Over time, the distribution of work will come to resemble that characteristic of byrequests.
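As a sketch (hypothetical hosts and cluster name), the scheduler can also be selected inside the balancer definition with ProxySet:

<Proxy balancer://mycluster>
    BalancerMember http://hostA:80
    BalancerMember http://hostB:80
    ProxySet lbmethod=bybusyness
</Proxy>
ProxyPass /test balancer://mycluster/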


Exported Environment Variables

At present, six environment variables are exported:

BALANCER_SESSION_STICKY

This is assigned the stickysession value used in the current request. It is the cookie or parameter name used for sticky sessions.

BALANCER_SESSION_ROUTE

This is assigned the route parsed from the current request.

BALANCER_NAME

This is assigned the name of the balancer used for the current request. The value is something like balancer://foo.

BALANCER_WORKER_NAME

This is assigned the name of the worker used for the current request. The value is something like http://hostA:1234.

BALANCER_WORKER_ROUTE

This is assigned the route of the worker that will be used for the current request.

BALANCER_ROUTE_CHANGED

This is set to 1 if the session route does not match the worker route (BALANCER_SESSION_ROUTE != BALANCER_WORKER_ROUTE) or the session does not yet have an established route. This can be used to determine when/if the client needs to be sent an updated route when sticky sessions are used.
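Because these are ordinary environment variables, other modules can reference them with the standard %{VARNAME}e syntax. For example, a hypothetical access-log format using mod_log_config could record which worker served each request and whether the route changed:

LogFormat "%h %t \"%r\" %>s %{BALANCER_WORKER_NAME}e %{BALANCER_ROUTE_CHANGED}e" balancer
CustomLog logs/balancer_log balancer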


Enabling Balancer Manager Support

This module requires the service of mod_status. The balancer manager enables dynamic updates of balancer members. You can use the balancer manager to change the balance factor of a particular member, or to put it in offline mode.

Thus, to enable load balancer management, both mod_status and mod_proxy_balancer have to be present in the server.

To enable load balancer management for browsers from the example.com domain, add this code to your httpd.conf configuration file:

<Location /balancer-manager>
    SetHandler balancer-manager

    Order Deny,Allow
    Deny from all
    Allow from .example.com
</Location>

You can now access the load balancer manager by pointing a Web browser at the page http://your.server.name/balancer-manager