As traffic increases, web sites need to add web servers and servlet engines. Distributing the traffic across those servers, and coping when one of them restarts, is the challenge of load balancing.
In general, a hardware load balancer gives the best results, while using Resin or Apache/IIS as the load balancer is a low-cost alternative for medium-sized sites.
Sites with a hardware load balancer will generally put one Resin JVM on each server and configure the load balancer to distribute the load across those JVMs. Although it is possible to add Apache or IIS to this configuration, it is not necessary; running Resin as the web server reduces the configuration complexity.

The IP-based sticky sessions provided by hardware load balancers should be enabled for efficiency: the load balancer sends every request from a given IP address to the same server. Sticky IP sessions will usually send the request to the right server, but clients behind firewalls and proxies may present a different IP on each request even though the session is the same, so IP sessions are only mostly sticky. Sites using sessions should therefore configure distributed sessions to make sure users always see the same session values.

A typical configuration uses the same resin.conf for all servers and uses the -server flag to select the correct configuration on each machine:
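As a sketch, a single shared resin.conf might define one srun per machine. The ids, hosts, and ports below are illustrative values, and the exact element names vary by Resin version:

```xml
<!-- Shared resin.conf sketch: the same file is deployed to every  -->
<!-- machine; the -server flag selects which srun this JVM is.     -->
<http-server>
  <!-- HTTP listener on each machine -->
  <http port="80"/>

  <!-- one srun entry per JVM in the cluster -->
  <srun server-id="a" host="192.168.0.10" port="6802"/>
  <srun server-id="b" host="192.168.0.11" port="6802"/>
</http-server>
```

Machine 192.168.0.10 would then be started with -server a and 192.168.0.11 with -server b, so one file serves the whole cluster.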
On Unix, the servers will generally be started from a startup script. Each server will have a different value for -server and for -pid.
On Windows, each server is installed as a service.
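As a sketch, the Unix startup commands might look like the following; the script location and flag spellings are assumptions based on a typical Resin install:

```
# on the first machine
bin/httpd.sh -server a -pid a.pid start

# on the second machine
bin/httpd.sh -server b -pid b.pid start
```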
Resin includes a LoadBalanceServlet that can balance requests to backend servers. Because it is implemented as a servlet, this configuration is the most flexible. A site might use 192.168.0.1 as the frontend load balancer, sending all requests for /foo to the backend host 192.168.0.10 and all requests for /bar to the backend host 192.168.0.11. Since Resin has an integrated HTTP proxy cache, the web-tier machine can cache results for the backend servers.

Using Resin as the load balancing web server requires a minimum of two configuration files: one for the load balancing server and one for the backend servers. The front configuration dispatches to the backend servers, while the backend servers actually serve the requests.

The web-tier server does the load balancing

In the following example, there are three servers and two conf files. The first server (192.168.0.1), which uses web-tier.conf, is the load balancer. It has an <http> listener, receives requests from browsers, and dispatches them to the backend servers (192.168.0.10 and 192.168.0.11).
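A web-tier.conf along these lines would implement that dispatch. The servlet class is Resin's com.caucho.servlets.LoadBalanceServlet; the init parameter names, cluster ids, and surrounding elements are assumptions that vary by Resin version:

```xml
<!-- web-tier.conf sketch: dispatch /foo and /bar to different backends -->
<host id="">
  <web-app id="/">
    <servlet servlet-name="foo-balance"
             servlet-class="com.caucho.servlets.LoadBalanceServlet">
      <!-- the "cluster" init naming is illustrative -->
      <init cluster="foo-tier"/>
    </servlet>
    <servlet-mapping url-pattern="/foo/*" servlet-name="foo-balance"/>

    <servlet servlet-name="bar-balance"
             servlet-class="com.caucho.servlets.LoadBalanceServlet">
      <init cluster="bar-tier"/>
    </servlet>
    <servlet-mapping url-pattern="/bar/*" servlet-name="bar-balance"/>
  </web-app>
</host>
```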
The srun entries are included in web-tier.conf so that the LoadBalanceServlet knows where to find the backend servers. The LoadBalanceServlet selects a backend server using a round-robin policy. Although round-robin is simple, in practice it is as effective as more complicated balancing policies, and because it is simple it is more robust and faster than adaptive policies.

The backend servers respond to the requests

A separate conf file is used to configure all of the backend servers. In this case, there are two backend servers, both configured in the conf file app-tier.conf. Sites using sessions will configure distributed sessions to make sure the users see the same session values.
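An app-tier.conf along these lines would configure both backends in one file. The element names are typical for Resin but should be checked against your version; the persistent-store choice for distributed sessions is an illustrative assumption:

```xml
<!-- app-tier.conf sketch: both backend JVMs share this file;   -->
<!-- -server a / -server b selects the srun on each machine.    -->
<http-server>
  <srun server-id="a" host="192.168.0.10" port="6802"/>
  <srun server-id="b" host="192.168.0.11" port="6802"/>

  <host id="">
    <web-app id="/">
      <!-- distributed sessions so both backends see the same values -->
      <session-config>
        <use-persistent-store/>
      </session-config>
    </web-app>
  </host>
</http-server>
```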
Starting the servers
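Under the setup above, each machine starts Resin with its own configuration file. The paths and flags below are illustrative:

```
# web-tier machine (192.168.0.1)
bin/httpd.sh -conf conf/web-tier.conf start

# backend machine 192.168.0.10
bin/httpd.sh -conf conf/app-tier.conf -server a start

# backend machine 192.168.0.11
bin/httpd.sh -conf conf/app-tier.conf -server b start
```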
When using Apache or IIS as the webserver, the plugin does the load balancing. It performs the functions of the hardware load balancer or LoadBalanceServlet in the scenarios described above. To understand how Resin's load balancing works with the plugins, it's important to review how the plugin dispatches requests to the backend JVMs. A typical request follows this sequence:

1. The browser sends the request to Apache or IIS.
2. The plugin checks the URL against the servlet-mappings and jsp patterns it has obtained from Resin.
3. If the URL belongs to Resin, the plugin selects a backend JVM and forwards the request over that srun's TCP connection.
4. The JVM handles the request and returns the response to the browser through the plugin.
The plugin needs to know which requests should go to Resin, i.e. the servlet-mappings and the jsp files, and it needs to know the TCP host/port names of the backend machines, i.e. the <srun> tags. /caucho-status shows all of that information in one table. The plugin obtains this information from a running Resin server.

The plugin controls the load balancing, since it decides which JVM to use. Because the plugin is key to load balancing, looking at /caucho-status will tell you exactly how your system is configured. The JVMs are just passive, waiting for the next request. From the JVM's perspective, a request from a plugin is identical to an HTTP request, except that it uses a slightly different encoding. In fact, the same JVM can serve both as an srun and as an httpd server listening on port 8080, for example. The dual srun/http configuration can be useful for debugging.
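With mod_caucho, pointing the plugin at a running Resin server is typically a single httpd.conf directive; the host and port here are illustrative:

```
# httpd.conf: mod_caucho reads the srun and servlet-mapping
# configuration from the Resin server at this address.
ResinConfigServer 192.168.0.10 8080
```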
A session needs to stay on the same JVM that started it. Otherwise, each JVM would only see every second or third request and get confused. To make sure that sessions stay on the same JVM, Resin encodes the cookie with the host number. In the previous example, the hosts would generate cookies like:
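As an illustration (the exact session-id format varies by Resin version), each host marks its session cookies with its own srun index, e.g. a distinct leading letter; the values below are made up:

```
host1:  axxx...   (first srun)
host2:  bxxx...   (second srun)
host3:  cxxx...   (third srun)
```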
On the web server, mod_caucho will decode the cookie and send the request to the host that created the session, so a session created on host2 returns to host2. In the infrequent case that host2 fails, Resin will send the request to host3. The user might lose the session, but that's a minor problem compared to showing a connection failure error. To save sessions, you'll need to use distributed sessions. Also take a look at tcp sessions.

The following example is a typical configuration for a distributed server using an external hardware load-balancer, i.e. where each Resin is acting as the HTTP server. Each server is started with its own -server flag to grab its specific configuration.

In this example, sessions will only be stored when the server shuts down, either for maintenance or for an upgrade to a new version of the server. This is the most lightweight configuration and doesn't affect performance significantly. If the hardware or the JVM crashes, however, the sessions will be lost. (If you want to save sessions across hardware or JVM crashes, remove the <save-only-on-shutdown/> flag.)
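A sketch of that session configuration might look like the following. The <save-only-on-shutdown/> tag comes from the text above; the persistent-store choice and surrounding elements are illustrative assumptions:

```xml
<web-app id="/">
  <session-config>
    <!-- share sessions across the cluster -->
    <use-persistent-store/>
    <!-- only write sessions out at shutdown; remove this tag to
         keep sessions across hardware or JVM crashes -->
    <save-only-on-shutdown/>
  </session-config>
</web-app>
```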
Many larger sites like to use multiple web servers, with a JVM and a web server on each machine. A router distributes the load between the machines. In this configuration, the site needs to take control of its own sessions: because the router distributes the load randomly, any persistent session state needs to be handled by a centralized store such as a database or Resin's cluster storage. Even in this configuration, you can use Resin's load balancing to increase reliability. Each web server should choose its own JVM first, but use another machine as a backup, listing the preferred host first. The configuration would look like:
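For example, on the first machine the local JVM is listed first and the second machine's JVM serves as backup; the second machine swaps the two entries. The <srun-backup> spelling and the addresses are assumptions based on typical Resin configurations:

```xml
<!-- resin.conf on machine 1: prefer the local JVM -->
<srun server-id="a" host="127.0.0.1" port="6802"/>
<!-- fall back to machine 2's JVM if the local one is down -->
<srun-backup server-id="b" host="192.168.0.11" port="6802"/>
```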
Alternatively, if you're using Apache, you can configure the sruns in httpd.conf.
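With mod_caucho, the equivalent httpd.conf sketch uses host/backup directives; the directive names and addresses here are assumptions to check against your plugin version:

```
# httpd.conf on machine 1: prefer the local JVM, back up to machine 2
CauchoHost   127.0.0.1    6802
CauchoBackup 192.168.0.11 6802
```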
The srun order must be consistent for all servers so that sessions always go to the correct machine: a session created on host2 must always go to host2.
Multiple web servers can use the same JVM. For example, a fast plain web server and an SSL web server may only need a single JVM (although a backup would be good). Since the JVM doesn't care where a request comes from, it can treat each request identically. This simplifies SSL development: a servlet just needs to check ServletRequest.isSecure() to see whether the request came over SSL. Other than that, all requests are handled identically.
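The point can be sketched in a few lines of Java. The real API is ServletRequest.isSecure(); a minimal stand-in interface (an assumption, not part of Resin) keeps the example self-contained and runnable without a servlet container:

```java
// Sketch: the only SSL-specific logic a servlet needs is one
// isSecure() check; everything else is identical for both paths.
interface Request {
    boolean isSecure(); // stand-in for ServletRequest.isSecure()
}

public class SchemeCheck {
    // Decide the scheme the way a servlet would.
    static String scheme(Request req) {
        return req.isSecure() ? "https" : "http";
    }

    public static void main(String[] args) {
        System.out.println(scheme(() -> true));   // request via SSL server
        System.out.println(scheme(() -> false));  // request via plain server
    }
}
```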