
If our Lighttpd runs on a multi-processor machine, it can take advantage of that by spawning multiple worker processes. Also, most Lighttpd installations will not have a machine to themselves; therefore, we should measure not only the speed but also the resource usage.
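For example, Lighttpd can be told to fork multiple worker processes through the server.max-worker directive in lighttpd.conf; a minimal sketch (the right number depends on our hardware and workload):

# spawn four worker processes, for example one per CPU core
server.max-worker = 4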

Optimizing Compilers: gcc with the usual settings (-O2) already does quite a good job of creating a fast Lighttpd executable. However, -O3 may nudge the speed up a tiny bit (or slow it down, depending on our system) at the cost of a bigger executable. If there are optimizing compilers for our platform (for example, Intel and Sun Microsystems each have compilers that optimize for their own CPUs), they might even give another tiny speed boost.
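For example, with the autotools build we can pass our own optimization flags when compiling Lighttpd from source; a sketch, with placeholder path and version:

$ cd /usr/src/lighttpd-1.4.x
$ CFLAGS="-O3" ./configure
$ make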

If we do not want to invest money in commercial compilers, but want to get the most out of what gcc has to offer, we can use Acovea, an open source project that employs genetic algorithms and trial and error to find the best individual gcc settings for our platform. Get it from http://www.coyotegulch.com/products/acovea/

Finally, optimization should stop where security (or, to a lesser extent, maintainability) is compromised. A slower web server that does what we want is way better than a fast web server obeying the commands of a script kiddie.

Before we optimize away blindly, we had better have a way to measure the “speed”. A useful measure most administrators will agree on is served requests per second, and http_load is a tool that measures exactly that. We can get it from http://www.acme.com/software/http_load/.

http_load is very simple: give it a site to request, and it will flood the site with requests, measuring how many are served in a given amount of time. This allows a very simplistic approach to optimizing Lighttpd: tweak some settings, run http_load with a sufficiently realistic scenario, and see whether our Lighttpd handles more or fewer requests than before.
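In practice, this is a simple measure-change-measure loop (the http_load invocation itself is explained below):

$ http_load -parallel 10 -seconds 60 urls
# note the fetches/sec line, change one setting in lighttpd.conf,
# restart Lighttpd, then measure again:
$ http_load -parallel 10 -seconds 60 urls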

We do not yet know where to spend our time optimizing. For that, we need the timing log instrumentation included with Lighttpd 1.5.0, or even a profiler, to see where the most time is spent. However, there are some “big knobs” to turn that can increase performance, and http_load will help us find good settings for them.
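For the profiler route, one possible approach is gprof; the following is only a sketch, assuming we build Lighttpd from source with profiling enabled (flags and paths are illustrative):

$ CFLAGS="-O2 -pg" ./configure && make
$ ./src/lighttpd -D -f /etc/lighttpd/lighttpd.conf
# run a load test, then stop Lighttpd cleanly so it writes gmon.out
$ gprof ./src/lighttpd gmon.out | head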

Installing http_load

http_load can be downloaded as a source .tar file (which was named .tar.gz for me, though it is not gzipped). The version as of this writing is 12Mar2006. Unpack it to /usr/src (or adjust the path as desired) with:

$ cd /usr/src && tar xf /path/to/http_load-12Mar2006.tar.gz
$ cd http_load-12Mar2006

We can optionally add SSL support. We may skip this if we do not need it.

To add SSL support, we need to find out where the SSL libraries and includes are. I assume they are in /usr/lib and /usr/include, respectively, but they may be elsewhere on your system. Additionally, there is an “SSL tree” directory, usually /usr/ssl or /usr/local/ssl, which contains certificates, revocation lists, and so on. Open the Makefile with a text editor and look at lines 11 to 14, which read:

#SSL_TREE = /usr/local/ssl
#SSL_DEFS = -DUSE_SSL
#SSL_INC = -I$(SSL_TREE)/include
#SSL_LIBS = -L$(SSL_TREE)/lib -lssl -lcrypto

Change them to the following (assuming the given directories are correct):

SSL_TREE = /usr/ssl
SSL_DEFS = -DUSE_SSL
SSL_INC = -I/usr/include
SSL_LIBS = -L/usr/lib -lssl -lcrypto

Now compile and install http_load with the following command:

$ make all install

Now we’re all set to load-test our Lighttpd.

Running http_load tests

We just need a URL file containing URLs that point to pages our Lighttpd serves. http_load will then fetch these pages at random, for as long as, or as often as, we ask it to. For example, we may have a front page with links to different articles. To get started, we can simply put a link to our front page into the URL file, which we will name urls; for example, http://localhost/index.html.
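To give an idea of the format, a minimal urls file might look like this (the paths are placeholders; use URLs that actually exist on our server):

http://localhost/index.html
http://localhost/articles/article1.html
http://localhost/images/logo.png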

Note that the file contains just URLs, one per line, nothing more, nothing less (for example, http_load does not support blank lines). Now we can make our first test run:

$ http_load -parallel 10 -seconds 60 urls

This will run for one minute and try to keep 10 connections open in parallel. Let’s see if our Lighttpd keeps up:

343 fetches, 10 max parallel, 26814 bytes, in 60 seconds
78.1749 mean bytes/connection
5.71667 fetches/sec, 446.9 bytes/sec
msecs/connect: 290.847 mean, 9094 max, 15 min
msecs/first-response: 181.902 mean, 9016 max, 15 min
HTTP response codes:
code 200 - 327

As we can see, it does. To run, http_load needs one of its two start specifiers (-parallel or -rate), one of its two stop specifiers (-fetches or -seconds), and a URL file; an alternative invocation is sketched at the end of this section. We can create the URL file manually or crawl our document root(s) with the following Python script, called crawl.py:


#!/usr/bin/python
# Run from the document root, pipe into the URL file. For example:
# /path/to/docroot$ crawl.py > urls
import os, re
hostname = "http://localhost/"
for (root, dirs, files) in os.walk("."):
    for name in files:
        filepath = os.path.join(root, name)
        # turn the leading "./" into a URL on our host
        print re.sub(r"^\./", hostname, filepath)

You can download the crawl.py file from http://www.packtpub.com/files/code/2103_Code.zip.

Capture the output into a file to use as the URL file. For example, start the script from within our document root with:


$ python crawl.py > urls

This will give us a urls file that makes http_load try to fetch every file we serve (given that we have specified enough requests). Then we can start http_load as discussed in the preceding example. http_load takes the following options:
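Running http_load without arguments prints a usage summary; for the 12Mar2006 version it looks roughly like this (the -cipher option only appears when SSL support is compiled in):

usage: http_load [-checksum] [-throttle] [-proxy host:port] [-verbose] [-timeout secs] [-sip sip_file] [-cipher str] ( -parallel N | -rate N [-jitter] ) ( -fetches N | -seconds N ) url_file

So, as an alternative to the -parallel/-seconds run shown earlier, we can ask for a fixed connection rate and a total number of fetches:

$ http_load -rate 5 -fetches 1000 urls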
