Squid Proxy Server 3.1: Beginner’s Guide

Whether you run a single site or are in charge of a whole network, Squid is an invaluable tool that can improve performance considerably. Caching and performance optimization usually require a lot of work on the developer’s part, but Squid handles much of that for you. In this article, we will learn to fine-tune our cache to achieve a better HIT ratio, saving bandwidth and reducing the average page load time.

In this article by Kulbir Saini, author of Squid Proxy Server 3.1: Beginner’s Guide, we will take a look at the following:

  • Cache peers or neighbors
  • Caching the web documents in the main memory and hard disk
  • Tuning Squid to enhance bandwidth savings and reduce latency

Cache peers or neighbors

Cache peers or neighbors are other proxy servers that our Squid proxy server can:

  • Share its cache with, to reduce bandwidth usage and access time
  • Use as a parent or sibling proxy server to satisfy its clients’ requests

We normally deploy more than one proxy server in the same network to share the load of a single server for better performance. The proxy servers can use each other’s cache to retrieve the cached web documents locally to improve performance. Let’s have a brief look at the directives provided by Squid for communication among different cache peers.

Declaring cache peers

The directive cache_peer is used to tell Squid about proxy servers in our neighborhood. Let’s have a quick look at the syntax for this directive:

cache_peer HOSTNAME_OR_IP_ADDRESS TYPE PROXY_PORT ICP_PORT [OPTIONS]

In this code, HOSTNAME_OR_IP_ADDRESS is the hostname or IP address of the target proxy server or cache peer. TYPE specifies the type of the proxy server, which, in turn, determines how that proxy server will be used by our proxy server; the other proxy server can be used as a parent, a sibling, or a member of a multicast group. PROXY_PORT is the port on which the peer accepts HTTP requests, and ICP_PORT is the port it uses for ICP queries.

Time for action – adding a cache peer

Let’s add a proxy server (parent.example.com) that will act as a parent proxy to our proxy server:

cache_peer parent.example.com parent 3128 3130 default proxy-only

3130 is the standard ICP port. If the other proxy server is not using the standard ICP port, we should change the code accordingly. This code will direct Squid to use parent.example.com as a proxy server to satisfy client requests in case it’s not able to do so itself.

The option default specifies that this cache peer should be used as a last resort in the scenario where other peers can’t be contacted. The option proxy-only specifies that the content fetched using this peer should not be cached locally. This is helpful when we don’t want to replicate cached web documents, especially when the two peers are connected with a high bandwidth backbone.

What just happened?

We added parent.example.com as a cache peer or parent proxy to our Squid proxy server. We also used the option proxy-only, which means the content fetched through this cache peer will not be cached on our proxy server.

There are several other options available when adding cache peers, for various purposes, such as building a cache hierarchy.
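
For example, a minimal sketch of declaring a sibling peer might look like the following (the hostname sibling.example.com is only a placeholder):

cache_peer sibling.example.com sibling 3128 3130 proxy-only

Unlike a parent, a sibling is only queried for objects it already has in its cache; cache MISSes are not fetched through it.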

Quickly restricting access to domains using peers

If we have added a few proxy servers as cache peers to our Squid server, we may want some control over the requests being forwarded to the peers. The directive cache_peer_domain is a quick way to achieve the desired control. The syntax of this directive is quite simple:

cache_peer_domain CACHE_PEER_HOSTNAME [!]DOMAIN1 [[!]DOMAIN2 ...]

In the code, CACHE_PEER_HOSTNAME is the hostname or IP address of the cache peer, as used when declaring it with the cache_peer directive. We can specify any number of domains that may be fetched through this cache peer. Adding a bang (!) as a prefix to a domain name will prevent the use of this cache peer for that particular domain.

Let’s say we want to use the videoproxy.example.com cache peer for browsing video portals such as YouTube, Netflix, Metacafe, and so on:

cache_peer_domain videoproxy.example.com .youtube.com .netflix.com
cache_peer_domain videoproxy.example.com .metacafe.com

These two lines will configure Squid to use the videoproxy.example.com cache peer for requests to the domains youtube.com, netflix.com, and metacafe.com only. Requests to other domains will not be forwarded using this peer.
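
The bang prefix works the other way around: a peer can be used for all domains except the ones negated. A sketch with a hypothetical peer (generalproxy.example.com) and an example domain:

cache_peer_domain generalproxy.example.com !.example.net

With this line, requests for example.net are never forwarded to generalproxy.example.com, while requests for all other domains may be.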

Advanced control on access using peers

We just learned about cache_peer_domain, which provides a way to control access using cache peers. However, it’s not really flexible in granting or revoking access. That’s where cache_peer_access comes into the picture; it provides a very flexible way to control access to cache peers using ACLs. The syntax and implications are similar to other access directives such as http_access.

cache_peer_access CACHE_PEER_HOSTNAME allow|deny [!]ACL_NAME

Let’s write the following configuration lines, which will allow only the clients on the network 192.0.2.0/24 to use the cache peer acadproxy.example.com for accessing YouTube, Netflix, and Metacafe:

acl my_network src 192.0.2.0/24
acl video_sites dstdomain .youtube.com .netflix.com .metacafe.com
cache_peer_access acadproxy.example.com allow my_network video_sites
cache_peer_access acadproxy.example.com deny all

In the same way, we can use other ACL types to achieve better control over access to various websites using cache peers.
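
For example, we could additionally restrict the use of this peer to office hours by combining the same ACLs with a time ACL. A minimal sketch (the schedule itself is just an assumption):

acl working_hours time MTWHF 09:00-18:00
cache_peer_access acadproxy.example.com allow my_network video_sites working_hours
cache_peer_access acadproxy.example.com deny all

Outside the specified hours, requests will simply not be forwarded through acadproxy.example.com.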

Caching web documents

All this time, we have been talking about the caching of web documents and how it helps in saving bandwidth and improving the end user experience. Now it’s time to learn how and where Squid actually keeps these cached documents so that they can be served on demand. Squid uses main memory (RAM) and hard disks for storing or caching the web documents.

Caching is a complex process, but Squid handles it beautifully and exposes the relevant directives through squid.conf, so that we can control how much should be cached and what should be given the highest priority while caching. Let’s have a brief look at the caching-related directives provided by Squid.

Using main memory (RAM) for caching

Web documents cached in the main memory (RAM) can be served very quickly, as the read/write speeds of RAM are very high compared to hard disks with mechanical parts. However, as the amount of space available in RAM for caching is very small compared to the cache space available on hard disks, only very popular objects, or documents with a very high probability of being requested again, are stored in the cache space available in RAM.

As the cache space in memory is precious, the documents are stored on a priority basis. Let’s have a look at the different types of objects which can be cached.

In-transit objects or current requests

These are the objects related to the current requests, and they have the highest priority for the cache space in RAM. These objects must be kept in RAM. If the incoming request rate is quite high and the cache space in RAM is about to overflow, Squid will move the served part of an object (the part that has already been sent to the client) to disk to create free space in RAM.

Hot or popular objects

These objects or web documents are popular and are requested quite frequently compared to others. They are stored in the cache space left after storing the in-transit objects, as they have a lower priority than in-transit objects. These objects are generally pushed to disk when more cache space in RAM is needed for storing in-transit objects.

Negatively cached objects

Negatively cached objects are error responses that Squid has encountered while fetching a page or web document on behalf of a client. For example, if a request for a web page has resulted in an HTTP 404 error (page not found) and Squid receives a subsequent request for the same web page, Squid will check whether the cached response is still fresh and, if so, return the reply from the cache itself. If there is a request for the same page after the negatively cached object corresponding to that page has expired, Squid will check again whether the page is available.

Negatively cached objects have the same priority as hot or popular objects and they can be pushed to disk at any time in favor of in-transit objects.
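
How long such error responses are kept can be tuned with the negative_ttl directive. A sketch that keeps them for one minute (an illustrative value, not necessarily the default):

negative_ttl 1 minutes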

Specifying cache space in RAM

So far we have learned about how the available cache space is utilized for storing or caching different types of objects with different priorities. Now, it’s time to learn about specifying the amount of RAM space we want to dedicate for caching. While deciding the RAM space for caching, we should be neither greedy nor paranoid. If we specify a large percentage of RAM for caching, the overall system performance will suffer as the system will start swapping processes in case there is no free RAM left for other processes. If we use a very low percentage of RAM for caching, then we’ll not be able to take full advantage of Squid’s caching mechanism. The default size of the memory cache is 256 MB.

Time for action – specifying space for memory caching

We can use the extra RAM space available on a running system, after reserving a chunk of memory that the running processes may need under heavy load. To find out the amount of free RAM available on our system, we can use either the top or free command. To find the free RAM in megabytes, we can use the free command as follows:

$ free -m

For more details, please check the top(1) and free(1) man pages.

Now, let’s say we have 4 GB of total RAM on the server and all the processes are running comfortably in 1 GB of RAM space. After securing another 512 MB for emergency situations where running processes may take extra memory, we can safely allocate 2.5 GB of RAM for caching.

To specify the cache size in the main memory, we use the directive cache_mem. It has a very simple format. As we have learned before, we can specify the memory size in bytes, KB, MB, or GB. Let’s specify the cache memory size for the previous example:

cache_mem 2500 MB

The previous value specified with cache_mem is in Megabytes.

What just happened?

We learned about calculating the approximate space in the main memory, which can be used to cache web documents and therefore enhance the performance of the Squid server by a significant margin.

Have a go hero – calculating cache_mem for your machine

Note down the total RAM on your machine and calculate the approximate space in megabytes that you can allocate for memory caching.

Maximum object size in memory

As we have limited space in memory available for caching objects, we need to use that space in an optimized way. We should plan to set this value a bit low, as setting it too large will mean fewer cached objects in memory, and the HIT (being found in cache) rate will suffer significantly. The default maximum size used by Squid is 512 KB, but we can change it depending on our value for cache_mem. So, if we want to set it to 1 MB, as we have a lot of RAM available for caching (as in the previous example), we can use the maximum_object_size_in_memory directive as follows:

maximum_object_size_in_memory 1 MB

This configuration line will set the maximum allowed object size in the memory cache to 1 MB.

Memory cache mode

With the newer versions of Squid, we can control which objects we want to keep in the memory cache to optimize performance. Squid offers the directive memory_cache_mode to set the mode that Squid should use to utilize the space available in the memory cache. There are three different modes available:

  • always: Keeps all the most recently fetched objects that can fit in the available space. This is the default mode used by Squid.
  • disk: Only objects that are already cached on a hard disk and have received a HIT (that is, they were requested again after being cached) are kept in the memory cache.
  • network: Only objects that have been fetched from the network (including neighbors) are kept in the memory cache.

Setting the mode is easy and can be set using the memory_cache_mode directive as shown:

memory_cache_mode always

This configuration line will set the memory cache mode to always, which means that the most recently fetched objects will be kept in memory.

 
