Getting started with Varnish cache

What is Varnish

Varnish Cache is a web application accelerator, also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery by a factor of 300–1000x, depending on your architecture.

The keyword here is reverse proxy. Varnish is hit before your HTTP server, so you can shape the requests that reach your application server as you wish.

How to install

Simply follow the installation guide on their webpage; the Quick install guides section covers various operating systems: https://www.varnish-cache.org/docs

How to start

Running service varnish start will start Varnish for you. Navigate to http://127.0.0.1:6081/ (replace with your hostname) to open the default Varnish installation.
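As a quick sketch (assuming a Debian-style system where the service command is available and Varnish listens on its default port 6081), starting and checking Varnish looks like this:

```shell
# Start the Varnish service (may require root)
sudo service varnish start

# Fetch the default page through Varnish; -I shows only the response headers
curl -I http://127.0.0.1:6081/
```

If Varnish is running but the backend is not, you will still get a response from Varnish, typically an error page, which confirms the proxy itself is up.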

Setting up (part 1)

The default configuration can be found here: /etc/varnish/default.vcl

The comments in that file are worth reading. Have in mind that if you are using systemd you will need to cp /lib/systemd/system/varnish.service /etc/systemd/system/ and edit the new file. Otherwise copy the file as described below.

As you may know, it is better to copy the file and edit the copy than to edit the original directly. :-)
cp /etc/varnish/default.vcl /etc/varnish/user.vcl then open the new file /etc/varnish/user.vcl.
The .vcl extension stands for Varnish Configuration Language.

You will find several blocks in the user.vcl file. Let's start with the backend default section:
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

This is where your original website will be accessible.
Most probably your website is served locally, so leave 127.0.0.1 as the host. You may want to change the port to something else.
Have in mind that if you change the port to one that is already in use, you should first stop the application currently using that port.
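To find out whether a port is already taken, and by which process, something like this works on most Linux systems (the exact output format varies):

```shell
# List listening TCP sockets and the owning process (root is needed for -p)
sudo ss -ltnp | grep ':6000'
```

No output means the port is free.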

Let's say your website runs on port 5000.

We will make Varnish listen on port 5000, change your website's listening port to 6000, and have Varnish proxy all requests to port 6000.

What we do in this section is tell Varnish which port to proxy requests to: the website's new listening port. In our case we should change the port to 6000.
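The traffic flow we are setting up can be sketched as:

```
client --> Varnish (listens on :5000) --> your website (listens on :6000)
```

The client only ever talks to Varnish; the website's port never needs to be exposed publicly.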

The backend section should look like this:

backend default {
    .host = "127.0.0.1";
    .port = "6000";
}

Setting up (part 2)

Now that we have configured Varnish to proxy the requests to our web server we need to configure it to accept requests on the desired port.

For Debian and Ubuntu the file is /etc/default/varnish while for Red Hat and CentOS it is /etc/sysconfig/varnish.
Open the file and you will find:

DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

As the name suggests, these are the options our Varnish daemon will use when starting.

First, replace default.vcl with user.vcl, the new configuration file we created earlier.

Second, the -a option sets the port on which Varnish accepts requests. In most cases we would want to accept HTTP requests on port 80, but in our case the website runs on port 5000, so we should change the port from the default 6081 to 5000. Change it to:

DAEMON_OPTS="-a :5000 \
             -T localhost:6082 \
             -f /etc/varnish/user.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

Lastly, the malloc,256m part sets how much memory (here 256 MB) Varnish uses for caching.

By doing these two steps we say:

  • Listen on port 5000
  • When a request is received proxy it to port 6000 – the port from Setting up (part 1)

Setting up (part 3)

Don't forget to modify your application to listen to port 6000.

For example, if you are using PHP and Apache with virtual hosts, you should change the port in ports.conf, change the virtual host to listen on the desired port, and restart Apache.
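As a sketch, assuming a stock Debian/Ubuntu Apache layout (the hostname and document root below are illustrative placeholders):

```apache
# /etc/apache2/ports.conf — make Apache listen on 6000 instead of 80
Listen 6000

# /etc/apache2/sites-available/000-default.conf
<VirtualHost *:6000>
    ServerName example.com        # assumption: your site's hostname
    DocumentRoot /var/www/html    # assumption: your document root
</VirtualHost>
```

After editing, restart Apache so the new port takes effect.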

Restart

After all this is ready, first restart your application so it releases its old port and starts using the new one, then restart the Varnish service: service varnish restart.

Final touches

You may want to add a caching period.

This can be done in the /etc/varnish/user.vcl file. If the block does not exist, add it:

sub vcl_backend_response {
    set beresp.ttl = 30d;
}

Set the caching period to whatever suits you and restart the service once again.
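If you want a bit more control, vcl_backend_response can set the TTL conditionally. A minimal sketch, assuming VCL 4.0 and that only successful responses should be cached long-term:

```
sub vcl_backend_response {
    if (beresp.status == 200) {
        # Cache successful responses for 30 days
        set beresp.ttl = 30d;
    } else {
        # Keep error responses only briefly so the site recovers quickly
        set beresp.ttl = 30s;
    }
}
```

The short TTL on errors avoids serving a cached error page for a month after a transient backend failure.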

If the malloc,256m is too much for you try using malloc,64m.

Check if it's working

First, curl the host and port and check that the page opens.

You can also check the headers:

Age: 105
Via: 1.1 varnish-v4

These two headers say the page was put in the cache X seconds ago and was served by Varnish.
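To see those headers yourself, a quick check might look like this (assuming Varnish listens on port 5000 as configured above):

```shell
# -I sends a HEAD request and prints only the response headers
curl -I http://127.0.0.1:5000/

# Request the page again; on a cache hit the Age header should be > 0
curl -sI http://127.0.0.1:5000/ | grep -i '^age'
```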

Last but not least – the fun part: PERFORMANCE

Let's compare the same website:

  • using apache + PHP and memcached
  • using Varnish cache

The testing tool is Apache's ab, run with 100 requests and a concurrency of 10. The results are:

For apache + PHP and memcached (mean times):

Connect: 2 ms
Processing: 2167 ms
Waiting: 2158 ms
Total: 2169 ms

For Varnish cache (mean times):

Connect: 1 ms
Processing: 1 ms
Waiting: 1 ms
Total: 3 ms (723 times faster)

Obviously 100 requests at a concurrency of 10 is nothing for Varnish.

Let's try with something a little more interesting:

  • 100 concurrency
  • 100 000 requests
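With ab, a run like this can be reproduced as follows (the URL is illustrative; point it at your Varnish host and port):

```shell
# 100 000 requests, 100 at a time
ab -n 100000 -c 100 http://127.0.0.1:5000/
```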

Results (mean times):

Connect: 9 ms
Processing: 11 ms
Waiting: 7 ms
Total: 20 ms

Looks good! :-)