
Varnish Cache Advanced Backend Configuration

By: Sunil Kumar | In: Server | Last Updated: 2018/07/26

NOTE: This guide is for readers who already have a working Varnish server and basic knowledge of it. If you are new to Varnish, it is better to start with the basics of Varnish first.

Varnish is a very popular software package that can dramatically accelerate the work of serving HTTP pages. Varnish caches fully-rendered responses to HTTP requests and serves them without the delay of building content from scratch.

Varnish usually has three locations of configuration. The boot script, the system-wide configuration, and the VCL file that does most of the work.

The first script that starts up Varnish is usually located with the rest of your system startup scripts at /etc/init.d/varnish. This file rarely needs adjustments.
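After editing any of these files, you typically restart Varnish through this script, for example:

sudo service varnish restart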

The second file is usually located at /etc/sysconfig/varnish (on CentOS and RedHat machines) or /etc/default/varnish (on Ubuntu).

This file defines global configuration for Varnish such as which port it should run on and where it should store its cache.

The configuration we are using is the “Option 2” variant in the /etc/default/varnish file.
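A minimal sketch of such an “Option 2” block, assuming the stock Ubuntu defaults (listen port 6081, management port 6082, and a 256 MB in-memory cache):

# Listen on :6081, management interface on localhost:6082,
# load default.vcl, and keep a 256 MB malloc cache.
DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"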

1. Enable leverage browser caching in Varnish

To enable browser caching for media files, your vcl_backend_response should match the following configuration.

sub vcl_backend_response {
    if (bereq.url ~ "\.(png|gif|jpg|swf|css|js)$") {
        unset beresp.http.set-cookie;
        set beresp.http.cache-control = "max-age=2592000";
    }
}

2. Purge (Clear) Varnish Cache

To clear Varnish’s cache, you can change vcl_recv to match the following configuration:

sub vcl_recv {
    if (req.method == "PURGE") {
        return (purge);
    }
}

After making this change, you can send a curl request from your SSH session in the following format:

curl <domain_name.com> -XPURGE

Here, -XPURGE will send the purge request to the Varnish server.
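In practice you will usually want to restrict who is allowed to purge. A minimal sketch, assuming only localhost should be trusted (the ACL name purgers is our own choice):

# Only clients matching this ACL may purge.
acl purgers {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            # Reject purge requests from untrusted clients.
            return (synth(405, "Not allowed"));
        }
        return (purge);
    }
}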

3. Defining a Different Backend

When working on a big business application, it’s quite possible that we need a different backend. Let’s say we need a different backend for the URLs which start with “/application/”.

Suppose we have that application up and running on port 8000. Now, let’s have a look at default.vcl:

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

We add a new backend:

backend application {
    .host = "127.0.0.1";
    .port = "8000";
}

Now we need to tell Varnish where to send the different URLs. Let’s look at vcl_recv:

sub vcl_recv {
    if (req.url ~ "^/application/") {
        set req.backend = application;
    } else {
        set req.backend = default;
    }
}

It’s quite simple, really. Let’s stop and think about this for a moment. As you can see, you can choose backends based on almost arbitrary request data. Do you want to send mobile devices to a different backend (say, the application backend)?

No problem.

if (req.http.User-Agent ~ "mobile") … should do the trick.

sub vcl_recv {
    if (req.http.User-Agent ~ "mobile") {
        set req.backend = application;
    } else {
        set req.backend = default;
    }
}

4. Grouping Backends (Directors) in Varnish

You can also group several backends into a group of backends. These groups are called directors.

This will give you increased performance and resilience. You can define several backends and group them together in a director:

backend server1 {
    .host = "192.168.0.10";
}

backend server2 {
    .host = "192.168.0.11";
}

Now we create the director:

director director_name round-robin {
    # server1
    {
        .backend = server1;
    }
    # server2
    {
        .backend = server2;
    }
}

This director is a round-robin director. This means the director will distribute the incoming requests on a round-robin basis. There is also a random director which distributes requests in a, you guessed it, random fashion.
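For example, a random director is declared the same way, except that each member also takes a .weight (sticking with the same Varnish 3 style syntax used above):

director director_name random {
    {
        .backend = server1;
        .weight = 1;
    }
    {
        .backend = server2;
        .weight = 1;
    }
}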

5. Health checks

What if one of your servers goes down? Can Varnish direct all the requests to the healthy server? Sure it can. This is where the Health Checks come into play.

Let’s set up a director with two backends and health checks.

First, let’s define the backends:

 
backend server1 {
    .host = "server1.example.com";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

backend server2 {
    .host = "server2.example.com";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 1s;
        .window = 5;
        .threshold = 3;
    }
}

What’s new here is the probe. Varnish will check the health of each backend with a probe. The options are:

  • url: which URL Varnish should request.
  • interval: how often the backend should be polled.
  • timeout: the timeout of the probe.
  • window: Varnish maintains a sliding window of the results; here the window holds the last five polls.
  • threshold: how many of the last .window polls must succeed for the backend to be declared healthy.
  • initial: how many of the probes are considered good when Varnish starts; defaults to the same value as the threshold.

Now we define the director:

director example_director round-robin {
    {
        .backend = server1;
    }
    # server2
    {
        .backend = server2;
    }
}

6. Misbehaving Servers (Caching Even if Apache Goes Down)

A key feature of Varnish is its ability to shield you from misbehaving web- and application servers.

Sometimes it’s possible for the entire site to go down due to any number of causes: a programming error, a database connection failure, or just plain excessive traffic.

In such scenarios, the most likely outcome is that Apache will be overloaded and begin rejecting requests. In those situations, Varnish can save your bacon with the Grace period.

6.1 Grace mode

When several clients request the same page, Varnish sends only one request to the backend and places the other clients on hold while it fetches a single copy from the back end.

If you are serving thousands of hits per second this queue can get huge. Nobody likes to wait so there is a possibility to serve stale content to waiting users.

Grace also helps when our Apache server is down for some reason and we want Varnish to keep serving content. Apache gives Varnish an expiration date for each piece of content it serves. Varnish automatically discards outdated content and retrieves a fresh copy when it hits the expiration time. However, if the web server is down, it’s impossible to retrieve a fresh copy.

In order to do this, we must instruct Varnish to keep the objects in cache beyond their TTL. So, to keep all objects for 30 minutes beyond their TTL use the following VCL:

sub vcl_fetch {
    # Allow items to be stale if needed.
    set beresp.grace = 30m;
}

Varnish still won’t serve the stale objects. In order for Varnish to actually serve a stale object, we must enable this on the request side as well. Let’s say we accept serving objects that are up to 15 seconds past their TTL:

sub vcl_recv {
    # Allow the backend to serve up stale content if it is responding slowly.
    set req.grace = 15s;
}

Both of these settings can be the same, but the setting in vcl_fetch must be at least as long as the setting in vcl_recv.

Think of the vcl_fetch grace setting as “the maximum time Varnish should keep an object”. The setting in vcl_recv, on the other hand, defines when Varnish should use a stale object if it has one.

You might wonder why we should keep the objects in the cache for 30 minutes if we are unable to serve them. Well, if you have enabled health checks, you can check whether the backend is healthy and serve stale content for longer:

sub vcl_recv {
    if (!req.backend.healthy) {
        set req.grace = 5m;
    } else {
        set req.grace = 15s;
    }
}

6.2 Saint mode

Sometimes servers get flaky. They start throwing out random errors. You can instruct Varnish to try to handle this in a more-than-graceful way – enter Saint mode.

Saint mode enables you to discard a certain page from one backend server and either try another server or serve stale content from the cache. Let’s have a look at how this can be enabled in VCL:

sub vcl_fetch {
    if (beresp.status == 500) {
        set beresp.saintmode = 10s;
        return (restart);
    }
    set beresp.grace = 5m;
}

When we set beresp.saintmode to 10 seconds, Varnish will not ask that server for that URL for the next 10 seconds. A blacklist, more or less. A restart is also performed, so if you have other backends capable of serving that content, Varnish will try those. When you are out of backends, Varnish will serve the content from its stale cache.

This can really be a lifesaver.

7. Making Varnish Pass to Apache for Uncached Content

When configuring Varnish to work with an application, we often have some pages that should absolutely never be cached. In those scenarios, you can easily tell Varnish not to cache those URLs by returning a “pass” statement.

sub vcl_recv {
    # Telling Varnish not to cache these paths.
    if (req.url ~ "^/state\.php$" ||
        req.url ~ "^/india/ping$" ||
        req.url ~ "^/admin/access" ||
        req.url ~ "^/info/.*$" ||
        req.url ~ "^/terms/.*$" ||
        req.url ~ "^.*/ajax/.*$") {
        return (pass);
    }
}

Varnish will still act as an intermediary between requests from the outside world and your web server, but the “pass” command ensures that it will always retrieve a fresh copy of the page.

Or let’s say we don’t want to cache pages for mobile devices:

sub vcl_recv {
    if (req.http.User-Agent ~ "mobile") {
        return (pass);
    }
}

Conclusion

Varnish is an amazing and incredibly efficient tool for serving common resources from your site to end users. Besides simply making your site faster, it can also add redundancy to your setup by acting as a backup if the web servers fail.
