What?
This blog post is about speeding up the delivery of content from Alfresco. The example I'll discuss here might not make any noticeable difference to the end user; rather, it will free up resources on the Alfresco server so it can get on with the job of delivering information. This is done by putting a cache in front of the application server running Alfresco.

Background:
Running Alfresco in the Cloud has meant we've had to invest in monitoring solutions for our application that we wouldn't normally have needed for our internal instances - in this case, one of the tools we are using is called AppDynamics. One of the immediate things it showed was that a high percentage of all calls to Alfresco were for static assets - JavaScript files, CSS files, images and so on that are used to build the static parts of the viewed page. By offloading these to a caching layer, the Alfresco application can concentrate on serving the dynamic content to the user - hopefully faster, too.

What to use:
After researching the different tools available for caching static assets, I opted for Varnish: https://www.varnish-cache.org/

Varnish says this about itself on its website:

'Varnish Cache is a web accelerator, sometimes referred to as a HTTP accelerator or a reverse HTTP proxy, that will significantly enhance your web performance and web content delivery. Varnish Cache speeds up a website by storing a copy of the page served by the web server the first time a user visits that page. The next time a user requests the same page, Varnish will serve the copy instead of requesting the page from the web server. This means that your web server needs to handle less traffic and your website's performance and scalability go through the roof. In fact Varnish Cache is often the single most critical piece of software in a web based business.'

To integrate this into our service, I hooked it into the existing HAProxy configuration, which you can read about here: https://www.alfresco.com/blogs/devops/2014/07/16/haproxy-for-alfresco-updated-fror-haproxy-1-5/

The interaction of these two services can be visualised as:

The reason I integrated the two this way was that the HAProxy service already has the knowledge of the various proxy routes needed to run our service, so it was sensible to keep that knowledge in one place and not duplicate it. The end result is that the configuration for Varnish is really simple.

Below is the Varnish config. It listens on 127.0.0.1 so it can't be accessed inappropriately from the network. It caches all static assets, and also doclib thumbnails. It strips cookies off these to ensure that all users share one cache for best performance. It has a health check endpoint so HAProxy can monitor the health of the cache and bypass it if the Varnish service has any issues. It also removes some of the standard headers that Varnish sets - to remove the risk of an information disclosure vulnerability (see https://www.owasp.org/index.php/Information_Leakage) - and then sets a custom header that can be used to determine a cache hit or miss.

Config:

#
# varnish config
# caches all static files (images, js, css, txt, flash)
# but requests from backend dynamic content.
# Note, only static asset urls should end up here anyway.
#
backend default {
    .host = "127.0.0.1";
    .port = "8000";
    .first_byte_timeout = 300s;
}
# what files to cache
sub vcl_recv {
    # Health checking
    if (req.url == "/varnishcheck") {
        error 751 "health check OK!";
    }

    # Grace period (stale content delivery while revalidating)
    set req.grace = 5s;

    # Accept-Encoding header clean-up
    if (req.http.Accept-Encoding) {
        # Use gzip when possible, otherwise use deflate
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # Unknown algorithm, remove the Accept-Encoding header
            unset req.http.Accept-Encoding;
        }

        # Microsoft Internet Explorer 6 is well known to be buggy with compression and css / js
        if (req.url ~ "\.(css|js)" && req.http.User-Agent ~ "MSIE 6") {
            unset req.http.Accept-Encoding;
        }
    }

    # Cache all the cacheable stuff!
    return(lookup);
}
# strip the cookie before the image is inserted into cache
sub vcl_fetch {
    if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
        unset beresp.http.set-cookie;
    }
    if (req.url ~ "/content/thumbnails/") {
        unset beresp.http.set-cookie;
    }
    if (beresp.http.content-type ~ "(text|application)") {
        set beresp.do_gzip = true;
    }
    if (beresp.status == 404) {
        set beresp.ttl = 0s;
        return (hit_for_pass);
    }
    return (deliver);
}
# add response header to see if document was cached
sub vcl_deliver {
    unset resp.http.via;
    unset resp.http.x-varnish;
    if (obj.hits > 0) {
        set resp.http.V-Cache = "HIT";
    } else {
        set resp.http.V-Cache = "MISS";
    }
}

sub vcl_error {
    # Health check
    if (obj.status == 751) {
        set obj.status = 200;
        return (deliver);
    }
}
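If you also want static assets to live in the cache longer than the backend's response headers dictate, a TTL can be pinned in vcl_fetch. The snippet below is only a sketch in the same Varnish 3 syntax - the 24h value is an illustration, not part of our production config. Varnish concatenates same-named subroutines, so a fragment like this can sit alongside the vcl_fetch above:

```
sub vcl_fetch {
    if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
        # Example only: override the backend TTL for static assets
        set beresp.ttl = 24h;
    }
}
```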
To be able to use Varnish, we modified our HAProxy configuration to include a new route for static assets to pass through to Varnish:

## Add a new Frontend for Varnish to connect to
## All this does is send traffic to the share backend
# Front end for Varnish connections
frontend httpvarnish
    bind 127.0.0.1:8000
    acl is_share path_reg ^/share
    use_backend share if is_share

## This bit needs to go in the main Frontend, serving port 443 for example.
    # acl to match on static asset paths, or content types
    acl static_assets path_reg ^/share/-default-/res/.*
    acl static_assets path_end .gif .png .jpg .css .js .swf
    acl static_assets path_reg /content/thumbnails/.*

    # Varnish service check
    acl varnish_available nbsrv(varnish_cache) ge 1

    ## Route traffic to Varnish if the Varnish service check has returned positive and we are serving a static asset
    # Make sure this is the first use_backend in the list
    use_backend varnish_cache if static_assets varnish_available

## Backend for connecting to Varnish
backend varnish_cache
    option redispatch
    cookie JSESSIONID
    # Varnish must report that it's ready to accept traffic
    option httpchk HEAD /varnishcheck
    http-check expect status 200
    # Pass on client IP information
    option forwardfor
    server varnish-1 localhost:6081 cookie share1 check inter 2000
Once this configuration is up and running, when you access your service, take a look at the response headers using your browser's developer tools - you should see a header like this:

v-cache:HIT
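You can also check from the command line. In practice you'd capture the real headers with curl -sI against your own service URL (the hostname below is hypothetical); the snippet parses a canned response so the extraction logic is visible on its own:

```shell
# Real usage (hypothetical host): curl -sI https://alfresco.example.com/share/res/js/share.js
# Here we parse a canned response to show what to look for.
headers='HTTP/1.1 200 OK
Content-Type: application/x-javascript
V-Cache: HIT'

# Print just the V-Cache header value (case-insensitive match on the header name)
printf '%s\n' "$headers" | awk -F': ' 'tolower($1) == "v-cache" { print $2 }'
```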
This shows that the asset is now being served from Varnish and hasn't had to be served by Alfresco.

The last three days' information for one of our 3 web nodes shows:

660140 0.00 2.24 client_req - Client requests received
441094 0.00 1.49 cache_hit - Cache hits
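As a quick sanity check, those two counters imply the following hit ratio (a one-liner using the numbers above):

```shell
# cache_hit / client_req from the varnishstat counters above
awk 'BEGIN { printf "%.1f%% hit ratio\n", 441094 / 660140 * 100 }'
# -> 66.8% hit ratio
```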
So, if all our web nodes served that many cache hits over that time period, we've served ~18,000 cache hits per hour (or ~300 per minute) over those last 3 days. That's quite a lot of load shifted away from the Share/Alfresco service.

Extras:
There are a load of commands that come with Varnish that can be used to monitor the cache. Here are a few of them - the best place to read more about them is the Varnish website listed above.
- varnishd -C -f /etc/varnish/default.vcl #check the Varnish config and print the compiled VCL
- varnishlog -b #show log records for backend traffic
- varnishlog -c #show log records for client traffic
- varnishtop -i txurl #will show you what your backend is being asked for the most
- varnishtop -i rxurl #will show you what URLs are being asked for by the client
- varnishtop -i rxurl -i RxHeader
- varnishtop -i RxHeader -I Accept-Encoding #will show the most popular Accept-Encoding headers the clients are sending you
- varnishtop -i rxurl -i txurl -i TxStatus -i TxResponse
- varnishhist #reads the varnishd(1) shared memory logs and presents a continuously updated histogram showing the distribution of the last N requests by their processing time
- varnishsizes #does the same as varnishhist, except it shows the size of the objects rather than the time taken to complete the request
- varnishstat #Varnish has lots of counters - misses, hits, information about the storage, threads created, deleted objects, just about everything. This command will dump these counters