With that, this post will provide the reader with some insight into what HTTP compression is, why we need it, and how it is implemented on the F5 LTM.
Sail Away...
HTTP is like a ship that carries cargo. The cargo is our data. HTTP ensures its cargo can be identified correctly, unpacked properly, and moved quickly and efficiently. Sometimes, to save bandwidth or to speed up HTTP transactions, compression is performed by the web server. Not all data is a good candidate for compression, however. Data that is already compressed, such as music or video files, is not suitable, whereas raw HTML & CSS compress well.
Encoding
HTTP uses labelled entities to describe the meaning of the data and carry the content. An HTTP request or response message contains a number of headers, or entities. When a web client (e.g. a browser) makes an HTTP request to a web server, it sends across a specific entity called Accept-Encoding in the header, amongst many others (an example request is shown after the list below). This entity tells the web server which compression algorithms the client supports. There are a number of compression algorithms in use:
- GNU Zip (GZIP): Described in RFC1952
- DEFLATE: Described in RFC1951
- Compress: Unix shell compression (Wikipedia)
- SDCH: Google compression algorithm (see White Paper)
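As an illustration (the host and exact header values here are hypothetical), a request advertising the client's supported algorithms might look like this:

GET /index.php HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate, sdch
Connection: keep-alive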
Looking at the example request you can see that the client has informed the server it supports three algorithms. If a client does not indicate which algorithms it supports, the server will assume it can support any, the equivalent of Accept-Encoding: *. In fact, taking this one step further, the Accept-Encoding entity can take the following forms:
Accept-Encoding: gzip, deflate (Both algorithms equally preferred)
Accept-Encoding: * (Accept any algorithm)
Accept-Encoding: gzip;q=0.5, deflate;q=0.8 (Deflate preferred, followed by GZIP)
Accept-Encoding: gzip;q=0.5, deflate;q=0.2, *;q=0 (GZIP preferred, followed by deflate, nothing else accepted)
The HTTP/1.1 specification defines an optional parameter called the quality value. This value indicates to the server the relative importance ("weight") of the parameter, where 0 is the minimum (least preferred) and 1 the maximum (most preferred). If a parameter has a quality value of 0, then content with this parameter is 'not acceptable' to the client. In the first example above, since neither algorithm has an explicitly defined quality value, both default to 1 and are equally preferred by the client.
The question now is, which one should the server choose when compressing the response?
When the web server generates the response message it encodes the message. An encoded message keeps its original Content-Type; this is required to describe the underlying format so the client can properly display the content once it has been decoded. The server also adds a Content-Encoding header indicating the algorithm used to compress the data; this tells the client how to decode, or decompress, the contents. The Content-Length header now represents the length of the encoded (compressed) body, not the original size.
Putting this all together looks something like this:
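(The exchange below is illustrative - the host, URI and sizes are made up purely to show the headers in play.)

GET /index.php HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 3415

[compressed body]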
F5 HTTP Compression
If there is one thing that can be said about F5, it's that they love a profile. The BIG-IP system allows you to offload the HTTP compression task from the back-end web server through the use of an HTTP Compression Profile, which then needs to be applied to a virtual server. In short, the system reads the Accept-Encoding header of the incoming client request to determine the preferred encoding method, then removes the Accept-Encoding header before sending the request to the pool member. When the system receives the response from the pool member, it compresses the data using the algorithm set in the profile and inserts the appropriate Content-Encoding header.
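As a rough sketch, creating a profile and attaching it to an existing virtual server from tmsh might look like the following (the virtual server name VS_HTTP is a placeholder; the profile name matches the one used later in this post):

# create ltm profile http-compression HTTP-COMPRESSION defaults-from httpcompression
# modify ltm virtual VS_HTTP profiles add { HTTP-COMPRESSION }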
Due to the number of settings available, the HTTP Compression Profile can be a little daunting at first, as is the case with many of the LTM's profiles. F5 do a good job of explaining what each of the settings actually does, and all I could do here is repeat it, so I'd recommend heading over to this page to find out.
Some Checks
When considering HTTP compression for your network it is also prudent to first check your BIG-IP system's capability. First off, check what limits your license allows using the following command:
show /sys license detail | grep perf_http_compression_Mbps
In my lab I have an unlimited license, however, on our production boxes there is a limit:
Lab:
# show /sys license detail | grep perf_http_compression_Mbps
perf_http_compression_Mbps [unlimited]
Production:
# show /sys license detail | grep perf_http_compression_Mbps
perf_http_compression_Mbps [100]
Secondly, check whether your system supports hardware compression, which will obviously help to increase performance. The BIG-IP system also allows you to set different 'compression strategies'; this CLI-only setting lets you define whether the system should use hardware or software resources when compressing (see this Knowledge Base article). The absence of any output from the command below tells you the system does not support hardware compression. Our production system, however, does:
# show /sys license detail | grep "HTTP Hardware Compression"
HTTP Hardware Compression
The Test
For the purposes of this demonstration I'll be using the following setup:
To ensure the LTM is actually doing something, I'll first perform a transaction without the HTTP compression profile applied. Here the total length of the unencoded body is 22,215 bytes. The profile I'll then apply is shown below:
ltm profile http-compression HTTP-COMPRESSION {
allow-http-10 disabled
app-service none
browser-workarounds disabled
buffer-size 4096
content-type-exclude none
content-type-include { text/html }
cpu-saver enabled
cpu-saver-high 90
cpu-saver-low 75
defaults-from httpcompression
gzip-level 6
gzip-memory-level 8k
gzip-window-size 16k
keep-accept-encoding disabled
method-prefer gzip
min-size 1024
uri-exclude none
uri-include { index.php }
vary-header enabled
}
Note the three settings listed below - these are the custom values I have applied to the profile, which has automatically inherited all other settings from its parent profile ('httpcompression'):
- content-type-include { text/html } - only compress HTML body content
- gzip-level 6 - sets the degree to which the system compresses the content, where three values are possible: 1 (least compression but fastest), 6 (optimal in terms of compression & speed) and 9 (most compression but slowest)
- uri-include { index.php } - only compress content if it is from this URI
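For reference, a sketch of applying those three overrides from tmsh (assuming the profile created earlier) might be:

# modify ltm profile http-compression HTTP-COMPRESSION content-type-include replace-all-with { text/html } gzip-level 6 uri-include replace-all-with { index.php }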
The moment of truth...
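A quick way to verify the result from the command line is curl; a minimal check (the hostname is a placeholder) might look something like this:

# curl -s -o /dev/null -w "%{size_download}\n" http://www.example.com/index.php
# curl -s -o /dev/null -w "%{size_download}\n" -H "Accept-Encoding: gzip" http://www.example.com/index.php

The first request should report the full 22,215 bytes; the second should report a noticeably smaller figure, and the response itself should carry a Content-Encoding: gzip header.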
You'll need to play around with the settings on your network to achieve the optimal configuration for your requirements. For example, does your network still use HTTP/1.0? If so, you would need to set the allow-http-10 setting to enabled. Do you need to modify the data compression strategy to make better use of hardware compression? If so, this must be set at the command line. What type of content do you actually need to compress? My simple example demonstrates HTML compression, but you can be more granular and even specify regular expressions. Some examples of other content selections include:
- application/(xml|javascript|json) - Compress XML, JavaScript or JSON content
- .*\.pdf - Compress content from any URI ending in .pdf
- text/css - Compress CSS files
Most of the above are examples of MIME types (the PDF example is a URI pattern instead). Browsers often use the MIME type to determine the default action to take when a resource is fetched.
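As a sketch, adding further MIME types to the earlier profile from tmsh might look like this (the regular expression is quoted so it is treated as a single value):

# modify ltm profile http-compression HTTP-COMPRESSION content-type-include add { text/css "application/(xml|javascript|json)" }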
Summary
HTTP compression can clearly provide significant benefits to your web application's performance, and the F5 HTTP Compression feature helps to alleviate pressure on your back-end servers and improve performance further. That said, as with any feature of any system, turning it on just because it is there is not the way to do things. Approach it with a healthy degree of caution and understanding, and always test before deploying into production. Thanks for reading.