Friday 21 May 2021

Logging

Local Logging:-

• Log messages provide a regular record of the events happening on the system

• Standard UNIX logging using syslog-ng

• Local syslog files are stored in the /var/log/ directory

• Uses facility and severity levels to categorize system/module messages

Remote Logging:-

• Send messages to an external tool such as a syslog server, Splunk, or ArcSight

• Syslog – legacy remote logging, listening on UDP 514

• High Speed Logging (HSL) – publishes log messages to destinations using filtering criteria

Configuration:-

 • System ► Logging 

• The local and remote logging configuration file is /etc/syslog-ng/syslog-ng.conf
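The facility and severity levels mentioned above combine into the numeric priority that prefixes each syslog message on the wire (including legacy remote syslog over UDP 514). A minimal sketch of that calculation; the facility/severity tables here are the standard syslog ones, not BIG-IP-specific:

```python
# Standard syslog facility and severity codes (RFC 3164); the <PRI>
# value that prefixes each syslog message is facility * 8 + severity.
FACILITIES = {"kern": 0, "user": 1, "daemon": 3,
              "local0": 16, "local1": 17, "local2": 18}
SEVERITIES = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
              "warning": 4, "notice": 5, "info": 6, "debug": 7}

def syslog_pri(facility: str, severity: str) -> int:
    """Return the <PRI> value carried at the start of a syslog message."""
    return FACILITIES[facility] * 8 + SEVERITIES[severity]

print(syslog_pri("local0", "info"))  # 16 * 8 + 6 = 134
```

So a message tagged local0.info goes out as `<134>...`, which is how the receiving syslog server recovers both the facility and the severity.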





Simple Configuration File (SCF):-


• A flat text file that contains a series of TMSH commands

• Used to easily replicate the configuration

• Only available in TMSH (no GUI)

• Contains configuration from bigip.conf, bigip_base.conf, bigip_user.conf, and bigip_script.conf

• tmsh save sys config file [filename]


BIG-IP factory default configuration

• tmsh load sys config default

• Retains the management IP and the assigned root and administrator passwords

• /defaults/defaults.scf – loads factory default settings



About UCS

User Configuration Set (UCS):-

• Supports compression and encryption

• Option to include or exclude private keys

• Stored in /var/local/ucs by default

• Should be stored securely off-box


UCS archive file contents:-

• All BIG-IP-specific configuration files

 • BIG-IP product licenses 

• User accounts and password information

 • Domain Name System (DNS) zone files and the ZoneRunner configuration

• Secure Socket Layer (SSL) certificates and keys

BIG-IP GUI

Access Options:

• HTTPS to the Management IP or a Self IP

• Can be restricted using port lockdown, user restrictions, or packet filters

• Accessible by the admin user by default


Left Pane / Side Bar:

• Modules, Submodules, Components

• Some modules are only available when activated

• Tabs – Main, Help and About


Main Pane / Central Pane:

 • Create and Edit Configuration 

• Enable Features and Modules 

• Monitoring Results

 • Tabs are available to jump to other components

Management Interface:-

• Default IP 192.168.1.245/24 for BIG-IP hardware

• Default IP 192.168.1.246/24 for VIPRION

• DHCP client for Virtual Edition

• Accessible via HTTPS and SSH

• Default HTTPS credentials – admin/admin

• Default SSH credentials – root/default

• Filtering options

About TMSH:-

 TMSH (TMOS Shell) Hierarchical Structure 

• Root ► modules ► sub-modules or components 

• Modules – net, sys, ltm

 • Sub-modules – monitor, profile 


TMSH common commands

 • show 

• list 

• create 

• modify

 • delete

 • save 

• exit 

• quit

BIG-IP Stored Configuration Files:-

• /config/bigip.conf – virtual servers, pools, SNATs, monitors, profiles, etc.

• /config/bigip_base.conf – VLANs, interfaces, Self IPs, device groups, etc.

• /config/BigDB.dat – system settings, hostname, HA settings, etc.

• /config/bigip_user.conf – user account configuration

• /config/profile_base.conf – system-defined profile objects

Common services running on BIG-IP

 


BIG-IP Troubleshooting: -

1. From the GUI, it seems that your BIG-IP is no longer saving any log messages. What do you need to verify, and how?

2. You are unable to access the BIG-IP GUI but are still able to SSH. You need to generate and download a qkview snapshot for a specific F5 support case.

3. You found that a single server is over-utilized. The Network Map shows all 3 pool members online, and round robin load balancing is configured. What could be the problem?

4. You are attempting to delete http_pool but are always unsuccessful. What is the possible reason?

5. The BIG-IP is unable to reach ihealth.f5.com to upload the qkview file automatically. What troubleshooting steps do you need to take?
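Question 3 turns on how round robin should behave: with pure round robin, requests are handed to pool members strictly in turn, so one over-utilized server usually points at something else (for example persistence or long-lived connections keeping clients pinned to one member). A minimal sketch of the expected distribution, with hypothetical member addresses:

```python
from itertools import cycle
from collections import Counter

# Hypothetical pool members; the addresses are illustrative only.
members = ["10.1.1.1:80", "10.1.1.2:80", "10.1.1.3:80"]
picker = cycle(members)  # pure round robin: hand out members in turn

# Simulate 9 client requests and count where each one lands.
assignments = Counter(next(picker) for _ in range(9))
print(assignments)  # every member receives exactly 3 of the 9 requests
```

If observed traffic deviates sharply from this even split while all members are online, the load-balancing method itself is rarely the culprit.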

 

Common Issues and Best Practices:-

• Use the right health monitors

• Assign health monitors to the node default and to pools only

• Always check the Network Map

• Troubleshoot from the CLI – ping, tcpdump, df, bigtop, dig

• Use iHealth for diagnosis

• Open an F5 Support case if needed

 

HTTP Troubleshooting:

1. A client is unable to access the web application. Pool Member 1 is marked Available (green circle).

2. A client is unable to access the web application. It is confirmed that the Virtual Server, pool members, and nodes are marked Available (green circle).

3. Clients can access the web application successfully via the BIG-IP. The BIG-IP is receiving frequent requests to a site that contains a large amount of static content such as CSS files, images, and JavaScript. Optimization is needed as the number of clients continuously increases. What do you need to enable?

 

HTTP Caching / Web Acceleration:-

• A collection of HTTP objects stored in the BIG-IP system's memory

• Subsequent connections can reuse cached objects to reduce traffic load on the origin web servers

• Reduces the need to send frequent requests for the same object and eliminates the need to send full responses

• Cacheable content types include 200, 203, 206, 300, 301, and 410 HTTP responses, CSS files, JavaScript, and images

HTTP Compression (aka Content Encoding):-

• Client sends an HTTP request; BIG-IP reads the Accept-Encoding header, removes it, and passes the request to the server

• The server receives the request and, with no Accept-Encoding header present, does not compress the response body

• BIG-IP receives the server response, compresses it, inserts a Content-Encoding header (gzip or deflate), and sends the compressed data back to the client

• Compresses and reduces the size of the HTTP response body sent from the BIG-IP to the client

Note: Both caching and compression are profiles configured under the Acceleration module, and both require an HTTP profile to be enabled on the Virtual Server
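The compression flow can be sketched with Python's standard gzip module. The `headers` dictionary is a simplified stand-in for the HTTP headers the BIG-IP rewrites; the point is that the body shrinks in transit and the Content-Encoding header tells the client how to decode it.

```python
import gzip

# Plain response body as the server would send it (uncompressed).
body = b"<html>" + b"static content " * 200 + b"</html>"

# BIG-IP side: compress the response and tag it with Content-Encoding.
compressed = gzip.compress(body)
headers = {"Content-Encoding": "gzip"}

# Client side: the Content-Encoding header says how to decode the body.
if headers["Content-Encoding"] == "gzip":
    restored = gzip.decompress(compressed)

print(len(body), "->", len(compressed), "bytes")
assert restored == body  # lossless round trip
```

Repetitive static content like this compresses very well, which is why compression pays off most for text assets (HTML, CSS, JavaScript) rather than already-compressed images.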

 

HTTP Status Codes:-

• A three-digit integer seen in the response status line

• The first digit identifies the general category of the response

 

HTTP/1.0 200 OK             HTTP/1.0 404 Not Found

 

General Category of Responses:

• 1XX indicates an informational message only

• 2XX indicates success

• 3XX redirects the client to another URL

• 4XX indicates an error on the client’s part

• 5XX indicates an error on the server’s part
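The first-digit rule above can be captured in a few lines:

```python
def status_class(code: int) -> str:
    """Map an HTTP status code to its general category by first digit."""
    classes = {1: "Informational", 2: "Success", 3: "Redirection",
               4: "Client Error", 5: "Server Error"}
    return classes.get(code // 100, "Unknown")

print(status_class(200))  # Success
print(status_class(302))  # Redirection
print(status_class(404))  # Client Error
print(status_class(502))  # Server Error
```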

 

• HTTP 302 Redirect

• HTTP 404 Not Found

• HTTP 401 Unauthorized

• HTTP 502 Bad Gateway

 

Common Layer 2 Issues and Best Practices :-

1. The BIG-IP is unable to reach all three nodes – 172.16.10.1, 172.16.20.2, and 172.16.30.3. Each node is in a different VLAN. The physical connection from the BIG-IP to the switch is working properly.

2. The BIG-IP is able to reach all nodes except 172.16.20.2. Each node is in a different VLAN. The physical connection from the BIG-IP to the switch is working properly.

3. You verified that both the PC and server MAC addresses have entries in the MAC table, but you don’t see either entry in the ARP table. What causes this issue?

4. You have just set up a BIG-IP Active/Standby pair. What feature do you need to enable to optimize BIG-IP high availability?

Common Layer 2 Issues and Best Practices:-

• VLAN misconfiguration

• 802.1Q/tagging misconfiguration

• Verify ARP resolution on the BIG-IP and neighboring devices

• Verify interface status and configuration

• Enable MAC Masquerading in an HA pair

• Documentation

Layer 1 Connectivity Issues:-

1. Failover is not working on the BIG-IP Active/Standby pair. Based on the output of show net interface in tmsh, what seems to be the problem?

2. You experience lost connections and decide to run show net interface in tmsh. You have verified that the switch is up and running.

3. Clients are experiencing latency when connecting to the HTTP application. Based on the output below, what may be causing the latency?

4. You experience slowness when PCs and servers communicate. Upon checking the status of interfaces e1 and e4, you see the number of collisions continuously incrementing. What may cause these collisions?

Common Layer 1 Issues:-

• Cable specifications

• Bad cable

• Incorrect media (SFP+, optics)

• Speed settings

• Duplex settings

• Connected device unavailable

F5 HA KEY CONCEPTS

High availability
(HA) ensures that the server pool remains ready for user requests when your primary load balancer is down: traffic is redirected to your backup/secondary load balancer with minimal downtime that is not noticeable by users.

Redundant devices
A redundant system is a type of BIG-IP system configuration that allows traffic processing to continue if a BIG-IP system becomes unavailable. A BIG-IP redundant system consists of two identically configured BIG-IP units. When an event occurs that prevents one of the BIG-IP units from processing network traffic, the peer unit in the redundant system immediately begins processing that traffic, and users experience no interruption in service.

Failover
Failover is a process that occurs when one system in a redundant system becomes unavailable, thereby causing the peer unit to assume the processing of traffic originally targeted for the unavailable unit. An essential element to making failover successful is a feature called ConfigSync which is a process where you replicate one unit’s main configuration file on the peer unit.

Device Trust domains
To provide failover or configuration sync, BIG-IP systems on the network must be in the same trust domain. The trust relationship between BIG-IP devices on the network is established through certificate-based authentication. BIG-IP devices in a trust domain can synchronize and failover their BIG-IP configuration data, and exchange status messages continuously.

Device groups
A device group is a collection of BIG-IP systems that have established a device trust and share data with each other. There are two device groups types:
sync-only and sync-failover.
A sync-only device group synchronizes only configuration data, such as policy data, but it does not synchronize failover objects. 
A sync-failover device group synchronizes configuration data and traffic group data for failover purposes. Use this configuration to fully synchronize two BIG-IP systems.

Traffic groups
A traffic group is a collection of related configuration objects that run on a BIG-IP system. Together, these objects process a particular type of traffic.


HTTP response status codes

HTTP response status codes indicate whether or not a specific HTTP request has been successfully completed. The Status-Code element in a server response is a 3-digit integer, where the first digit defines the class of response. Responses are classified into five classes:

• 1XX – Informational

• 2XX – Success

• 3XX – Redirection

• 4XX – Client Error

• 5XX – Server Error

SSL Termination

This article treats SSL termination and SSL offloading as the same term.


Is SSL Termination the same as SSL Bridging or SSL Offloading? 

SSL termination refers to the process of decrypting encrypted traffic before passing it along to a web server.

What is SSL Termination?

Approximately 90% of web pages are now encrypted with the SSL (Secure Sockets Layer) protocol and its modern, more secure replacement TLS (Transport Layer Security). This is a positive development in terms of security because it prevents attackers from stealing or tampering with data exchanged between a web browser and a web or application server. But, decrypting all that encrypted traffic takes a lot of computational power—and the more encrypted pages your server needs to decrypt, the larger the burden.

SSL termination (or SSL Offloading) is the process of decrypting this encrypted traffic. Instead of relying upon the web server to do this computationally intensive work, you can use SSL termination to reduce the load on your servers, speed up the process, and allow the web server to focus on its core responsibility of delivering web content.

Why is SSL Termination Important?

Many security inspection devices have trouble scaling to handle the tidal wave of malicious traffic, much less decrypting, inspecting, and then re-encrypting it again. Using an ADC or dedicated SSL termination device to decrypt encrypted traffic ensures that your security devices can focus on the work they were built to do.

In addition, by using SSL termination, you can empower your web or app servers to manage many connections at one time, while simplifying complexity and eliminating performance degradation. SSL termination is particularly useful when used with clusters of SSL VPNs, because it greatly increases the number of connections a cluster can handle.

Offloading SSL or TLS traffic to an ADC or dedicated device enables you to boost the performance of your web applications while ensuring that encrypted traffic remains secure.

How Does SSL Termination Work?

SSL termination works by intercepting the encrypted traffic before it hits your servers, then decrypting and analyzing that traffic on an Application Delivery Controller (ADC) or dedicated SSL termination device instead of the app server. An ADC is much better equipped to handle the demanding task of decrypting multiple SSL connections, leaving the server free to work on application processing.

How Does F5 handle SSL Termination?

BIG-IP Local Traffic Manager (available in hardware or software) offers efficient and easy-to-implement SSL termination/offload that relieves web servers of the processing burden of decrypting and re-encrypting traffic while improving application performance.

Alternatively, SSL Orchestrator delivers dynamic service chaining and policy-based traffic steering, applying context-based intelligence to encrypted traffic handling to allow you to intelligently manage the flow of encrypted traffic across your entire security chain, ensuring optimal availability.


Most Common SSL Methods for LTM: SSL Offload, SSL Pass-Through and Full SSL Proxy

Hi friends, I just want to make sure I am clear on the concepts. I was studying a bit and found myself in doubt, because one article says one thing and another says the opposite.


In the first article, it says that SSL Bridging and SSL Termination are the same term.

Description

BIG-IP is built to handle SSL traffic in a load balancing scenario and meet most security requirements effectively. The 3 common SSL configurations that can be set up on an LTM device are:

  • SSL Offloading
  • SSL Passthrough
  • Full SSL Proxy / SSL Re-Encryption / SSL Bridging / SSL Termination

Environment

  • Configuration objects and settings: Virtual Server, Client SSL and Server SSL profiles
  • BIG-IP, LTM

Additional Information

A typical load balancing infrastructure setup would be Client ---> BIG-IP VIP ---> Servers hosting applications, i.e. client traffic is directed to a load balancer like BIG-IP, which in turn (using a load-balancing algorithm) sends the traffic to an appropriate server.

SSL Offloading - In this method the client traffic to the BIG-IP is sent encrypted. Instead of the server decrypting and re-encrypting the traffic, the BIG-IP handles that part. The client traffic is decrypted by the BIG-IP and the decrypted traffic is sent to the server. The return communication from the server to the client is encrypted by the BIG-IP and sent back to the client, thus sparing the server the additional load of encryption and decryption. All the server resources can now be fully utilized to serve the application content or any other purpose they are built for.


Note:

  1. The communication between the BIG-IP and the server is in clear text.
  2. Servers are set up to listen on unsecure ports, e.g. port 80.
  3. Since the BIG-IP decrypts the HTTP traffic, it has the ability to read the content (headers, text, cookies, etc.) and all the persistence options can be applied (Source Address, Destination Address, Cookie, SSL, SIP, Universal, MSRDP).
SSL Passthrough - As the name suggests, the BIG-IP just passes the traffic from client to servers, absolving itself of any SSL-related workload. The SSL handshake and connection are established directly between the client and the server; the BIG-IP simply forwards the encrypted traffic. Usually this setup is used if the applications being served cannot sit behind an SSL proxy or cannot consume decrypted traffic.

Note -
  1. Since it is just pass-through, the LTM cannot read the headers, which introduces limitations on persistence. Only non-SSL information in the packet can be used to maintain persistence, such as source IP address and destination IP address.
     
SSL Full Proxy - This method goes by a few names, such as SSL Re-Encryption, SSL Bridging, and SSL Termination. In this method the BIG-IP re-encrypts the traffic before sending it to the servers: the client sends encrypted traffic to the BIG-IP, the BIG-IP decrypts it, and before sending it to the servers or pool members re-encrypts it again. This method is generally used to satisfy the requirement that traffic be encrypted between the LTM and the servers as well. This requirement might be put in place for additional security or to prevent intrusion from within the network. When this method is used, the servers also have to decrypt and encrypt the traffic.


Note –
  1. The communication between the LTM and the server is secure.
  2. Servers are set up to listen on secure ports, e.g. port 443.
  3. Since the LTM initially decrypts the HTTP traffic, it still has the ability to read the content (headers, text, cookies, etc.) and all the persistence options can be applied, the same as with SSL Offloading (Source Address, Destination Address, Cookie, SSL, SIP, Universal, MSRDP).
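The persistence options listed in the notes all answer the same question: which pool member should repeat traffic from a given client go to? A minimal sketch of source-address persistence; the member addresses and the bare-dict persistence table are illustrative simplifications, not the BIG-IP implementation (which also handles timeouts and mirroring):

```python
# Simplified source-address persistence: remember which pool member a
# client IP was first sent to, and reuse that member for later connections.
members = ["192.168.10.1:443", "192.168.10.2:443"]  # hypothetical pool
persistence_table: dict[str, str] = {}
next_index = 0

def pick_member(client_ip: str) -> str:
    global next_index
    if client_ip not in persistence_table:
        # First connection from this client: load balance round robin.
        persistence_table[client_ip] = members[next_index % len(members)]
        next_index += 1
    # Repeat client: always return the same member.
    return persistence_table[client_ip]

assert pick_member("203.0.113.10") == pick_member("203.0.113.10")
```

Note that this kind of table only works when the BIG-IP can see something stable about the client; with SSL passthrough, only such non-SSL packet fields (source/destination IP) are available.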

Let’s talk about proxy: forward, reverse, half and full proxy

 

What is a proxy?

Proxies are hardware or software solutions that sit between the client and the server and their main goal is to retrieve data out of the Internet on behalf of a user. The most frequent use of the term proxy is to make web browsing anonymous. That’s because proxies sit between your browser and your desired destination and proxy the connection. This means that you connect only to the proxy server and the proxy server connects to the web server, and neither you nor the web server has any awareness of each other.

The proxy can perform some of the following functions:

  • Access control: proxy server administrators may or may not allow certain users to access the Internet through restrictions on their own login or IP addresses, providing the environment with an additional layer of protection.
  • Content filtering: being in the middle of the road, the server also allows, or does not allow, access to certain sites. Among the rules that can be applied are those for blocking specific websites, or even entire categories.
  • Caching: the proxy, after accessing a page, stores the content of the page in its system. After that, other requests to this same page will not have to go to the Internet, because the content is already stored in the proxy’s memory.
  • Privacy: Perhaps what we all associate the term “proxy” with, is anonymous and protected Internet browsing. This is because a proxy server can block scripts, cookies, and other objects that are hosted on websites. In addition, the web server you consult will not know your IP address but the proxy server’s, making your browsing more secure.

Proxies are not all the same. There are different types of proxies:

  • Forward Proxy
  • Reverse Proxy
  • Half Proxy
  • Full Proxy

Forward Proxy

Forward proxies are those located between two networks, usually a private internal network and a public network such as the Internet. These are often referred to as “mega-proxies” because they manage such high volumes of traffic. Forward proxies are generally HTTP (web) proxies that provide a number of services but are primarily focused on web content filtering and caching.

The diagram below shows an example topology of the location of the Forward Proxy (located between the internal network and the Internet).

forward proxy

When one of the clients within the internal network accesses a web server or an application hosted on a remote server, its request first passes through the proxy. Depending on the proxy configuration, this request may be accepted or denied. Let’s assume it is accepted. The proxy then sends the request to the remote servers and from the point of view of the web servers or applications, it is the proxy server that issued the request. So, when the web server or application responds, it will send the response to the proxy server. Once the proxy server receives the response, it forwards it to the client that made the request on the internal network.

Reverse Proxy

A reverse proxy is a server located between a public network (e.g. Internet) and one or more web or application servers. They process requests for applications and content coming in from the public Internet to the internal, private network. Reverse proxies are typically implemented to help increase security, performance, and reliability.

Load balancers (application delivery controllers) are a great example of reverse proxies. A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across multiple servers, increasing capacity (concurrent users) and application reliability.

Reverse proxy

Normally, all requests from the public network would go directly to the web and application servers (W&A servers), and the servers would send responses directly back. With a reverse proxy, all requests from the public network go to the reverse proxy instead, and it is the proxy that sends requests to and receives responses from the W&A servers. The reverse proxy then passes the appropriate responses along to the clients.

The main benefits of a reverse proxy are listed below.

  • Load Balancing: Some high-traffic web and application servers need to handle hundreds of thousands (some even millions) of concurrent user or customer requests and deliver the information quickly and reliably. In order for these high-demand applications to be delivered quickly, they usually need to be located on a pool of servers. This is where load balancers play an important role. Load balancing is the capacity of some devices to distribute network traffic or concurrent connections to different servers in a way that maximizes the capacity and speed of application delivery and minimizes server overhead.
  • Protection from attacks: With a reverse proxy between users and servers, it is more difficult for attackers to perform an attack (e.g. DDoS) against servers, as they do not expose their IP address or service port.
  • Global Server Load Balancing (GSLB): It refers to the intelligent distribution of traffic across closest server resources located in multiple geographies. This decreases the distances that requests and responses need to travel, minimizing load times.
  • Caching: Caching is very useful to improve the user experience when browsing through recurring web resources. It refers to the local storage that a server can keep of information requested over and over again by one or more clients (web browsers). By collecting that data locally on a server close to the requesting client, it can be delivered to the client much faster than if it had to be retrieved again from the backend server.
  • SSL encryption: The reverse proxy server can decrypt incoming connections and encrypt outgoing connections once it has processed them. This saves resource consumption of the backend servers.

Half Proxy

Half proxy refers to how a proxy server handles connections, regardless of whether it is a forward or reverse proxy. Its use can be described in two different ways. The first concerns how connections are handled: incoming requests are proxied by the device but responses do not go through it, or vice versa, incoming connections go directly to the servers but responses go through the proxy (this latter form is very rare; almost all half-proxies fall into the category of reverse proxies). It is called a half proxy because connections are proxied in one direction but not in the other. This deployment is very useful when dealing with streaming application traffic.

Half proxy

The second way in which the use of a half proxy can be described is known as delayed binding. This gives the proxy the ability to examine incoming connections, process them, and determine their destination. Once the proxy knows where to send requests, it ties the connection between client and server so that only the initial requests and the three-way handshake process pass through the proxy; subsequent connections would pass directly without interception from the proxy.

Half proxy

Full Proxy

A full proxy also refers to how connections are handled. The proxy server separates the connections into two parts. One between the client and itself, and one between itself and the servers. For this reason, the proxy server configured as a full proxy must understand the network protocols very well, and therefore implement them, since it is the originator and endpoint for these protocols. The latter is a significant difference between a full proxy architecture and a packet-based architecture.

A perfect example of an appliance that acts as a full proxy is F5’s BIG-IP system. A BIG-IP is a default-deny system that can be configured as a full proxy and can have its own TCP connection behavior (buffering, retransmissions, and TCP options). This means that connections between the client and the proxy can be partially or totally different from the connections between the proxy and the server.

Full proxy
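The "two separate connections" idea can be sketched with plain TCP sockets: the proxy accepts the client connection, then opens its own, independent connection to the backend. This is a toy single-request relay under simplified assumptions (one client, one message, loopback addresses), not BIG-IP's implementation:

```python
import socket
import threading

def start_backend() -> int:
    """Toy backend that answers one message, standing in for a pool member."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(b"backend saw: " + data)
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def start_full_proxy(backend_port: int) -> int:
    """Accept the client connection, then open a SEPARATE connection to the
    backend -- the client and the backend never share a TCP connection."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def proxy():
        client, _ = srv.accept()
        request = client.recv(1024)
        upstream = socket.create_connection(("127.0.0.1", backend_port))
        upstream.sendall(request)            # proxy's own connection to the server
        client.sendall(upstream.recv(1024))  # relay the response back
        upstream.close()
        client.close()
        srv.close()

    threading.Thread(target=proxy, daemon=True).start()
    return srv.getsockname()[1]

backend_port = start_backend()
proxy_port = start_full_proxy(backend_port)

with socket.create_connection(("127.0.0.1", proxy_port)) as client:
    client.sendall(b"hello")
    response = client.recv(1024)

print(response)  # the reply traveled over two independent TCP connections
```

Because the client-side and server-side sockets are distinct objects, each could use different TCP options, buffering, or even protocols, which is exactly the property the full-proxy architecture exploits.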

What are the different SSL methods for BIG-IP LTM?

What is SSL?

The HTTP protocol is vulnerable to interception by intruders since the data transferred from the web browser to the web server or between two systems is not encrypted but is transferred as plain text. In other words, HTTP protocol is not secure. However, the need has arisen to protect the information transferred between the user’s browser and web servers. Therefore, a more secure version of the HTTP protocol known as HTTPS was implemented, which is nothing more than a combination of HTTP + SSL/TLS. HTTPS ensures that any information transmitted over the network is encrypted and cannot be accessed by anyone.

SSL is the acronym for Secure Sockets Layer, the standard technology for keeping an Internet connection secure, as well as for protecting any confidential information sent between two systems. SSL is a higher-layer security protocol, working at the application layer. By operating at the application layer, SSL can provide the highly granular policy and access control required for secure remote access. SSL accomplishes this by ensuring that all data that is transferred between users and websites or between two systems is unreadable. It uses encryption algorithms to encrypt the data being transmitted and prevent anyone from reading it as it is sent over the connection.

In a couple of minutes, I will explain the most common SSL methods used by a BIG-IP LTM system. But first, I want to explain to you what a BIG-IP system is since it is not as well known as it should be.

What is a BIG-IP system?

A BIG-IP system is a set of application delivery products that work together to ensure high availability, performance enhancement, application security, and access control. One of the main functions of the BIG-IP system is to forward different types of protocol and application traffic to a target server. The BIG-IP system achieves this through its LTM (Local Traffic Manager) module, which can forward traffic directly to a pool of servers using a load-balancing method, or send traffic to a next-hop router, a group of routers, or directly to a selected node in the network.

Other modules available in the BIG-IP system provide critical functions such as applying security policies to network traffic, accelerating HTTP connections, and optimizing connections over a WAN network.

BIG-IP Local Traffic Manager (LTM) transforms your network into a flexible infrastructure for application delivery. It acts as a full proxy between users and application servers, creating a layer of abstraction to secure, optimize, and load balance application traffic. This gives you the flexibility and control to add applications and servers easily, eliminate downtime, improve application performance, and meet your security requirements.

SSL Methods for BIG-IP LTM

BIG-IP is built to handle SSL traffic in a load balancing scenario and meet most of the security requirements effectively. The 3 common SSL configurations that can be set up on LTM device are:

  • SSL Offloading
  • SSL Passthrough
  • SSL Bridging

SSL Offloading

Let’s consider a scenario where you have a client and a web or application server, in typical client-server architecture. The connection is established over HTTPS (with SSL encryption). What you have probably never thought about is the level of resource consumption that these servers have to do to decrypt the requests from the clients, process them and re-encrypt them to send them back to the clients.

Using the SSL Offloading method, the BIG-IP system handles the decryption and re-encryption process. In this method, the client sends the traffic to BIG-IP encrypted. So the client traffic is decrypted by the BIG-IP and the decrypted traffic is sent to the server. The return communication from the server to the client is encrypted by the BIG-IP and sent back to the client.  This saves the server the additional overhead of encryption and decryption. Now, all server resources can be fully utilized for other functions such as serving application content or other functions for which they were primarily intended.

To summarize, SSL offloading on load balancers such as BIG-IP LTM is their capability to relieve a web server of the processing burden of encrypting and decrypting traffic.



SSL Passthrough

When load balancing encrypted web traffic, one of the main configuration choices is SSL passthrough. SSL passthrough is the action of passing data through a load balancer to a server without decrypting it. SSL passthrough keeps the data encrypted as it travels through the load balancer. So, the configuration of proxy SSL passthrough does not require the installation of an SSL certificate on the load balancer. SSL certificates are installed on the backend server because they handle the SSL connection instead of the load balancer.

In this case, it is the server that performs the process of decrypting all incoming SSL traffic. Using SSL passthrough places a higher processing load on the servers, which is why it is not recommended for larger deployments. It also restricts some capabilities of a load balancer: SSL passthrough does not inspect traffic or intercept SSL sessions on network devices before they reach the server, as it simply passes along encrypted data.

Let’s talk very briefly about the BIG-IP LTM configuration related to this method. Since it is just pass-through, the LTM cannot read the headers, which introduces limitations on persistence. Only non-SSL information in the packet can be used to maintain persistence, such as source IP address and destination IP address. You should not add Client SSL or Server SSL profiles. You cannot use an HTTP profile, therefore you cannot optimize layer 7 traffic, and cookie persistence cannot be used.

Usually, this setup is used if the applications being served cannot consume decrypted traffic or when web application security is critical.





SSL Bridging

This term is also known as SSL Re-Encryption or SSL Full Proxy. In this method, the BIG-IP system receives the encrypted incoming traffic and decrypts it for traffic analysis purposes. But before sending it to the destination servers, it re-encrypts the connection. SSL bridging can be useful when the edge device performs a deep-packet inspection to verify that the contents of the SSL-encrypted transmission are safe, or if there are security concerns about unencrypted traffic traversing the internal network.

Key notes about SSL Bridging:

  • Each site has a separate SSL session.
  • Communication in each segment is secure.
  • Servers are configured to listen on secure ports such as port 443.
  • BIG-IP has the capability to read traffic content.
  • All the persistence options can be applied.


iRule

iRule:-

• iRule is a powerful and flexible feature within BIG-IP Local Traffic Manager (LTM).