Thursday, 12 March 2020

BIG-IP HA Sync Problem

We have two F5 units running version 12 in an HA pair. Someone mistakenly started creating objects and a virtual server on the standby unit. When he realized he was on the standby unit, he created the same objects on the active unit and then pressed "Sync to Group" with the overwrite checkbox selected. Since then the standby unit has been unsynced and we get this error:

Sync Summary
Status: Awaiting Initial Sync
Summary: One or more devices awaiting initial config sync
Details:

  1. F5-02.com is awaiting the initial config sync
  2. Recommended action: Synchronize F5-01.com to group sync-group
We have tried to recreate the Sync-Failover device group, but the problem still exists.

Approach:
The message "awaiting the initial config sync" typically indicates that the units were never in a synced state.

(Maybe due to re-creating the sync-failover device group.)

Please check both units for the device settings (failover, config sync, mirroring).

Local self IPs need to be used for all of these properties, and port lockdown on those self IPs should be set to "Allow Default". Are both units added to the new sync-failover device group?

Is device trust established in both directions? (Check the device list on both machines.)
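
These settings can also be checked quickly from the command line; a rough tmsh sketch (run on both units, object names and output will differ per environment):

  # Config sync, mirroring and failover unicast addresses configured per device
  # (look for configsync-ip, mirror-ip and the unicast-address block)
  tmsh list /cm device all-properties
  # Devices present in the trust domain and in the sync-failover device group
  tmsh list /cm trust-domain
  tmsh list /cm device-group
  # Port lockdown on the self IPs used for config sync / failover
  tmsh list /net self allow-service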

Make sure both units are on the same time:

  ntpq -p

Watch the log file for error messages during config sync:

  tail -f /var/log/ltm
Answer:
We deleted all virtual servers, pools, and nodes from the standby device, and then the sync worked perfectly.
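
For completeness, once the conflicting objects are gone, the sync can also be pushed from tmsh; a minimal sketch, assuming the device group is named sync-group as in the error above, run on the unit holding the good configuration:

  # Push this device's configuration to the device group
  tmsh run /cm config-sync to-group sync-group
  # Verify the device group is now "In Sync"
  tmsh show /cm sync-status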

Configuration sync

Can you perform a configuration sync?

Have you checked that the BIG-IP devices are listening on the correct failover unicast address/port? Run the following command on both devices:

  netstat -na | grep 1026


The output should look like this:

  [root@bigip02:Active:Changes Pending] log netstat -na | grep 1026

  udp 0 0 192.168.1.32:1026 0.0.0.0:*
  udp 0 0 10.10.10.32:1026 0.0.0.0:*

This setting is configured per device under: Device Management ›› Devices ›› [BIG-IP Hostname] ›› Failover Network.
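
The same setting can also be read from tmsh; a quick sketch, with the hostname as a placeholder:

  # Show the failover unicast address/port configured for a given device
  tmsh list /cm device bigip02.example.com unicast-address

The address and port shown there should match what netstat reports the device listening on.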

Config sync issues

Config Sync issue (both boxes are staying "disconnected")
Need help... I currently don't have access to the boxes and I'm tempted to just call support, but I'm trying to avoid it. (Not saying there is anything wrong with calling support, but I know I'm missing something basic!)


Here are my steps (I'm resetting everything):

1. Device Groups > (device group previously set up): put both boxes back to Available.

2. Delete the existing device group.

3. Reset Device Trust. Choose Generate New Self-Signed Authority.

4. Device Trust > Peer List. Establish peering. (It is able to see the peer, no problem.)
5. Create the device group "test-sync-failover". Put both devices in "Includes" and check Network Failover.

6. Confirm both devices are in the Device List area.

7. Overview>(click self device)>choose "Sync Device to Group">Choose "Overwrite Configuration">Sync


Boxes are showing disconnected. What can I check? Is there a specific log I can look at to find out why they cannot sync? Should I reset the whole darn configuration and start from scratch again?


Self-fixed. The missing step was rebooting the VE (step 4 below):



  1. Device Groups > (device group previously set up): put both boxes back to Available.
  2. Delete the existing device group.
  3. Reset Device Trust. Choose Generate New Self-Signed Authority.
  4. REBOOT THE VE!!!!!!
  5. Device Trust > Peer List. Establish peering. (It is able to see the peer, no problem.)
  6. Create the device group "test-sync-failover". Put both devices in "Includes" and check Network Failover.
  7. Confirm both devices are in the Device List area.
  8. Overview>(click self device)>choose "Sync Device to Group">Choose "Overwrite Configuration">Sync
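
If the boxes keep showing "Disconnected" after a reset like this, it usually means they cannot reach each other over the config sync channel (typically TCP port 4353 between the config sync self IPs). A quick sanity check from the shell of either unit, once things are back up:

  # Is the CMI/config sync channel established to the peer?
  netstat -na | grep 4353
  # Device group and failover health after the sync
  tmsh show /cm sync-status
  tmsh show /cm failover-status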


    Understanding BIG-IP CPU usage


    The Traffic Management Microkernel (TMM) processes all load-balanced traffic on the BIG-IP system. TMM runs as a real-time user process within the BIG-IP operating system (TMOS). CPU and memory resources are explicitly provisioned in the BIG-IP configuration.
    The following factors influence the manner in which TMM uses the CPU:
    • The number of processors installed in the BIG-IP system
    • The BIG-IP version
    • The modules for which the BIG-IP system is licensed
    CPU utilization on single CPU, single core systems
    CPU resources are explicitly provisioned in the BIG-IP configuration. When TMM is idle or processing low volumes of traffic, TMM yields idle cycles to other processes.
    CPU utilization on multi-CPU / multi-core systems
    Prior to BIG-IP 11.5.0, each logical CPU core is assigned a separate TMM instance, and each core processes both data plane (TMM-specific) tasks and control plane (non-TMM-specific) tasks.
    Beginning in BIG-IP 11.5.0, data plane tasks and control plane tasks use separate logical cores on systems with Intel Hyper-Threading Technology (HT Technology) CPUs. Even-numbered logical cores (hyperthreads) are allocated to TMM, while odd-numbered cores are available for other processes.
    Using the tmsh utility to view TMM CPU usage
    1. Log in to the TMOS Shell (tmsh) by typing the following command:
      tmsh
      
    2. To display TMM CPU utilization and other statistical information for TMM instances, type the following tmsh command:
      show /sys tmm-info
      
      For example, the following truncated output shows CPU usage for TMM 0.0:
      Sys::TMM: 0.0
      --------------------------
      CPU Usage Ratio (%)
      Last 5 Seconds 3
      Last 1 Minute 3
      Last 5 Minutes 2
      
      Note
      System CPU utilization is calculated from the following two values:
    • The average over all TMM CPUs (all even-numbered CPUs)
    • The average over all odd-numbered CPUs except the last one (the last CPU is excluded because an analysis-plane process was spiking its numbers)
    The higher of these two values is presented as the overall system CPU usage.
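
    To relate these numbers, the per-core view can be compared with the TMM-only view from the shell; a rough sketch (output varies by platform and version):

      # Per-CPU utilization (control plane + data plane) as reported by TMOS
      tmsh show /sys cpu
      # TMM-only utilization per TMM instance
      tmsh show /sys tmm-info | grep -A4 'CPU Usage Ratio'
      # The tmm processes as seen from Linux
      top -b -n 1 | grep -i tmm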

    Understanding BIG-IP Memory usage
    When administering a BIG-IP system, it is important to understand how the system allocates memory. In general, BIG-IP memory usage falls into the following categories:
    • Traffic Management Microkernel (TMM) memory usage
    • Linux host memory usage
    • Swap usage
    TMM runs as a real-time user process within the Linux host operating system. The BIG-IP system statically assigns memory resources to TMM and potentially to other module-related processes, depending on module provisioning. The remaining memory is available for all other Linux host processes.
    The BIG-IP system creates swap usage space during software installation on disk. Swap space is available to the Linux kernel.

    TMM memory usage
    The BIG-IP data plane includes one or more TMM processes to manage traffic on the BIG-IP system. The BIG-IP system statically assigns memory resources to TMM.
    The following information summarizes TMM memory:
    • The BIG-IP system assigns a dedicated pool of memory to each TMM process.
    • TMM memory is not available for the Linux kernel to reassign to other host processes. The system never considers TMM memory as available.
    • TMM memory cannot be swapped to disk.
    • The TMM memory management subsystem allocates and clears memory pages in the following manner:
    • TMM allocates static memory to hash tables (for example, the connection flow table).
    • TMM dynamically allocates memory pages for temporary objects (for example, persistence records and buffered connection data).
    • Memory sweepers periodically reap unused memory as needed from TMM objects.
    • When possible, TMM caches dynamic allocations to improve performance when new objects require the same allocations.

    Linux memory usage
    The system may allocate remaining memory to other processes on the Linux host and kernel threads.
    The following information summarizes Linux host memory usage:
    • Linux allocates most available memory to buffers and disk caching, which gives the appearance of high memory usage but allows the system to run more efficiently.
    • Linux utilities, such as top and free, may report that only a small amount of memory is free. This is normal behavior; cached memory can be reclaimed quickly if a program needs memory.
    • To see memory used by buffers and disk caching, view the -/+ buffers/cache row where top and free report these memory structures. Add these values to the reported amount of free memory to estimate the total amount of physical memory the processes are not currently using.
    • The Linux kernel sometimes copies memory pages to swap. This is known as swapping memory.
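
    For example, this is what the buffers/cache point looks like from the BIG-IP bash prompt; the numbers below are purely illustrative:

      # Host memory as seen by Linux; most "used" memory is really buffers/cache
      free -m
      #              total       used       free     shared    buffers     cached
      # Mem:          7954       7700        254          0        310       4200
      # -/+ buffers/cache:       3190       4764    <- memory actually available to processes
      # Swap:         1023         90        933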

    Swap memory usage
    The following information summarizes swap memory usage:
    • It is normal for a Linux system, including the BIG-IP system, to use a small amount of swap. The Linux kernel sometimes prefers to swap an idle process's memory to disk so that more physical memory is available for more active processes, buffers, and caches.
    • Physical memory is much faster than swap, and prioritizing buffers and caches allows the kernel to optimize performance of disk-heavy processes such as databases.
    • A higher percentage of swap use is normal when provisioned modules make heavy use of the disk.
    • Excessive swap usage may be a sign that the system is experiencing memory pressure. You should investigate in the following cases:
    • The system uses a very high percentage of swap memory.
    • The percentage of swap memory usage increases over time.
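
    To tell harmless, long-standing swap usage apart from active swapping under memory pressure, the swap-in/swap-out rates are more telling than the static totals; a minimal sketch:

      # Static picture: how much swap is in use right now
      free -m | grep -i swap
      # Dynamic picture: sustained non-zero si/so columns indicate active swapping
      vmstat 5 6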

    Understanding BIG-IP memory statistics
    You can view BIG-IP memory statistics using BIG-IP utilities or Linux command line utilities. It is normal for Linux utilities, such as top and free, to report a small amount of free memory. This expected behavior occurs due to Linux disk caching. F5 recommends that you use the Configuration utility or the TMOS Shell (tmsh) to view memory statistics on the BIG-IP system.
    You can view BIG-IP memory statistics, including TMM memory usage, other (Linux) memory usage, swap usage, and memory allocated to TMM hash tables and cache objects. To do so, use the following utilities:
    • tmsh show /sys memory
    • Configuration utility: Statistics > Module Statistics > Memory

    Memory statistics (BIG-IP 10.x - 11.5.4)
    In BIG-IP 10.x through 11.5.4, the Configuration utility and tmsh report memory allocated to buffers and caches as used memory. As a result, it may appear that the host system is using all available memory. The system reports memory statistics in the following ways:
    • System Memory
    • Host Total: The amount of memory available to Linux or non-TMM processes.
    • Host Used: The amount of memory in use by Linux or non-TMM processes.
    • TMM Total: The amount of memory available to TMM processes.
    • TMM Used: The amount of memory in use by TMM processes for traffic management.
    • Subsystem memory/memory pool name
    • Indicates the name and memory utilization of TMM hash tables and cache objects.


    Source: f5.com

    Unavailable vs Disabled pool members


    A site has assigned the ICMP monitor to all nodes and a custom monitor, based on the HTTP template, to a pool of web servers. The HTTP-based monitor is working in all cases. The ICMP monitor is failing for 2 of the pool's 5 member nodes. All other settings are default. What is the status of the pool members?

    A. All pool members are up since the HTTP based monitor is successful.

    B. All pool members are down since the ICMP based monitor is failing in some cases.

    C. The pool members whose nodes are failing the ICMP based monitor will be marked disabled.

    D. The pool members whose nodes are failing the ICMP based monitor will be marked unavailable.

    The correct answer is apparently D, but why does the monitor mark the pool members as unavailable instead of disabled?

    Answer:

    A BIG-IP administrator can assign a pool member one of the following states:

    • Enabled
    • Disabled
    • Forced offline
    whereas in your case it is the system that assigns a pool member one of the following states according to its health:

    • Unavailable
    • Available

    So, to answer your question: the monitor marks the pool members as unavailable rather than disabled because it is the system, not an administrator, that detects the monitor failure. Disabled and Forced Offline are administrative states set by an administrator, while Available and Unavailable are health states set by the monitors, so a failing monitor can only mark a member unavailable.
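
    One way to see the health state and the administrative state side by side is from tmsh; a small sketch, with the pool and node names as placeholders:

      # Per-member availability (health) and state (enabled/disabled) for a pool
      tmsh show /ltm pool web_pool detail
      # Node-level status, which is what the ICMP monitor drives
      tmsh show /ltm node 10.10.10.21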





    iRule

    iRule is a powerful and flexible feature within the BIG-IP Local Traffic Manager (LTM)...