Similarities between VIPRION and Standard F5 appliances:
Most configuration tasks, especially day-to-day add/move/change work, are exactly the same on a VIPRION as on a standard F5 appliance. This includes VIP creation, pool modification, iRules, persistence, load balancing methods and health checks. VIPRION software ISO files are the same images used on standard appliances.
Differences:
· Layer 1 interface information, such as port speed, should be gathered at the hypervisor/host level rather than from the vCMP guest. For example, as of January 13, 2016, the DC4 vCMP guest for HCM shows all interfaces as 10 Gb while the hypervisor shows them as 1 Gb; the hypervisor has the correct information because it handles the physical interfaces, and the guest only sees backplane connections. At the guest level, the lowest-level network objects the guest is concerned with are VLANs.
· For hardware platform information, check the hypervisor/host rather than the guest. For example, running the command tmsh show sys hardware at the guest level returns a blade type of Z101, which means 'vCMP guest', rather than the actual blade model number.
· Also noteworthy: when forced offline, the self-IPs may respond to ping but the guest is unable to ping the next-hop router. We saw this particular behavior in version 11.5.2 on a vCMP guest, and it conflicts with AskF5 SOL15122. Knowing this issue exists can save troubleshooting time: before forcing a unit offline, verify that you have management port connectivity.
VIPRION Chassis
As of January 2016 we use a single model of VIPRION chassis in SAP SaaS datacenters, the C2400. The C2400 chassis has slots for 4 blades. The intelligence of the VIPRION is in the blades rather than the chassis; the chassis stores no configuration information. Our interaction with the chassis is fairly limited: once it is deployed, about all we might do with it is replace a power supply or a fan.
F5 does not support mixing different models of blades in the same chassis. For example, we can't have B2100 blades and B2250 blades in the same chassis.
Blade Overview
tmsh show sys hardware | grep Type is the command to list the blade models. This command should be used at the hypervisor level. A109 = B2100. A113 = B2150. A112 = B2250.
Command syntax for configuration and troubleshooting doesn't vary between blade models.
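The platform-code-to-model mapping above lends itself to a quick lookup. This is a minimal sketch: the sample Type line stands in for real tmsh show sys hardware output, whose exact formatting may differ from this illustration.

```shell
# Map the platform code from `tmsh show sys hardware | grep Type` to a
# blade model. The sample line below is illustrative, not real tmsh output.
type_line='  Type  A112'

code=$(printf '%s\n' "$type_line" | awk '{print $2}')
case "$code" in
  A109) model="B2100" ;;
  A113) model="B2150" ;;
  A112) model="B2250" ;;
  Z101) model="vCMP guest (run this at the hypervisor instead)" ;;
  *)    model="unknown ($code)" ;;
esac
echo "Blade model: $model"
```

The Z101 case reflects the point above: at the guest level you only ever see the vCMP guest pseudo-platform, never the real blade model.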
B2100 blades are end-of-sale at F5 and no longer available for order. With the recent VIPRION blade installation work in AMS, ROT and Ashburn, we now only have B2100 blades on the HCM load balancer in Chandler. B2100 blades have 16 GB of RAM, one 300 GB hard drive and one quad-core processor, providing 8 vCPUs (a vCPU is a hyperthreaded processor core).
B2150 blades have 32 GB of RAM, one 400 GB solid-state drive, and one quad-core processor (8 vCPUs).
B2250 blades have 64 GB of RAM, one 800 GB solid-state drive, and one 10-core processor (10 vCPUs). B2250 blades have four 40 Gb connections; we use a 40 Gb breakout cable that provides four 10 Gb connections per port. Physical ports show up in the hypervisor unbundled: for example, physical port 2.1 shows up in the hypervisor as 1.1, 1.2, 1.3 and 1.4, and physical port 2.2 shows up as 1.5, 1.6, 1.7 and 1.8. In case you are troubleshooting a connectivity issue with remote hands, it's worth noting that when standing in front of the chassis, a numbered link light for each of the 10 Gb connections is visible above the physical 40 Gb interface.
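The breakout numbering follows a simple pattern, sketched below as a hypothetical helper (not an F5 tool) that maps a B2250 physical 40 Gb port to the four unbundled 10 Gb interfaces the hypervisor shows:

```shell
# Hypothetical helper: map a physical 40Gb port "2.N" to the four 10Gb
# breakout interfaces the hypervisor shows, per the pattern above
# (2.1 -> 1.1-1.4, 2.2 -> 1.5-1.8, and so on).
breakout_ports() {
  phys=${1#2.}                        # "2.2" -> "2"
  start=$(( (phys - 1) * 4 + 1 ))     # first unbundled interface number
  echo "1.$start 1.$((start+1)) 1.$((start+2)) 1.$((start+3))"
}

breakout_ports 2.2   # prints: 1.5 1.6 1.7 1.8
```

Handy when a remote-hands tech is looking at physical port 2.x and you are looking at 1.x interfaces in the hypervisor.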
Working with VIPRION Blades
One blade out of all the blades in the chassis is considered the 'primary blade'. When you SSH to the management IP or connect to the web interface, you are automatically connected to the primary blade. Most commands are valid when you aren't on the primary blade; however, if you are consoled into a non-primary blade and try a command like tmsh load sys config, you may get an error indicating you need to execute the command from the primary blade. To determine which blade is the primary, issue the command tmsh show sys cluster and look toward the top of the output for the "Primary Slot Id".
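Pulling the primary slot out of that output can be scripted. A minimal sketch follows; the sample text is illustrative, not captured from a real chassis, so the field spacing in real tmsh show sys cluster output may differ:

```shell
# Extract the "Primary Slot Id" from (simulated) `tmsh show sys cluster`
# output. The sample below is illustrative only.
sample_output='Sys::Cluster: default
---------------------------------
Address          10.1.1.1/24
Availability     available
State            enabled
Primary Slot Id  2'

primary_slot=$(printf '%s\n' "$sample_output" | awk '/Primary Slot Id/ {print $NF}')
echo "Primary blade is in slot ${primary_slot}"
```

On a real system you would pipe the live command output into the same awk filter instead of using a sample variable.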
To view blade status, use the command tmsh show sys cluster; all blades should be enabled and available if they are functioning properly.
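That "enabled and available" check can be automated. This sketch assumes a per-slot table with availability and state columns; the sample output is illustrative, not real tmsh formatting:

```shell
# Flag any blade that is not both "available" and "enabled". The sample
# table below stands in for real `tmsh show sys cluster` output.
sample_output='Slot  Availability  State     Licensed
1     available     enabled   true
2     available     enabled   true
3     offline       disabled  false'

bad=$(printf '%s\n' "$sample_output" | \
      awk 'NR>1 && !($2=="available" && $3=="enabled") {print $1}')

if [ -n "$bad" ]; then
  echo "Check blades in slot(s): $bad"
else
  echo "All blades enabled/available"
fi
```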
To determine blade types, use the command tmsh show sys hardware at the hypervisor level.
All blades should have a console connection to a console server/Digi. Console access provides direct access to the hypervisor.
For console access to a vCMP guest, you must first log into the hypervisor and then access the guest via the vconsole command. The syntax is vconsole <guestname> <primaryblade#>, for example: vconsole dc4lbc01 2 (where 2 is the current primary blade).
Using Commands Across Blades
tmsh commands can be used without alteration. The F5 WebUI is also used as normal.
To run a bash command across all blades, you'll need to preface the command with clsh. For example, to properly reboot a VIPRION you should use the command clsh reboot. A reboot at the guest level only reboots that particular guest. A reboot at the hypervisor level reboots all guests and the hypervisor.
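Since clsh only exists on the VIPRION itself, the fan-out behavior can be illustrated off-box with a stub. This is a conceptual sketch only; the stub simply echoes what clsh would run on each slot of a fully populated C2400:

```shell
# Conceptual stub only: `clsh` is not available off-box. This illustrates
# that clsh fans a single bash command out to every blade in the cluster.
clsh_dryrun() {
  for slot in 1 2 3 4; do        # C2400: four blade slots
    echo "slot $slot: $*"
  done
}

clsh_dryrun tmsh show sys version
```

On the real system, `clsh reboot` is the equivalent of running reboot on every blade, which is why it is the correct way to restart the whole chassis.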
Redundancy
Hypervisors should never be configured as an HA pair; all of our hypervisors are currently standalone (an issue with the HCP Ashburn hypervisors being configured as an HA pair was resolved in Q4 of 2015).
vCMP guests are configured as HA pairs just like normal appliances.
Given the two points above, it's worth noting that a VIPRION with multiple guests can host both "active" and "standby" guests at the same time. Before performing disruptive tasks at the hypervisor level (reboots, software upgrades, etc.), it's worth logging into the guests to determine their active/standby status as it relates to the hypervisor you are about to work on.
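That pre-maintenance check can be sketched as a small helper. This assumes each guest's tmsh show sys failover output begins with "Failover active" or "Failover standby"; the guest names and sample outputs here are illustrative:

```shell
# Classify a guest as active or standby before hypervisor maintenance,
# based on (simulated) `tmsh show sys failover` output.
check_guest() {   # usage: check_guest <guest-name> <failover-output>
  case "$2" in
    *active*)  echo "$1 is ACTIVE - fail over before hypervisor work" ;;
    *standby*) echo "$1 is standby - safe to disrupt" ;;
    *)         echo "$1 state unknown - investigate" ;;
  esac
}

check_guest dc4lbc01 "Failover active for 12d 03:10:44"
check_guest dc4lbc02 "Failover standby"
```

Any guest reporting active should be failed over to its peer (hosted on a different chassis) before you reboot or upgrade the hypervisor underneath it.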