VMware: blank performance graph issue

Problem / Story

In 2017 we replaced a storage system with another one (actually, we replaced the complete SAN infrastructure). We handled it by attaching both storage systems to VMware (v5.5) and migrating the datastores. In the process we stumbled upon an issue which made some hosts unresponsive in vCenter (while the VMs kept running without issues). Before the hosts became unresponsive, their performance graphs started to blank out: from the moment the issue appeared until it was resolved, every graph continued to advance but showed no values for the corresponding timeframe (left = colorful lines, middle = white space, and after the issue was resolved the colorful lines appeared again). Sometimes the blank-performance-graph issue resolved itself, sometimes the hosts became unresponsive and vCenter greyed them out and triggered an HA/FT (High Availability / Fault Tolerance) reaction.

Root cause

On the corresponding hosts we had RDMs (Raw Device Mappings) which are used by Microsoft Cluster Service (there is a knowledge-base article about this). The issue showed up when we performed SAN operations in VMware (like the (automatic) rescan after new disks had been presented to VMware). VMware tried to do something clever with the disks (also during the boot of a host, so if you use RDMs and booting the host takes a long time, you are in the situation I describe here). If only a small number of changes happened at the same time, the issue fixed itself; a large number of changes caused an HA/FT reaction.
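The fix below relies on marking those LUNs as perennially reserved. To see the current state of a device you can query it with esxcli; the device ID here is a placeholder, use one of your own RDM LUN IDs:

# placeholder device ID, replace with one of your RDM LUNs
esxcli storage core device list -d naa.1234567890abcdef12345c42000002a2 | grep -i "Perennially Reserved"
# prints a line like "Is Perennially Reserved: false" ("true" once the fix below has been applied)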

Workaround when the issue shows up

When you see that the performance graphs start to show blank space and your VMs are still working, go to the cluster settings and disable vSphere HA (High Availability): cluster -> “Edit Settings” -> “Cluster Features” -> remove the checkmark in front of “Turn On vSphere HA”. Wait until the graphs show values again (for all involved hosts) and then enable vSphere HA again.

Solution

To prevent this issue from showing up at all, you need to change some settings for the devices on which you have the RDMs. Here is a little script (small enough to just copy&paste it into a shell on the host) which needs the IDs of the devices used for the RDMs (attention, letters need to be lowercase) in the “RDMS” variable. As we did this on running systems, and each change of the settings caused some action in the background which made the performance graph issue show up, there is a “little” sleep between the changes. The amount of sleep depends upon your situation: the more RDMs are configured, the bigger it needs to be. We had 15 such devices, and a sleep of 20 minutes between each change was enough to not trigger an HA/FT reaction. The amount of time needed at the end is much lower than at the beginning, but as this was more or less a one-off task, this simple version was good enough (it checks if the setting is already active and does nothing in this case).

For our use case it was also beneficial to set the path selection policy to fixed, so this is also included in the script. Your use case may be different.

SLEEPTIME=1200              # 20 minutes per LDEV!
# REPLACE THE FOLLOWING IDs   !!! lower case !!!
RDMS="1234567890abcdef12345c42000002a2 1234567890abcdef12345c42000003a3 \
1234567890abcdef12345c42000003a4 1234567890abcdef12345c42000002a5 \
1234567890abcdef12345c42000002a6 1234567890abcdef12345c42000002a7 \
1234567890abcdef12345c42000003a8 1234567890abcdef12345c42000002a9 \
1234567890abcdef12345c42000002aa 1234567890abcdef12345c42000003ab \
1234567890abcdef12345c42000002ac 1234567890abcdef12345c42000003ad \
1234567890abcdef12345c42000002ae 1234567890abcdef12345c42000002af \
1234567890abcdef12345c42000002b0"

for i in $RDMS; do
  LDEV=naa.$i
  echo $LDEV
  RESERVED="$(esxcli storage core device list -d $LDEV | awk '/Perennially/ {print $4}')"
  if [ "$RESERVED" = "false" ]; then
    echo "   setting prerennially reserved to true"
    esxcli storage core device setconfig -d $LDEV --perennially-reserved=true
    echo "   sleeping $SLEEPTIME"
    sleep $SLEEPTIME
    echo "   setting fixed path"
    esxcli storage nmp device set --device $LDEV --psp VMW_PSP_FIXED                    
  else
    echo "    perennially reserved OK"
  fi
done
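After the script has finished you can verify both settings per device; a quick check, again with a placeholder ID:

LDEV=naa.1234567890abcdef12345c42000002a2   # placeholder, use one of your own IDs
esxcli storage core device list -d $LDEV | grep -i "Perennially Reserved"    # should now report true
esxcli storage nmp device list -d $LDEV | grep -i "Path Selection Policy"    # should show VMW_PSP_FIXED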

Solaris 10/11(.3) boot panic/crash after moving rpool to a new storage system

Situation

The boot disks of some Solaris LDOMs were migrated from one storage system to another by ZFS-mirroring the rpool to the new system and detaching the old LUN.
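For reference, the migration itself was the usual attach/detach dance on the rpool; a minimal sketch with hypothetical device names (the first disk is the old LUN, the second one the new LUN), not the exact commands we ran:

# attach the new LUN as a mirror to the existing rpool device (hypothetical device names)
zpool attach rpool c0t600A0B8000111111d0s0 c0t600A0B8000222222d0s0
# wait until the resilver has finished
zpool status rpool
# SPARC: install the boot block on the new device, then detach the old LUN
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t600A0B8000222222d0s0
zpool detach rpool c0t600A0B8000111111d0s0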

Issue

After a reboot on the new storage system, Solaris 10 and 11(.3) panic at boot.

Cause

  • rpool not on slice 0 but on slice 2
  • a bug in Solaris when doing such a mirror and “just” doing a reboot <- this is the real issue; it seems Solaris cannot handle a change of the name of the underlying device for an rpool, as just moving the partitioning to slice 0 does not fix the panic.

Fix

# boot from network (or an alternate pool which was not yet moved), import/export the pool, boot from the pool
boot net -
# go to shell
# if needed: change the partitioning so that slice 0 has the same values as slice 2 (respectively, make sure the rpool is on slice 0)
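# (a sketch for that check, with a hypothetical device name: compare the slice 0 and
#  slice 2 lines of "prtvtoc /dev/rdsk/c0t600A0B8000222222d0s2" and adjust slice 0
#  via format(1M) -> partition if they differ)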
zpool import -R /tmp/yyy rpool
zpool export rpool
reboot


New users in Solaris 10 branded zones on Solaris 11 not handled automatically

A colleague noticed that on a Solaris 11 system a Solaris 10 branded zone “gains” two new daemons which are running with UID 16 and 17. Those users are not automatically added to /etc/passwd, /etc/shadow (and /etc/group)… at least not when the zones are imported from an existing Solaris 10 zone.

I added the two users (netadm, netcfg) and the group (netadm) to the Solaris 10 branded zones by hand (copy&paste of the lines in /etc/passwd, /etc/shadow and /etc/group, plus running pwconv) for our few Solaris 10 branded zones on Solaris 11.
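A minimal sketch of the “by hand” part, assuming a hypothetical zonepath of /zones/s10zone (adjust to your zone, and double-check the copied lines before running pwconv):

# run in the Solaris 11 global zone: copy the entries for the two users and the group into the branded zone
ZONEROOT=/zones/s10zone/root    # hypothetical zonepath, adjust
grep -E '^(netadm|netcfg):' /etc/passwd >> $ZONEROOT/etc/passwd
grep -E '^(netadm|netcfg):' /etc/shadow >> $ZONEROOT/etc/shadow
grep '^netadm:' /etc/group >> $ZONEROOT/etc/group
# afterwards, inside the zone: pwconv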

Increase of DNS requests after a critical patch update of Solaris 10

Some weeks ago we installed a critical patch update (CPU) on a Solaris 10 system (an internal system, a year of CPUs to install, nothing in them affecting us or considered a security risk; we decided to apply this one regardless to not fall behind too much). Afterwards we noticed that two zones were doing a lot of DNS requests. We had already noticed this before the zones went into production, and we had configured a positive time-to-live for “hosts” in nscd.conf. Additionally, we noticed a lot of DNS requests for IPv6 addresses (AAAA lookups), while absolutely no IPv6 address is configured in the zones (not even for localhost… and those are exclusive-IP zones). Apparently the caching behaviour changed with one of the patches in the CPU; I am not sure if we had the AAAA lookups before.

Today I got some time to debug this. After adding caching of “ipnodes” in addition to “hosts” (and I configured a negative time-to-live for both at the same time), the DNS requests came down to a sane amount.
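For reference, the corresponding nscd.conf entries look roughly like this (the TTL values are placeholders, not a recommendation; pick values that fit your environment):

# /etc/nscd.conf (excerpt): cache "ipnodes" in addition to "hosts"
enable-cache            hosts           yes
positive-time-to-live   hosts           3600
negative-time-to-live   hosts           300
enable-cache            ipnodes         yes
positive-time-to-live   ipnodes         3600
negative-time-to-live   ipnodes         300
# afterwards restart the cache daemon: svcadm restart name-service-cache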

For the AAAA lookups I have not found a solution. From my reading of the documentation I would assume there are no IPv6 DNS lookups if no IPv6 address is configured.

Complete network loss on Solaris 10u10 CPU 2012-10 on virtualized T4-2

The problem I see at work: a T4-2 with 3 guest LDOMs, virtualized disks and networks lost complete network connectivity “out of the blue” once, and maybe “sporadically” directly after a cold boot. After a lot of discussion with Oracle, I have the impression that we have two problems here.

1st problem:
Total network loss of the machine (no zone, no guest LDOM, and not even the primary LDOM was able to receive or send IP packets). This happened once; no idea how to reproduce it. In the logs we see the message “[ID 920994 kern.warning] WARNING: vnetX: exceeded number of permitted handshake attempts (5) on channel xxx”. According to Oracle this is supposed to be fixed in 148677-01, which will come with Solaris 10u11. They suggested using a vsw interface instead of a vnet interface on the primary domain to at least lower the probability of this problem hitting us. They were not able to tell us how to reproduce the problem (it seems to be a race condition, at least that is the impression I get based upon the description of the Oracle engineer handling the SR). Only a reboot solved the problem. I was told we are the only client which reported this kind of problem; the patch for it is based upon an internal bug report from internal tests.

2nd problem:
After cold boots, sometimes some machines (not all) are not able to connect to an IP on the T4. A reboot helps, as does removing an interface from an aggregate and directly adding it again (see below for the system config). To try to reproduce the problem we did a lot of warm reboots of the primary domain, and the problem never showed up. We did some cold reboots, and the problem showed up once.
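A sketch of the remove/add workaround, assuming Solaris 10 dladm syntax and that aggr1 was created with key 1 (adjust device and key to your setup, see the system setup below):

# pull one physical NIC out of the aggregate and add it right back
dladm remove-aggr -d nxge0 1
dladm add-aggr -d nxge0 1
dladm show-aggr 1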

In case someone else sees one of those problems on his machines too, please get in contact with me to see what we have in common, to try to track this down further, and to share info which may help in maybe reproducing the problems.

System setup:

  • T4-2 with 4 HBAs and 8 NICs (4 * igb on-board, 4 * nxge on an additional network card)
  • 3 guest LDOMs and one io+control domain (both in the primary domain)
  • the guest LDOMs use SAN disks over the 4 HBAs
  • the primary domain uses a mirrored zpool on SSDs
  • 5 vswitches in the hypervisor
  • 4 aggregates (aggr1-aggr4 with L2-policy), each one with one igb and one nxge NIC
  • each aggregate is connected to a separate vswitch (the 5th vswitch is for machine-internal communication)
  • each guest LDOM has three vnets, each vnet connected to a vswitch (1 guest LDOM has aggr1+2 only for zones (via vnets), 2 guest LDOMs have aggr3+4 only for zones (via vnets), all LDOMs have aggr2+3 (via vnets) for global-zone communication, and all LDOMs are additionally connected to the machine-internal-only vswitch via the 3rd vnet)
  • the primary domain uses 2 vnets connected to the vswitches which are connected to aggr2 and aggr3 (consistency with the other LDOMs on this machine) and has no zones
  • this means each entity (primary domain, guest LDOMs and each zone) has two vnets, and those two vnets are configured in a link-based IPMP setup (vnet-linkprop=phys-state)
  • each vnet has VLAN tagging configured in the hypervisor (with the zones being in different VLANs than the LDOMs)

The proposed change by Oracle is to replace the 2 vnet interfaces in the primary domain with 2 vsw interfaces (which means doing the VLAN tagging directly in the primary domain instead of in the vnet config). To have IPMP working, this means setting vsw-linkprop=phys-state. We have two systems with the same setup; on one system we already changed this and it is working as before. As we don’t know how to reproduce the 1st problem, we don’t know if the problem is fixed, or what the probability is of being hit by it again.
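A rough sketch of that change on the primary domain; the vswitch names, vnet names, VLAN ID and IPMP group name below are hypothetical, and the data address is left out:

# let the vsw devices report the physical link state so link-based IPMP keeps working
ldm set-vsw linkprop=phys-state primary-vsw2
ldm set-vsw linkprop=phys-state primary-vsw3
# remove the two vnets from the primary domain and plumb the corresponding vsw
# interfaces there instead; VLAN tagging is then done in the primary domain
# (e.g. via the VLAN PPA: vsw instance 2 with VLAN 123 = vsw123002) and the
# interfaces go into the same link-based IPMP group the vnets were in
# (the data address goes on one of them, as before)
ldm remove-vnet vnet2 primary
ldm remove-vnet vnet3 primary
ifconfig vsw123002 plumb group ipmp0 up
ifconfig vsw123003 plumb group ipmp0 up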

Ideas / suggestions / info welcome.