Solaris: remove unusable SAN disks


Your Solaris system may have “lost” some SAN disks (for whatever reason). If you can not get them back and arrive at the stage where you want to clean up (if the system can not do it automatically), you want to have a solution which does not need much thinking about rarely executed tasks.


for i in $(luxadm -e port | cut -d : -f 1); do
  # force a LIP (loop initialization) on each FC port to re-probe the devices
  luxadm -e forcelip $i
  sleep 10
done

for i in $(cfgadm -al -o show_FCP_dev | awk '/unusable/ {print $1}' | cut -d , -f 1); do
  # unconfigure each attachment point which still lists unusable LUNs
  cfgadm -c unconfigure -o unusable_SCSI_LUN $i
done

# clean up the stale device links
devfsadm -Cv
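The awk/cut pipeline in the second loop extracts the attachment point name from the cfgadm output and drops the “,LUN” suffix, so that cfgadm gets the argument it expects. A minimal sketch of that transformation on a made-up sample line (the WWN is fictitious):

```shell
# sample 'cfgadm -al -o show_FCP_dev' line for a LUN gone unusable (fictitious WWN)
sample='c4::50060e8005abcd01,0    disk    connected    configured   unusable'
# keep only the attachment point, stripping the ",<LUN>" suffix
printf '%s\n' "$sample" | awk '/unusable/ {print $1}' | cut -d , -f 1
```

This prints `c4::50060e8005abcd01`, the form the unconfigure command in the loop needs.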

If some devices still remain at this point, you may have to reboot to release some locks or such.


VMware: setting the storage device queue depth (HDS fibre channel disks)


In 2017 we replaced an IBM storage system with a Hitachi Vantara storage system (actually, we replaced the complete SAN infrastructure). We handled it by attaching both storage systems to VMware (v5.5) and migrating the datastores. A recommendation from Hitachi Vantara was to set the queue depth for fibre channel disks to 64.


Here is a little script which does that. Due to issues as described in a previous post, which caused HA/FT (High Availability / Fault Tolerance) reactions in VMware to trigger, we played it safe and added a little sleep after each change. The script also checks if the queue depth is already set to the desired value and does nothing in this case. It is small enough to just copy&paste it directly into a shell on the host.

TARGET_DEPTH=64
SLEEPTIME=210               # 3.5 minutes  !!! only if all RDMs on the host are reserved !!!
for LDEV in $(esxcli storage core device list | grep "HITACHI Fibre Channel Disk" | awk '{gsub(".*\\(",""); gsub("\\).*",""); print}'); do
  echo $LDEV
  DEPTH="$(esxcli storage core device list -d $LDEV | awk '/outstanding/ {print $8}')"
  if [ "$DEPTH" -ne $TARGET_DEPTH ]; then
    echo "   setting queue depth $TARGET_DEPTH"
    esxcli storage core device set -d $LDEV -O $TARGET_DEPTH
    echo "   sleeping $SLEEPTIME"
    sleep $SLEEPTIME
  else
    echo "   queue depth OK"
  fi
done
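The `awk '/outstanding/ {print $8}'` part relies on the layout of the per-device esxcli output: the relevant line there reads “No of outstanding IOs with competing worlds: <n>”, and the value is the 8th whitespace-separated field (the exact wording may differ between ESXi versions, so verify on your host first). A quick sketch on a sample line:

```shell
# sample line from 'esxcli storage core device list -d <id>' output
line='   No of outstanding IOs with competing worlds: 64'
# the current queue depth is field 8 when split on whitespace
printf '%s\n' "$line" | awk '/outstanding/ {print $8}'
```

This prints `64`, which the script compares against the target depth.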

VMware: blank performance graph issue

Problem / Story

In 2017 we replaced a storage system with another storage system (actually, we replaced the complete SAN infrastructure). We handled it by attaching both storage systems to VMware (v5.5) and migrating the datastores. In this process we stumbled upon issues which made some hosts unresponsive in vCenter (while the VMs were running without issues). Before the hosts went unresponsive, their performance graphs started to blank out. From the moment the issue appeared until it was resolved, any graph continued to advance, but had no values listed in the corresponding timeframe (left = colorful lines, middle = white space, and after the issue was resolved the colorful lines appeared again). Sometimes the issue of the blank performance graph resolved itself, sometimes the hosts became unresponsive and vCenter greyed them out and triggered a HA/FT (High Availability / Fault Tolerance) reaction.

Root cause

On the corresponding hosts we had RDMs (Raw Device Mappings) which are used by Microsoft Cluster Service (there is a knowledge-base article about this). The issues showed up when we did some SAN operations in VMware (like the (automatic) scanning of new disks after having presented new disks to VMware). VMware tried to do something clever with the disks (also during the boot of a host, so if you use RDMs and booting the host takes a long time, you are in the situation I describe here). If only a small number of changes happened at the same time, the issue fixed itself. A large number of changes caused a HA/FT reaction.

Workaround when the issue shows up

When you see that the performance graphs start to show blank space and your VMs are still working, go to the cluster settings and disable vSphere HA (High Availability): cluster -> “Edit Settings” -> “Cluster Features” -> remove the checkmark in front of “Turn On vSphere HA”. Wait until the graph shows some values again (for all involved hosts) and then enable vSphere HA again.


To not have this issue show up at all, you need to change some settings for the devices on which you have the RDMs. Here is a little script (small enough to just copy&paste it into a shell on the host) which needs the IDs of the devices which are used for the RDMs (attention, letters need to be lowercase) in the “RDMS” variable. As we did that on the running systems, and each change of the settings caused some action in the background which made the performance graph issue show up, there is a “little” sleep between the changes. The amount of sleep depends upon your situation: the more RDMs are configured, the bigger it needs to be. We had 15 of such devices, and a sleep of 20 minutes between each change was enough to not trigger a HA/FT reaction. The amount of time needed in the end is much lower than in the beginning, but as this was more or less a one-off task, this simple version was good enough (it checks if the setting is already active and does nothing in this case).

For our use case it was also beneficial to set the path selection policy to fixed, so this is also included in this script. Your use case may be different.

SLEEPTIME=1200              # 20 minutes per LDEV!
# REPLACE THE FOLLOWING IDs   !!! lower case !!!
RDMS="1234567890abcdef12345c42000002a2 1234567890abcdef12345c42000003a3 \
1234567890abcdef12345c42000003a4 1234567890abcdef12345c42000002a5 \
1234567890abcdef12345c42000002a6 1234567890abcdef12345c42000002a7 \
1234567890abcdef12345c42000003a8 1234567890abcdef12345c42000002a9 \
1234567890abcdef12345c42000002aa 1234567890abcdef12345c42000003ab \
1234567890abcdef12345c42000002ac 1234567890abcdef12345c42000003ad \
1234567890abcdef12345c42000002ae 1234567890abcdef12345c42000002af"

for LDEV in $RDMS; do
  echo $LDEV
  RESERVED="$(esxcli storage core device list -d $LDEV | awk '/Perennially/ {print $4}')"
  if [ "$RESERVED" = "false" ]; then
    echo "   setting perennially reserved to true"
    esxcli storage core device setconfig -d $LDEV --perennially-reserved=true
    echo "   sleeping $SLEEPTIME"
    sleep $SLEEPTIME
    echo "   setting fixed path"
    esxcli storage nmp device set --device $LDEV --psp VMW_PSP_FIXED
  else
    echo "    perennially reserved OK"
  fi
done
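To estimate the worst-case runtime of such a run up front (all devices still unreserved, so every one of them sleeps), multiply the sleep by the number of RDMs; with 15 devices at 20 minutes each that is 5 hours. A quick back-of-the-envelope calculation in the shell:

```shell
SLEEPTIME=1200   # seconds of sleep per device, as in the script above
NUM_RDMS=15      # number of device IDs in the RDMS variable
# total worst-case sleep in hours (the esxcli calls themselves are negligible)
echo $(( SLEEPTIME * NUM_RDMS / 3600 ))
```

This prints `5`, which is worth knowing before you paste the script into a shell whose session may time out.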


Essen Hackathon 2018

Again it was that time of the year where we had the pleasure of doing the Essen Hackathon, in nice weather conditions (sunny, not too hot, no rain). A lot of people were here, about 20. Not only FreeBSD committers showed up, but also contributors (the biggest group was 3 people who work on iocage/libiocage, plus some individuals with interest in various topics like e.g. SCTP / network protocols, and other topics I unfortunately forgot).

The topics of interest this year:

  • workflows / processes
  • Wiki
  • jail- / container management (pkgbase, iocage, docker)
  • ZFS
  • graphics
  • documentation
  • bug squashing
  • CA trust store for the base system

I was first working with Allan on moving forward with a CA trust store for the base system (target: make fetch work out of the box for TLS connections – currently you will get an error that the certificate can not be validated if you do not have the ca_root_nss port (or any other source of trust) installed and a symlink in base to the PEM file). We investigated how base-openssl, ports-openssl and libressl are set up (ports-openssl is the odd one in the list, it looks in LOCALBASE/openssl for its default trust store, while we would have expected it to look in LOCALBASE/etc/ssl). As no ports-based ssl lib is looking into /etc/ssl, we were safe to do whatever we want in base without breaking the behavior of ports which depend upon the ports-based ssl libs. With that, the current design is to import a set of CAs into SVN – one cert file per CA – plus a way to update them (for the security officer and for users), blacklist CAs, and have base-system and local CAs merged into the base config. The expectation is that Allan will be able to present at least a prototype at EuroBSDCon.

I also had a look with the iocage/libiocage developers at some issues I have with iocage. The nice thing is, the current version of libiocage already solves the issue I see (I just have to change my processes a little bit). Some more cleanup is needed on their side until they are ready for a port of libiocage. I am looking forward to this.

Additionally I got some time to look at the list of PRs with patches I wanted to look at. Out of the 17 PRs I took note of, I have closed 4 (one because it was overcome by events). One is in progress (committed to -current, but I want to MFC that). One additional one (from the iocage guys) I forwarded to jamie@ for review. I also noticed that Kristof fixed some bugs too.

On the social side we had discussions during BBQ, pizza/pasta/…, and a restaurant visit. As always, Kristof was telling some funny stories (or at least telling stories in a funny way… 😉 ). This of course triggered some other funny stories from other people. All in all, my bottom line of this year's Essen Hackathon is (as for the other 2 I visited): fun, sun and progress for FreeBSD.

By bringing cake every time I went there, it seems I have created a tradition. So everyone should already plan to register for the next one – if nothing bad happens, I will bring cake again.


Solaris 10/11(.3) boot panic/crash after moving rpool to a new storage system


The boot disks of some Solaris LDOMs were migrated from one storage system to another one by ZFS-mirroring the rpool to the new system and detaching the old LUN.


After a reboot on the new storage system, Solaris 10 and 11(.3) panic at boot.


  • rpool not on slice 0 but on slice 2
  • a bug in Solaris when doing such a mirror and “just” doing a reboot <- this is the real issue; it seems Solaris can not handle a change of the name of the underlying device for an rpool, as just moving the partitioning to slice 0 does not fix the panic


# boot from network (or an alternate pool which was not yet moved), import/export the pools, boot from the pools
boot net -
# go to shell
# if needed: change the partitioning so that slice 0 has the same values as slice 2 (respectively make sure the rpool is in slice 0)
zpool import -R /tmp/yyy rpool
zpool export rpool
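For the “make slice 0 match slice 2” step, one possible approach (a hedged sketch, not necessarily what we typed back then) is to generate an fmthard(1M) partition line for slice 0 from the slice 2 entry in the prtvtoc(1M) output. The awk below does that transformation on a sample prtvtoc data line; the fields are partition, tag, flags, first sector, sector count, and the result gives slice 0 the same extent as the backup slice, with tag 2 (root):

```shell
# sample slice 2 (backup slice) line as printed by 'prtvtoc /dev/rdsk/...s2'
vtoc='       2      5    01          0  143349312  143349311'
# emit an fmthard input line: slice 0, tag 2 (root), flags 00, same start/size as slice 2
printf '%s\n' "$vtoc" | awk '$1 == 2 {print "0 2 00 " $4 " " $5}'
```

This prints `0 2 00 0 143349312`; such a line could then be fed to `fmthard -s - /dev/rdsk/<disk>s2`. Double-check the generated values against your actual disk geometry before writing anything.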

