Solaris: script to check whether various settings of a system comply with some pre-defined settings

Problem

If you set up a system, you want to make sure that it complies with a pre-defined config. You can do that with a configuration management system, but there are cases where it is useful to do that outside of such a context.

Solution

I started to write the shell script below in 2008. Over time (until 2016) it grew into something which is able to output a report of over 1000 items. You can configure it via ${HOME}/.check_host.cfg and /etc/check_host.cfg (it looks for them in this order; the first config found wins and the other one is not read). You can use the option “-h” to see the usage text. The option “-n” suppresses messages which help to fix issues, and “-a” prints simple HTML instead of text.
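As an illustration of a typical run (the script file name here is only a placeholder, use whatever name you save the script below under):

# "check_host.sh" is a placeholder name for the script below
# run all checks, suppress the fix-it hints (-n) and produce an HTML report (-a)
./check_host.sh -n -a > /var/tmp/check_host_report.html
# ${HOME}/.check_host.cfg is tried first; if it exists, /etc/check_host.cfg is ignored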

Solaris: script to create commands to set up LDOMs based upon output from “ldm ls”

Problem

You have an LDOM which you want to clone to somewhere else, and all you have available to perform that is the ldm command on the target system.

Solution

Download the AWK script below. Use the output of “ldm ls -l -p <ldom>” as the input of this AWK script. The output will be a list of commands to re-create the config for VDS, VDISK, VSW and NETWORK.

I wrote this in 2013, so changes to the output of “ldm ls” since then are not accounted for.
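As a sketch of the workflow (the AWK script file name and the LDOM name are placeholders here):

# "recreate_ldom.awk" and "myldom" are placeholder names
# dump the source LDOM config in machine-parseable form and turn it into ldm commands
ldm ls -l -p myldom | awk -f recreate_ldom.awk > recreate_myldom.sh
# review the generated commands, then run them on the target system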

Windows: remove one item from filesystem ACL

PowerShell snippets to remove a group/user from a filesystem ACL in Windows (and a comparison of how to do it in Linux/FreeBSD/Solaris).

Problem

There may be a folder (with broken inheritance) containing files / directories where you want to make sure that a specific group / item is not included.

Solution

Here are some PowerShell snippets to solve this.

Populate $folders with all directories in the current directory:

$folders = Get-ChildItem . -Directory

To test with one folder:

$folders = "X:\path\to\folder"

PowerShell to list folders with BUILTIN\Users in the ACL (to see which items will be affected):

foreach ($dir in $folders) { $value = get-acl $dir | Select-object -ExpandProperty Access | where { $_.IdentityReference -eq "BUILTIN\Users"} | Select -Expand IdentityReference; if ($value) {echo $dir} }

Print the ACL before and the “to be” ACL after removal (but without removing anything):

foreach ($item in $folders) { $value = get-acl $item | Select-object -ExpandProperty Access | where { $_.IdentityReference -eq "BUILTIN\Users"} | Select -Expand IdentityReference; if ($value) {echo $item; $ACL = (get-item $item).getAccessControl('Access'); $ACL.SetAccessRuleProtection($true, $true); echo $ACL |Select-object -ExpandProperty Access; $ACL = (get-item $item).getAccessControl('Access'); $ACL.Access | where {$_.IdentityReference -eq "BUILTIN\Users"} |%{$acl.RemoveAccessRule($_)}; echo $ACL |Select-object -ExpandProperty Access } }

Set the ACL (disable inheritance (convert current settings to explicit ACL) and remove BUILTIN\Users):

foreach ($item in $folders) { $value = get-acl $item | Select-object -ExpandProperty Access | where { $_.IdentityReference -eq "BUILTIN\Users"} | Select -Expand IdentityReference; if ($value) {echo $item; $ACL = (get-item $item).getAccessControl('Access'); $ACL.SetAccessRuleProtection($true, $true); Set-Acl -Path $item -AclObject $ACL; $ACL = (get-item $item).getAccessControl('Access'); $ACL.Access | where {$_.IdentityReference -eq "BUILTIN\Users"} |%{$acl.RemoveAccessRule($_)}; Set-Acl -Path $item -AclObject $ACL } }

How would this be solved in Solaris?

setfacl -d <entry> *

How would this be solved in FreeBSD/Linux?

setfacl -x <entry> *
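For example, to drop all ACL entries of a group named “users” (just an example name) from every file in the current directory, the two variants would roughly look like this:

# Solaris (POSIX-draft ACLs); "group:users" is just an example entry
setfacl -d group:users *
# Linux/FreeBSD
setfacl -x group:users *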

Solaris: remove unusable SAN disks

Problem

Your Solaris system may have “lost” some SAN disks (for whatever reason). If you cannot get them back and arrive at the stage where you want to clean up (if the system cannot do it automatically), you want to have a solution which does not need much thinking about rarely executed tasks.

Solution

# force a loop initialization (LIP) on every FC port so the HBAs rescan their targets
for i in $(luxadm -e port | cut -d : -f 1); do
  luxadm -e forcelip $i
  sleep 10
done

# unconfigure every LUN which cfgadm reports as unusable
for i in $(cfgadm -al -o show_FCP_dev | awk '/unusable/ {print $1}' | cut -d , -f 1); do
  cfgadm -c unconfigure -o unusable_SCSI_LUN $i
done

# clean up stale device links
devfsadm -Cv

If some of these devices are still around at this point, you may have to reboot to release some locks or the like.
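To check whether anything unusable is left before resorting to a reboot, the cfgadm filter from above can be reused on its own:

# list any FCP devices still flagged as unusable
cfgadm -al -o show_FCP_dev | grep unusable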

VMware: setting the storage device queue depth (HDS fibre channel disks)

Problem

In 2017 we replaced an IBM storage system with a Hitachi Vantara storage system (actually, we replaced the complete SAN infrastructure). We handled it by attaching both storage systems to VMware (v5.5) and migrating the datastores. A recommendation from Hitachi Vantara was to set the queue depth for fibre channel disks to 64.

Solution

Here is a little script which does that. Due to issues described in a previous post, which caused HA/FT (High Availability / Fault Tolerance) reactions in VMware to trigger, we played it safe and added a little sleep after each change. The script also checks if the queue depth is already set to the desired value and does nothing in that case. It is small enough to just copy & paste it directly into a shell on the host.

SLEEPTIME=210 # 3.5 minutes  !!!! only if all RDMs on the host are reserved!!!
TARGET_DEPTH=64
# collect the device IDs (the part in parentheses of the display name) of all Hitachi FC disks
for LDEV in $(esxcli storage core device list | grep "HITACHI Fibre Channel Disk" | awk '{gsub(".*\\(",""); gsub("\\).*",""); print}'); do
  echo $LDEV
  # current value of "No of outstanding IOs with competing worlds"
  DEPTH="$(esxcli storage core device list -d $LDEV | awk '/outstanding/ {print $8}')"
  if [ "$DEPTH" -ne $TARGET_DEPTH ]; then
    echo "   setting queue depth $TARGET_DEPTH"
    esxcli storage core device set -d $LDEV -O $TARGET_DEPTH
    echo "   sleeping $SLEEPTIME"
    sleep $SLEEPTIME
  else
    echo "    queue depth OK"
  fi
done
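To spot-check a single device afterwards, the same esxcli query the script uses can be run by hand (replace <device> with one of the IDs the script prints):

# show the current "No of outstanding IOs with competing worlds" value
esxcli storage core device list -d <device> | grep outstanding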