Windows: remove one item from filesystem ACL

PowerShell snippets to remove a group/user from a filesystem ACL in Windows (and a comparison of how to do it in Linux/FreeBSD/Solaris).

Problem

There may be a folder (with broken inheritance) containing files/directories where you want to make sure that a specific group/user is not included in the ACL.

Solution

Here are some PowerShell snippets to solve this.

Populate $folders with all directories in the current directory:

$folders = Get-ChildItem . -Directory

To test with one folder:

$folders = "X:\path\to\folder"

PowerShell to list folders with BUILTIN\Users in the ACL (to see which items will be affected):

foreach ($dir in $folders) {
  $value = Get-Acl $dir |
    Select-Object -ExpandProperty Access |
    Where-Object { $_.IdentityReference -eq "BUILTIN\Users" } |
    Select-Object -ExpandProperty IdentityReference
  if ($value) { echo $dir }
}

Print the ACL before and the "to be" ACL after removal (without actually removing anything):

foreach ($item in $folders) {
  $value = Get-Acl $item |
    Select-Object -ExpandProperty Access |
    Where-Object { $_.IdentityReference -eq "BUILTIN\Users" } |
    Select-Object -ExpandProperty IdentityReference
  if ($value) {
    echo $item
    # "before": disable inheritance on the in-memory ACL object
    # (keeping the inherited entries as explicit ones), then print it
    $ACL = (Get-Item $item).GetAccessControl('Access')
    $ACL.SetAccessRuleProtection($true, $true)
    echo $ACL | Select-Object -ExpandProperty Access
    # "to be": the same ACL with the BUILTIN\Users rules removed
    $ACL = (Get-Item $item).GetAccessControl('Access')
    $ACL.Access |
      Where-Object { $_.IdentityReference -eq "BUILTIN\Users" } |
      ForEach-Object { $ACL.RemoveAccessRule($_) }
    echo $ACL | Select-Object -ExpandProperty Access
  }
}

Set the ACL (disable inheritance (convert current settings to an explicit ACL) and remove BUILTIN\Users):

foreach ($item in $folders) {
  $value = Get-Acl $item |
    Select-Object -ExpandProperty Access |
    Where-Object { $_.IdentityReference -eq "BUILTIN\Users" } |
    Select-Object -ExpandProperty IdentityReference
  if ($value) {
    echo $item
    # disable inheritance, keeping the current entries as explicit ones
    $ACL = (Get-Item $item).GetAccessControl('Access')
    $ACL.SetAccessRuleProtection($true, $true)
    Set-Acl -Path $item -AclObject $ACL
    # remove the BUILTIN\Users entries from the now-explicit ACL
    $ACL = (Get-Item $item).GetAccessControl('Access')
    $ACL.Access |
      Where-Object { $_.IdentityReference -eq "BUILTIN\Users" } |
      ForEach-Object { $ACL.RemoveAccessRule($_) }
    Set-Acl -Path $item -AclObject $ACL
  }
}

How would this be solved in Solaris?

setfacl -d <entry> *

How would this be solved in FreeBSD/Linux?

setfacl -x <entry> *
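To mirror the PowerShell loop above (first see which items are affected, then remove the entry), a minimal sketch for Linux/FreeBSD could look like this. The group name is an example, the getfacl/setfacl tools are assumed to be installed (the "acl" package on most Linux distributions), and the setfacl call is shown as a dry run via echo:

```shell
# Example group; shown as a dry run -- drop the echo to actually apply.
GROUP=users
find . -maxdepth 1 -print | while read -r f; do
  # getfacl prints one "group:<name>:<perms>" line per group entry
  if getfacl -p "$f" 2>/dev/null | grep -q "^group:$GROUP:"; then
    echo setfacl -x "g:$GROUP" "$f"
  fi
done
```

This only checks for explicit group entries; default ACLs (`default:group:...` lines) would need a separate `setfacl -d -x` pass.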

VMware 6.5 and 6.7 update failure – reason and fix

Problem

If you want to update an ESXi host (or you have upgraded the VMware vSphere appliance and try to enable HA on some hosts), the update sometimes fails. This happens most likely when you download the update either from VMware via the esxcli command and a https source, or locally from the vCenter – instead of having the update on a datastore and pointing esxcli to the datastore. If you investigate this (in the case of a failure to enable HA you may not think of looking at esxupdate.log, but if you just updated your vCenter appliance and the hosts are still on the old version, just go there and have a look), you may stumble upon a VMware knowledge base article which says this may be a disk space issue on the ESXi host. If you then have a look at the host, you see plenty of free space.

Reason

You cannot see this issue after the update has failed; it is only observable during the update process. The reason is that the update process creates a ramdisk in which it tries to store the update temporarily – at least if the hypervisor is not installed on something which is detected as a real disk. If it is installed on an SD card which the ESXi kernel detects behind a USB connection, it is not a real disk in this sense; the same is true if it is installed on an SSD behind a USB connection. I have not found a way to convince/force the installer (or the hypervisor after the installation) to assume it is an SSD, or at least a real disk where it does not need to care about wear-leveling as on an SD card – feel free to add a comment to point me to a howto. This ramdisk is then too small and the update fails.

Solution

Put the update on a datastore and run the update via the esxcli command. If you use the remote source feature of esxcli, you will find the download URL in esxupdate.log. In case the update comes from the vCenter, you can at least find the VIB name in esxupdate.log; use the "find" command to search for it in the vCenter shell and then scp it to a (shared) datastore on a/the host.
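The datastore-based variant can be sketched as follows. The datastore name, bundle file name and host name are example values, not from the post, and the commands are shown via echo as a dry run – drop the echo to run them for real (the esxcli call on the ESXi host itself):

```shell
# Example values only: adjust datastore, bundle file and host name.
DEPOT=/vmfs/volumes/datastore1/ESXi-update-bundle.zip

# Copy the offline bundle to a (shared) datastore ...
echo scp ESXi-update-bundle.zip root@esxi-host:"$DEPOT"
# ... then point esxcli at the local file instead of a https source,
# so no ramdisk download is needed.
echo esxcli software vib update -d "$DEPOT"
```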

Solaris: remove unusable SAN disks

Problem

Your Solaris system may have "lost" some SAN disks (for whatever reason). If you cannot get them back and arrive at the stage where you want to clean up (because the system cannot do it automatically), you want a solution which does not require much thinking about rarely executed tasks.

Solution

# force a LIP on every FC port so Solaris rescans the devices
for i in $(luxadm -e port | cut -d : -f 1); do
  luxadm -e forcelip $i
  sleep 10
done

# unconfigure every LUN which cfgadm reports as unusable
for i in $(cfgadm -al -o show_FCP_dev | awk '/unusable/ {print $1}' | cut -d , -f 1); do
  cfgadm -c unconfigure -o unusable_SCSI_LUN $i
done

# clean up stale device nodes
devfsadm -Cv

If some unusable devices still show up after this, you may have to reboot to release some locks or similar.
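To illustrate what the second loop matches: cfgadm prints one line per path, and the awk/cut pipeline reduces the matching lines to the Ap_Id without the LUN suffix. The sample lines below are made up to show the format:

```shell
# Made-up sample of 'cfgadm -al -o show_FCP_dev' output lines
printf '%s\n' \
  'c3::50060e8005c42000,0  disk  connected  configured  unusable' \
  'c3::50060e8005c42001,1  disk  connected  configured  unknown' |
awk '/unusable/ {print $1}' | cut -d , -f 1
# prints: c3::50060e8005c42000
```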

VMware: setting the storage device queue depth (HDS fibre channel disks)

Problem

In 2017 we replaced an IBM storage system with a Hitachi Vantara storage system (actually, we replaced the complete SAN infrastructure). We handled it by attaching both storage systems to VMware (v5.5) and migrating the datastores. A recommendation from Hitachi Vantara was to set the queue depth for fibre channel disks to 64.

Solution

Here is a little script which does that. Due to issues as described in a previous post, which caused HA/FT (High Availability / Fault Tolerance) reactions in VMware to trigger, we played it safe and added a little sleep after each change. The script also checks if the queue depth is already set to the desired value and does nothing in this case. It is small enough to just copy & paste directly into a shell on the host.

SLEEPTIME=210 # 3.5 minutes  !!!! only if all RDMs on the host are reserved!!!
TARGET_DEPTH=64
for LDEV in $(esxcli storage core device list | grep "HITACHI Fibre Channel Disk" | awk '{gsub(".*\\(",""); gsub("\\).*",""); print}'); do
  echo $LDEV
  DEPTH="$(esxcli storage core device list -d $LDEV | awk '/outstanding/ {print $8}')"
  if [ "$DEPTH" -ne $TARGET_DEPTH ]; then
    echo "   setting queue depth $TARGET_DEPTH"
    esxcli storage core device set -d $LDEV -O $TARGET_DEPTH
    echo "   sleeping $SLEEPTIME"
    sleep $SLEEPTIME
  else
    echo "    queue depth OK"
  fi
done
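For reference, the awk expression in the device loop strips everything outside the parentheses of the "Display Name" line of the esxcli output, leaving just the device ID. With a made-up device ID it behaves like this:

```shell
# Made-up sample line from 'esxcli storage core device list'
echo '   Display Name: HITACHI Fibre Channel Disk (naa.60060e80058c420000008c4200000a42)' |
awk '{gsub(".*\\(",""); gsub("\\).*",""); print}'
# prints: naa.60060e80058c420000008c4200000a42
```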

VMware: blank performance graph issue

Problem / Story

In 2017 we replaced a storage system with another storage system (actually, we replaced the complete SAN infrastructure). We handled it by attaching both storage systems to VMware (v5.5) and migrating the datastores. In this process we stumbled upon issues which made some hosts unresponsive in vCenter (while the VMs were running without issues). Before the hosts went unresponsive, their performance graphs started to blank out. From the moment the issue appeared until it was resolved, every graph continued to advance, but showed no values in the corresponding timeframe (left = colorful lines, middle = white space, and after the issue was resolved the colorful lines appeared again). Sometimes the issue of the blank performance graph resolved itself, sometimes the hosts became unresponsive and vCenter greyed them out and triggered a HA/FT (High Availability / Fault Tolerance) reaction.

Root cause

On the corresponding hosts we had RDMs (Raw Device Mappings) which are used by Microsoft Cluster Service (there is a knowledge-base article about this). The issues showed up when we did some SAN operations in VMware (like the (automatic) scanning for new disks after having presented new disks to VMware). VMware tried to do something clever with the disks (also during the boot of a host, so if you use RDMs and booting the host takes a long time, you are in the situation I describe here). If only a small number of changes happened at the same time, the issue fixed itself. A large number of changes caused a HA/FT reaction.

Workaround when the issue shows up

When you see that the performance graphs start to show blank space and your VMs are still working, go to the cluster settings and disable vSphere HA (High Availability): cluster -> "Edit Settings" -> "Cluster Features" -> remove the checkmark in front of "Turn On vSphere HA". Wait until the graph shows some values again (for all involved hosts) and then enable vSphere HA again.

Solution

To not have this issue show up at all, you need to change some settings for the devices on which you have the RDMs. Here is a little script (small enough to just copy & paste into a shell on the host) which needs the IDs of the devices used for the RDMs (attention, letters need to be lowercase) in the "RDMS" variable. As we did that on running systems, and each change of the settings caused some action in the background which made the performance graph issue show up, there is a "little" sleep between the changes. The amount of sleep depends upon your situation: the more RDMs are configured, the bigger it needs to be. We had 15 such devices, and a sleep of 20 minutes between each change was enough not to trigger a HA/FT reaction. The amount of time needed in the end is much lower than in the beginning, but as this was more or less a one-off task, this simple version was good enough (it checks if the setting is already active and does nothing in this case).

For our use case it was also beneficial to set the path selection policy to fixed, so this is also included in the script. Your use case may be different.

SLEEPTIME=1200              # 20 minutes per LDEV!
# REPLACE THE FOLLOWING IDs   !!! lower case !!!
RDMS="1234567890abcdef12345c42000002a2 1234567890abcdef12345c42000003a3 \
1234567890abcdef12345c42000003a4 1234567890abcdef12345c42000002a5 \
1234567890abcdef12345c42000002a6 1234567890abcdef12345c42000002a7 \
1234567890abcdef12345c42000003a8 1234567890abcdef12345c42000002a9 \
1234567890abcdef12345c42000002aa 1234567890abcdef12345c42000003ab \
1234567890abcdef12345c42000002ac 1234567890abcdef12345c42000003ad \
1234567890abcdef12345c42000002ae 1234567890abcdef12345c42000002af \
1234567890abcdef12345c42000002b0"

for i in $RDMS; do
  LDEV=naa.$i
  echo $LDEV
  RESERVED="$(esxcli storage core device list -d $LDEV | awk '/Perennially/ {print $4}')"
  if [ "$RESERVED" = "false" ]; then
    echo "   setting perennially reserved to true"
    esxcli storage core device setconfig -d $LDEV --perennially-reserved=true
    echo "   sleeping $SLEEPTIME"
    sleep $SLEEPTIME
    echo "   setting fixed path"
    esxcli storage nmp device set --device $LDEV --psp VMW_PSP_FIXED
  else
    echo "    perennially reserved OK"
  fi
done
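For reference, the "Perennially" line the script parses looks like this in the per-device esxcli output, and $4 is the value the if-check compares against "false" (the sample line is made up, but follows the esxcli output format):

```shell
# Made-up sample line from 'esxcli storage core device list -d <id>'
echo '   Is Perennially Reserved: false' | awk '/Perennially/ {print $4}'
# prints: false
```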