Alexander Leidinger

Just another weblog


Solaris UFS full while df shows plenty of free space/inodes

At work we have a Solaris 8 system with a UFS filesystem which told the application that it could not create new files. The df command showed plenty of free inodes, and there was also enough free space in the FS. The reason the application got the error was that while there were still plenty of fragments free, no free block was available anymore. You can not create a new file with fragments alone; each new file needs at least one free block.

To see the number of free blocks of a UFS you can run “fstyp -v | head -18” and look at the value behind “nbfree”.
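In a monitoring script the nbfree value can be pulled out of that output programmatically. A small sketch in Python; the sample superblock dump below is shortened and hypothetical, only the nbfree field matters:

```python
import re

def parse_nbfree(fstyp_output):
    """Extract the free-block count (nbfree) from `fstyp -v` output."""
    m = re.search(r'\bnbfree\s+(\d+)', fstyp_output)
    if m is None:
        raise ValueError("no nbfree field in fstyp output")
    return int(m.group(1))

# Shortened, hypothetical sample of a UFS superblock dump:
sample = """ufs
magic   11954   format  dynamic
nbfree  10250   ndir    4       nifree  1048570 nffree  4021
"""
print(parse_nbfree(sample))  # → 10250
```

In practice the output of “fstyp -v” on the raw device would be fed into this function instead of the canned sample.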

To get this working again we cleaned up the FS a little bit (compressing/deleting log files), but this is only a temporary solution. Unfortunately we can not move this application to a Solaris 10 system with ZFS, so I played around a little bit to see what we can do.

First I made a histogram of the file sizes. The backup of the FS I was playing with had a little bit more than 4 million files in it. 28.5% of them were smaller than or equal to 512 bytes, 31.7% were smaller than or equal to 1k (the fragment size), 36% were smaller than or equal to 8k (the block size) and 74% were smaller than or equal to 16k. The following graph shows in red the critical part: files which need a block and produce fragments, but can not live with fragments alone.
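Such a size histogram can be gathered with a short script. A sketch; the bucket limits match the fragment/block sizes mentioned above, and the directory to scan is up to the reader:

```python
import os

def size_histogram(root, limits=(512, 1024, 8192, 16384)):
    """Count files whose size falls into each bucket:
    (0, 512], (512, 1k], (1k, 8k], (8k, 16k], and everything above."""
    counts = [0] * (len(limits) + 1)
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # vanished or unreadable file, skip it
            total += 1
            for i, limit in enumerate(limits):
                if size <= limit:
                    counts[i] += 1
                    break
            else:
                counts[-1] += 1
    return counts, total
```

Cumulative percentages like the ones quoted above are then just running sums of the counts divided by the total.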


Then I played around with newfs options for this one specific FS with this specific data mix. Changing the number of inodes did not change the outcome for our problem much (as expected). Changing the optimization from “time” to “space” (and restoring all the data from backup into the empty FS) gave us 1000 more free blocks. On a FS which had 10 million free blocks when empty this is not much, but we expect that the restore consumes fewer fragments and more full blocks than the live FS of the application (we can not compare, as the content of the live FS has changed a lot since we had the problem). We assume that e.g. the logs of the application are split over a lot of fragments instead of full blocks, due to small writes to the logs by the application. The restore should write all the data in big chunks, so our expectation is that the FS will use more full blocks and fewer fragments. Because of this we expect that the live FS with this specific data mix could benefit from changing the optimization.

I also played around with the fragment size. The expectation was that this would only change what is reported in the output of df (reducing the reported available space for the same amount of data). Here is the result:


The difference between 1k (the default) and 2k is not much. With 8k too much unused space would be lost. A fragment size of 4k looks acceptable to get a better monitoring status for this particular data mix.
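The space lost to this rounding for a given data mix can be estimated by rounding every file size up to a whole number of fragments. A small sketch; the file sizes are illustrative, and metadata and indirect blocks are ignored:

```python
def rounded_usage(file_sizes, frag_size):
    """On-disk bytes used if each file occupies a whole number of
    fragments (ceiling division, ignoring metadata overhead)."""
    return sum(-(-size // frag_size) * frag_size for size in file_sizes)

sizes = [300, 900, 5000, 12000]  # illustrative file sizes in bytes
for frag in (1024, 2048, 4096, 8192):
    used = rounded_usage(sizes, frag)
    print(frag, used, used - sum(sizes))
```

Feeding the real file-size histogram of the FS into this instead of the toy list would give a rough estimate of how much reported space each fragment size sacrifices.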

Based upon this we will probably create a new FS with a fragment size of 4k and we will probably switch the optimization directly to “space”. This way we will have better reporting on the fill level of the FS for our data mix (but we will not be able to fully use the real space of the FS), and as such our monitoring should alert us in time to clean up the FS or to increase its size.



DTrace probes for the Linuxulator updated

If someone had a look at the earlier post about DTrace probes for the Linuxulator: I updated the patch at the same place. The difference from the previous one is that some D scripts are fixed now to do what I meant, especially the ones which provide statistics output.
