At work we have a Solaris 8 machine with a UFS which told the application that it could not create new files. The df command showed plenty of free inodes, and there was also enough free space in the FS. The reason the application got the error was that while there were still plenty of free fragments, no free block was available anymore. You cannot create a new file with fragments alone; you need at least one free block for each new file.
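The failure mode can be summed up in a few lines of Python (a toy model I wrote for illustration, not the real UFS allocator; `can_create` and the constants are mine):

```python
# Toy model of the failure mode described above: a new file needs free
# full blocks for its data, and even a tiny file needs at least one
# free block -- the loose fragments counted by nffree cannot be used
# on their own for a new file.
BLOCK = 8192  # 8k block size, as on the FS described here

def can_create(size, nbfree, nffree):
    """True if a file of `size` bytes can be created, given the free
    block (nbfree) and free fragment (nffree) counts."""
    blocks_needed = max(1, -(-size // BLOCK))  # ceil(size/BLOCK), at least 1
    return nbfree >= blocks_needed

# Exactly our situation: fragments galore, zero free blocks.
print(can_create(200, nbfree=0, nffree=50000))  # -> False
```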
To see the number of free blocks of a UFS you can call “fstyp -v <raw-device> | head -18” and look at the value behind “nbfree”.
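If you want to pick those counters out of the output programmatically, a small parser is enough (the sample text below is an abbreviated, made-up stand-in for real “fstyp -v” output; only the field names nbfree/nffree are taken from the real superblock summary):

```python
import re

# Abbreviated, invented sample of the superblock summary "fstyp -v"
# prints for a UFS (the numbers here are made up):
sample = """ufs
magic   11954   format  dynamic
nbfree  42      ndir    1523    nifree  398102  nffree  88120
"""

def superblock_counts(text):
    """Pull the free-block (nbfree) and free-fragment (nffree)
    counters out of 'fstyp -v' style output."""
    counts = {}
    for key in ("nbfree", "nffree"):
        m = re.search(r"\b%s\s+(\d+)" % key, text)
        if m:
            counts[key] = int(m.group(1))
    return counts

print(superblock_counts(sample))  # -> {'nbfree': 42, 'nffree': 88120}
```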
To get this working again we cleaned up the FS a little bit (compressing/deleting log files), but this is only a temporary solution. Unfortunately we cannot move this application to a Solaris 10 system with ZFS, so I played around a little bit to see what else we can do.
First I made a histogram of the file sizes. The backup of the FS I was playing with contained a little more than 4 million files. 28.5% of them were smaller than or equal to 512 bytes, 31.7% were smaller than or equal to 1k (the fragment size), 36% smaller than or equal to 8k (the block size), and 74% smaller than or equal to 16k. The following graph shows in red the critical part: files which need a block and produce fragments, but cannot live on fragments alone.
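Such a histogram is quick to compute; here is a sketch with the boundaries used above (the helper name `cumulative_shares` and the tiny sample data are mine):

```python
from bisect import bisect_right

# Bucket boundaries used above: 512 bytes, the 1k fragment size,
# the 8k block size, and 16k.
BOUNDS = [512, 1024, 8192, 16384]

def cumulative_shares(sizes, bounds=BOUNDS):
    """Percentage of files whose size is <= each boundary."""
    sizes = sorted(sizes)
    return [round(100.0 * bisect_right(sizes, b) / len(sizes), 1)
            for b in bounds]

# Tiny made-up sample; on the real FS `sizes` would come from walking
# the tree, e.g. os.path.getsize() over os.walk().
print(cumulative_shares([100, 600, 2000, 9000, 20000]))
# -> [20.0, 40.0, 60.0, 80.0]
```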
Then I played around with newfs options for this one specific FS with this specific data mix. Changing the number of inodes did not change the outcome much for our problem (as expected). Changing the optimization from “time” to “space” (and restoring all the data from backup into the empty FS) gave us 1000 more free blocks. On an FS which has 10 million free blocks when empty this is not much, but we expect the restore to consume fewer fragments and more full blocks than the live FS of the application (we cannot compare, as the content of the live FS has changed a lot since we had the problem). We assume that e.g. the logs of the application are split over a lot of fragments instead of full blocks, because the application writes to the logs in small chunks. The restore writes all the data in big chunks, so our expectation is that the restored FS uses more full blocks and fewer fragments. Because of this we expect that the live FS with this specific data mix could benefit from changing the optimization.
I also played around with the fragment size. The expectation was that it would only change what is reported in the output of df (reducing the reported available space for the same amount of data). Here is the result:
The difference between 1k (the default) and 2k is not much. With 8k we would lose too much space as unused padding. A fragment size of 4k looks acceptable and gives a better monitoring picture for this particular data mix.
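The space cost behind this trade-off is easy to estimate with a simplified model of the on-disk footprint of one file (my own sketch: full 8k blocks plus a tail rounded up to whole fragments, ignoring indirect blocks and inode overhead):

```python
# Simplified on-disk footprint of one file under UFS: full 8k blocks
# plus a tail rounded up to whole fragments. (A sketch; it ignores
# indirect blocks and inode overhead.)
BLOCK = 8192

def footprint(size, frag):
    full, tail = divmod(size, BLOCK)
    tail_frags = -(-tail // frag)  # ceil division
    return full * BLOCK + tail_frags * frag

# A 600-byte file at the fragment sizes tried above: with 8k
# fragments even this tiny file occupies a whole 8k block.
for frag in (1024, 2048, 4096, 8192):
    print(frag, footprint(600, frag))
```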
Based upon this we will probably create a new FS with a fragment size of 4k, and we will probably switch the optimization to “space” right away. This way we will have better reporting on the fill level of the FS for our data mix (at the cost of not being able to use the full real capacity of the FS), and as such our monitoring should alert us in time to clean up the FS or to increase its size.