As I already wrote, ADSL RAM (rate adaptive mode) is theoretically available at my place. Analyzing the situation revealed first that the ISP side of my line uses outdated hardware. After a technician I know unofficially took care of it (remotely switching me to a different port), I saw an immediate improvement of the signal-to-noise ratio of about 20 dB.
Unfortunately this was not enough to be able to switch to the rate adaptive mode. According to their database the line length only allows them to give me 1.5 MBit. My line is already running at 2 MBit and my ADSL modem tells me it could do 8 MBit, so I disagree a bit with their database.
As the technician agrees with me, the next step would be to temporarily "move" my house in the database by some hundred meters towards the ISP endpoint of the line. Unfortunately the higher management seems to have some business ideas for our region (FTTT, Fiber To The Town, which means we will probably get 16 MBit via ADSL … but maybe even FTTH), so they have been monitoring the database for such changes for a while now.
I have the impression they prevent such changes to the database because they think that if people get 2 MBit (instead of nothing; large parts of a town nearby do not even have the slowest ADSL connection) or 8 MBit (instead of 2 MBit), they will not be interested in getting FTTH (or 16 MBit). Together with their IPTV initiative I do not really understand this. To get their IPTV, you need at least an 8 MBit line. With 8 MBit you can only cover one TV at SD resolution (at least with their IPTV offer); if you want HD resolution, you need to switch to their VDSL offering (which is not available in our town). What people currently do is switch to a cable provider where they can get about 32 MBit (I do not switch; switching is a risky action here, and I would rather stay with a slow connection than have no connection at all for some months). With 32 MBit (and TV) people have less need to switch to fiber (and pay 150 EUR for the work to get fiber into the house) than with 2 MBit or nothing.
The final outcome is that the technician I know does not want to ask someone to play with the database to move my house temporarily (which I can understand). The good news is that I may get more than 8 MBit in the not so distant future (the current planning is to finish the FTTT work by autumn).
At work we have a Solaris 8 machine with a UFS which told the application that it cannot create new files. The df command showed plenty of free inodes, and there was also enough free space in the FS. The reason the application got the error was that while there were still plenty of fragments free, no free block was available anymore. You cannot create a new file with fragments alone; you need at least one free block for each new file.
To see the number of free blocks of a UFS you can call "fstyp -v <raw device> | head -18" and look at the value behind "nbfree".
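If you want to feed that number into a monitoring script, the value can be extracted with awk. The device path and the sample line below are illustrations only (check the actual fstyp output on your system); the awk filter merely looks for the "nbfree" keyword and prints the field after it:

```shell
# On the real system you would run something like:
#   fstyp -v /dev/rdsk/c0t0d0s6 | awk '...'    (device path is an example)
# Here a made-up line shaped like the fstyp summary output stands in for that:
echo 'nbfree  10242   ndir    512     nifree  4000000 nffree  80000' |
awk '{ for (i = 1; i < NF; i++) if ($i == "nbfree") print $(i + 1) }'
```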
To get this working again we cleaned up the FS a little bit (compressing/deleting log files), but this is only a temporary solution. Unfortunately we cannot move this application to a Solaris 10 machine with ZFS, so I played around a little bit to see what we can do.
First I made a histogram of the file sizes. The backup of the FS I was playing with had a little more than 4 million files in this FS. 28.5% of them were smaller than or equal to 512 bytes, 31.7% were smaller than or equal to 1k (the fragment size), 36% smaller than or equal to 8k (the block size) and 74% smaller than or equal to 16k. The following graph shows in red the critical part: files which need a block and produce fragments, but cannot live with fragments alone.
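Such a histogram can be produced with standard tools. A minimal sketch (the bucket boundaries match the sizes discussed above; the sample sizes and the find pipeline in the comment are illustrations, not the exact commands I used):

```shell
# On the real FS the size list would come from something like:
#   find /the/fs -type f -exec ls -ld {} + | awk '{print $5}'
# Here a few made-up sizes (in bytes) stand in for that list.
printf '%s\n' 400 900 5000 12000 20000 |
awk '
  { n++ }
  $1 <= 512   { le512++ }
  $1 <= 1024  { le1k++ }
  $1 <= 8192  { le8k++ }
  $1 <= 16384 { le16k++ }
  END {
    printf "<=512B: %.1f%%  <=1k: %.1f%%  <=8k: %.1f%%  <=16k: %.1f%%\n",
      100 * le512 / n, 100 * le1k / n, 100 * le8k / n, 100 * le16k / n
  }'
```

Note the buckets are cumulative, like the percentages quoted above: a 400-byte file counts in all four.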
Then I played around with newfs options for this one specific FS with this specific data mix. Changing the number of inodes did not change the outcome for our problem much (as expected). Changing the optimization from "time" to "space" (and restoring all the data from backup into the empty FS) gave us 1000 more free blocks. On an FS which has 10 million free blocks when empty this is not much, but we expect that the restore consumes fewer fragments and more full blocks than the live FS of the application (we cannot compare, as the content of the live FS has changed a lot since we had the problem). We assume that e.g. the logs of the application are split over a lot of fragments instead of full blocks, due to small writes to the logs by the application. The restore writes all the data in big chunks, so our expectation is that the FS will use more full blocks and fewer fragments. Because of this we expect that the live FS with this specific data mix could benefit from changing the optimization.
I also played around with the fragment size. The expectation was that it would only change what is reported in the output of df (reducing the reported available space for the same amount of data). Here is the result:
The difference between 1k (the default) and 2k is not much. With 8k we would lose too much space to unused fragment padding. A fragment size of 4k looks acceptable to get a better monitoring picture of this particular data mix.
Based upon this we will probably create a new FS with a fragment size of 4k and we will probably switch the optimization directly to "space". This way we get better reporting on the fill level of the FS for our data mix (but we will not be able to fully use the real space of the FS), and as such our monitoring should alert us in time to clean up the FS or to increase its size.
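For the record, the corresponding newfs invocation would look roughly like this (the device path is a placeholder; -f sets the fragment size, -o the optimization):

```shell
# Example only -- this destroys all data on the target device.
# -f 4096: 4k fragment size; -o space: optimize for space instead of time.
newfs -f 4096 -o space /dev/rdsk/c0t0d0s6
```

The optimization (but not the fragment size) can also be changed on an existing UFS with tunefs -o space, without recreating the FS.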
If someone had a look at the earlier post about DTrace probes for the Linuxulator: I updated the patch at the same place. The difference from the previous one is that some D scripts are fixed now to do what I meant, especially the ones which provide statistics output.
If someone needs a Samba which is able to communicate with an AD 2008 server on a Solaris 10 system… here is how I did it.
- /opt/SUNWspro contains the Studio 12 compiler
- tarballs of openldap-stable-20100719 (2.4.23), heimdal-1.4, samba-3.5.8
- export PATH=/opt/SUNWspro/bin:/usr/xpg6/bin:/usr/xpg4/bin:/usr/perl5/bin:/usr/bin:/usr/openwin/bin:/bin:/usr/sfw/bin:/usr/sfw/sbin:/sbin:/usr/sbin:/usr/sadm/admin/bin:/usr/sadm/bin:/usr/java/jre/bin:/usr/ccs/bin:/usr/ucb CC=cc CXX=CC
export CPPFLAGS="-I/usr/sfw/include" LDFLAGS="-L/usr/sfw/lib -R/usr/sfw/lib"
./configure --prefix=$DEST/openldap-2.4.23 --disable-slapd
./configure --prefix=$DEST/heimdal-1.4 --with-openldap=$DEST/openldap-2.4.23 --with-hdbdir=$DEST/heimdal-instance/var/heimdal --sysconfdir=$DEST/heimdal-instance/etc
Unfortunately heimdal-1.4 does not contain all the files you need. As of this writing (if you try this much later, you may get more recent versions which may or may not work with heimdal 1.4) I was able to download them from
mkdir -p $DEST/heimdal-instance/var/heimdal $DEST/heimdal-instance/etc
export CPPFLAGS="-I$DEST/openldap-2.4.23/include" LDFLAGS="-L$DEST/openldap-2.4.23/lib -R$DEST/openldap-2.4.23/lib -R$DEST/samba-3.5.8/lib -R$DEST/heimdal-1.4/lib"
./configure --prefix=$DEST/samba-3.5.8 --sysconfdir=$DEST/samba-instance/etc --localstatedir=$DEST/samba-instance/var --with-privatedir=$DEST/samba-instance/private --with-lockdir=$DEST/samba-instance/var/locks --with-statedir=$DEST/samba-instance/var/locks --with-cachedir=$DEST/samba-instance/var/locks --with-piddir=$DEST/samba-instance/var/locks --with-ncalrpcdir=$DEST/samba-instance/var/ncalrpc --with-configdir=$DEST/samba-instance/config --with-ldap --with-krb5=$DEST/heimdal-1.4 --with-ads --with-quotas --with-aio-support --with-shared-modules=vfs_zfsacl
After that you have a Samba in $DEST/samba-3.5.8, the config for it should be put into $DEST/samba-instance/config, and if you need a custom krb5.conf you can put it into $DEST/heimdal-instance/etc/.
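To give an idea of what goes into $DEST/samba-instance/config, here is a minimal smb.conf sketch for AD membership. The realm and workgroup are placeholders, and a real setup will likely need more than this:

```ini
; Minimal example smb.conf for AD membership -- placeholder values.
[global]
    workgroup = EXAMPLE
    realm = EXAMPLE.COM
    security = ads
    encrypt passwords = yes
```

With the Kerberos side configured, you would then join the domain with "net ads join -U Administrator" and verify the result with "net ads testjoin".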