ADSL RAM … finally abandoned (but with good news)

As I already wrote, theoretically ADSL RAM is available at my place. The analysis of the situation first revealed that the ISP side of my line uses outdated hardware. After the technician I know unofficially took care of it (remotely switching me to a different port), I saw an immediate improvement of the signal-to-noise ratio; it is about 20 dB better.

Unfortunately this was not enough to be able to switch to the rate adaptive mode. According to their database the line length only allows them to give me 1.5 MBit. My line is already running at 2 MBit and my ADSL modem tells me it could do 8 MBit, so I disagree a bit with their database.

As the technician agrees with me, the next step would be to temporarily move my house by some hundred meters towards the ISP endpoint of the line (in their database). Unfortunately the higher management seems to have some business plans for our region (FTTT, Fiber To The Town, which probably means we will get 16 MBit via ADSL, but maybe even FTTH), so they have been monitoring the database for such changes for a while now.

I have the impression they prevent such changes to the database because they think that if people get 2 MBit (instead of nothing, large parts of a nearby town do not even have the slowest ADSL connection) or 8 MBit (instead of 2 MBit), they are not interested in getting FTTH (or 16 MBit). Together with their IPTV initiative I do not really understand it. To get their IPTV, you need at least an 8 MBit line. With 8 MBit you can only cover one TV at SD resolution (at least with their IPTV offer); if you want HD resolution, you need to switch to their VDSL offering (which is not available in our town). What people currently do is switch to a cable provider where they can get about 32 MBit (I do not switch; switching is a risky action here, and I rather stay with a slow connection than have no connection at all for some months). With 32 MBit (and TV) people have less need to switch to fiber (and pay 150 EUR for the work to get fiber into the house) than with 2 MBit or nothing.

The final outcome is that the technician I know does not want to ask someone to play with the database to temporarily move my house (which I can understand). The good part of this news is that I may get more than 8 MBit in the not so distant future (the current planning is to finish the FTTT work by autumn).


Solaris UFS full while df shows plenty of free space/inodes

At work we have a Solaris 8 with a UFS which told the application that it cannot create new files. The df command showed plenty of free inodes, and there was also enough free space in the FS. The reason the application got the error was that while there were still plenty of fragments free, no free block was available anymore. You cannot create a new file with fragments alone, you need at least one free block for each new file.

To see the number of free blocks of a UFS you can call “fstyp -v <raw device of the FS> | head -18” and look at the value behind “nbfree”.
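
For monitoring purposes this can be wrapped into a small check script; a minimal sketch, where the raw device and the warning threshold are placeholders for your own values:

#!/bin/sh
# Sketch of a free-block check for one UFS; device and threshold are placeholders.
DEV=/dev/rdsk/c0t0d0s5
THRESHOLD=1000

# extract the value behind "nbfree" from the fstyp output
NBFREE=`fstyp -v $DEV | head -18 | awk '{ for (i = 1; i < NF; i++) if ($i == "nbfree") print $(i + 1) }'`

if [ "$NBFREE" -lt "$THRESHOLD" ]; then
        echo "WARNING: only $NBFREE free blocks left on $DEV"
fi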

To get this working again we cleaned up the FS a little bit (compressing/deleting log files), but this is only a temporary solution. Unluckily we cannot move this application to a Solaris 10 with ZFS, so I played around a little bit to see what we can do.

First I made a histogram of the file sizes. The backup of the FS I was playing with had a little bit more than 4 million files. 28.5% of them were smaller than or equal to 512 bytes, 31.7% were smaller than or equal to 1k (the fragment size), 36% smaller than or equal to 8k (the block size) and 74% smaller than or equal to 16k. The following graph shows in red the critical part: files which need a block and produce fragments, but cannot live with only fragments.
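
Such a histogram can be produced, for example, by piping the file sizes into nawk; a minimal sketch (the path to the backup copy is a placeholder, and the buckets are cumulative like the percentages above):

# Cumulative file-size histogram; /path/to/backup is a placeholder.
find /path/to/backup -type f -exec ls -l {} \; | nawk '
        { size = $5 }
        size <= 512   { b512++ }
        size <= 1024  { b1k++ }
        size <= 8192  { b8k++ }
        size <= 16384 { b16k++ }
        END {
                printf("<= 512 bytes: %.1f%%\n", 100 * b512 / NR)
                printf("<= 1k:        %.1f%%\n", 100 * b1k  / NR)
                printf("<= 8k:        %.1f%%\n", 100 * b8k  / NR)
                printf("<= 16k:       %.1f%%\n", 100 * b16k / NR)
        }'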


Then I played around with newfs options for this one specific FS with this specific data mix. Changing the number of inodes did not change the outcome for our problem much (as expected). Changing the optimization from “time” to “space” (and restoring all the data from backup into the empty FS) gave us 1000 more free blocks. On a FS which had 10 million free blocks when empty this is not much, but we expect that the restore consumes fewer fragments and more full blocks than the live FS of the application (we cannot compare, as the content of the live FS has changed a lot since we had the problem). We assume that e.g. the logs of the application are split over a lot of fragments instead of full blocks, due to the small writes the application makes to the logs. The restore writes all the data in big chunks, so our expectation is that the FS will use more full blocks and fewer fragments. Because of this we expect that the live FS with this specific data mix could benefit from changing the optimization.
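
For reference, the optimization strategy can also be switched on an existing UFS with tunefs instead of recreating it; a minimal sketch (the device name is a placeholder, not our real one):

# Switch an existing UFS from time- to space-optimization; the device
# name is a placeholder for the raw device of the FS.
tunefs -o space /dev/rdsk/c0t0d0s5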

I also played around with the fragment size. The expectation was that it would only change what is reported in the output of df (reducing the reported available space for the same amount of data). Here is the result:


The difference between 1k (default) and 2k is not much. With 8k we would lose too much unused space. A fragment size of 4k looks acceptable to get a better monitoring status for this particular data mix.

Based upon this we will probably create a new FS with a fragment size of 4k and we will probably switch the optimization directly to “space”. This way we will have better reporting on the fill level of the FS for our data mix (but we will not be able to fully use the real space of the FS), and as such our monitoring should alert us in time to do a cleanup of the FS or to increase the size of the FS.
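
The commands for this would look roughly like the following sketch; the device name, mount point and dump file are placeholders, not our real values:

# Recreate the FS with 4k fragments and space-optimization, then
# restore the data from the backup; all names are placeholders.
newfs -f 4096 -o space /dev/rdsk/c0t0d0s5
mount /dev/dsk/c0t0d0s5 /application/data
cd /application/data
ufsrestore rf /path/to/backup.dump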

DTrace probes for the Linuxulator updated

If someone had a look at the earlier post about DTrace probes for the Linuxulator: I updated the patch at the same place. The difference from the previous one is that some D-scripts are fixed now to do what I meant, especially the ones which provide statistics output.

Compiling Samba 3.5.8 with AD support on Solaris 10 u8

If someone needs a Samba which is able to communicate with an AD 2008 server on a Solaris 10 system… here is how I did it.

Prerequisites

  • /opt/SUNWspro contains the Studio 12 compiler
  • tarballs of openldap-stable-20100719 (2.4.23), heimdal-1.4, samba-3.5.8
  • export PATH=/opt/SUNWspro/bin:/usr/xpg6/bin:/usr/xpg4/bin:/usr/perl5/bin:/usr/bin:/usr/openwin/bin:/bin:/usr/sfw/bin:/usr/sfw/sbin:/sbin:/usr/sbin:/usr/sadm/admin/bin:/usr/sadm/bin:/usr/java/jre/bin:/usr/ccs/bin:/usr/ucb CC=cc CXX=CC
  • DEST=/path/to/final/location (set and exported as shown below)
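
Since $DEST is used as a shell variable in all the commands below, it has to be set and exported first; the path is of course a placeholder:

export DEST=/path/to/final/location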

Compiling everything

openldap-stable-20100719 (2.4.23)

export CPPFLAGS="-I/usr/sfw/include" LDFLAGS="-L/usr/sfw/lib -R/usr/sfw/lib"
./configure --prefix=$DEST/openldap-2.4.23 --disable-slapd
make depend
make install


heimdal-1.4

./configure --prefix=$DEST/heimdal-1.4 --with-openldap=$DEST/openldap-2.4.23 --with-hdbdir=$DEST/heimdal-instance/var/heimdal --sysconfdir=$DEST/heimdal-instance/etc
cd lib/hcrypto/libtommath

Unfortunately heimdal-1.4 does not contain all the files you need here. As of this writing (if you try to do this a lot later, you may get more recent versions which may or may not work with heimdal 1.4) I was able to download them from the upstream libtommath sources.

cd ../../..
make install
mkdir -p $DEST/heimdal-instance/var/heimdal $DEST/heimdal-instance/etc


samba-3.5.8

export CPPFLAGS="-I$DEST/openldap-2.4.23/include" LDFLAGS="-L$DEST/openldap-2.4.23/lib -R$DEST/openldap-2.4.23/lib -R$DEST/samba-3.5.8/lib -R$DEST/heimdal-1.4/lib"
./configure --prefix=$DEST/samba-3.5.8 --sysconfdir=$DEST/samba-instance/etc --localstatedir=$DEST/samba-instance/var --with-privatedir=$DEST/samba-instance/private --with-lockdir=$DEST/samba-instance/var/locks --with-statedir=$DEST/samba-instance/var/locks --with-cachedir=$DEST/samba-instance/var/locks --with-piddir=$DEST/samba-instance/var/locks --with-ncalrpcdir=$DEST/samba-instance/var/ncalrpc --with-configdir=$DEST/samba-instance/config --with-ldap --with-krb5=$DEST/heimdal-1.4 --with-ads --with-quotas --with-aio-support --with-shared-modules=vfs_zfsacl
gmake install

After that you have a Samba in $DEST/samba-3.5.8, the config for it should be put into $DEST/samba-instance/config, and if you need a custom krb5.conf you can put it into $DEST/heimdal-instance/etc/.
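
To give an idea how the result can be configured as an AD member server, here is a minimal sketch of a smb.conf plus the domain join; the realm, workgroup, idmap ranges and the administrative account are placeholders, not our actual settings:

# Sketch: minimal smb.conf for an AD member server; all values are placeholders.
mkdir -p $DEST/samba-instance/config
cat > $DEST/samba-instance/config/smb.conf <<'EOF'
[global]
        workgroup   = EXAMPLE
        realm       = EXAMPLE.COM
        security    = ads
        # placeholder ID mapping ranges for winbind
        idmap uid   = 10000-20000
        idmap gid   = 10000-20000
        winbind use default domain = yes
EOF

# Join the AD domain and verify the winbind connection
# (the administrative account name is a placeholder).
$DEST/samba-3.5.8/bin/net ads join -U Administrator
$DEST/samba-3.5.8/bin/wbinfo -t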