Tarsnap usage statistics

The longer I use tarsnap, the more impressive it gets.

What follows is a list of all my privately used systems (two machines which only host jails – here named Prison1 and Prison2 – and several jails – here named according to their functionality) together with some tarsnap statistics. For each backup tarsnap prints out some statistics: the uncompressed storage space of all archives of this machine, the compressed storage space of all archives, the unique uncompressed storage space of all archives, the unique compressed storage space of all archives, and the same set of numbers for the current archive. The unique storage space is the size after deduplication. The most interesting numbers are the unique compressed ones: for a specific archive they show the amount of data which differs from all other archives, and for the total they tell how much storage space is actually used on the tarsnap server. I do not back up all my data with tarsnap. I do a full backup on external storage (zfs snapshot + zfs send | zfs receive) once in a while, and tarsnap is only for the stuff which can change daily or is very small (my mails belong to the first group, the configuration of applications or of the system to the second group). At the end of the post there is also an overview of the money I have spent so far on tarsnap for the backups.
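
As an illustration, a daily tarsnap run which prints those statistics, and the occasional full backup to external storage, could look roughly like this (the archive name and the pool/dataset names are made up for the example):

    # daily run: create an archive and print the total/compressed/unique statistics
    tarsnap -c --print-stats -f prison1-$(date +%Y%m%d) \
        /etc /usr/local/etc /home /root /var/db/pkg

    # occasional full backup: snapshot a dataset and replicate it to an external pool
    zfs snapshot space/home@full-backup
    zfs send space/home@full-backup | zfs receive external/home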

Attention: the following graphs display small values in KB, while the text talks about sizes in MB or even GB!

Prison1

The backup of one day covers 1.1 GB of uncompressed data; the subtrees I back up are /etc, /usr/local/etc, /home, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and a subversion checkout of /usr/src (excluding the kernel compile directory; I back this up because I have local modifications to FreeBSD). If I wanted to keep all days uncompressed on my harddisk, I would have to provide 10 GB of storage space. Compressed this comes down to 2.4 GB, unique uncompressed it is 853 MB, and unique compressed it is 243 MB. The following graph splits this up into all the backups I have as of this writing. I only show the unique values, as including the total values would make the unique values disappear in the graph (the values would be too small).
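
For completeness, here is a sketch of how such a backup with an exclusion could be invoked (the exclude pattern for the kernel compile directory is my assumption for the example, not necessarily the pattern actually used):

    tarsnap -c --print-stats -f prison1-$(date +%Y%m%d) \
        --exclude '*/compile/*' \
        /etc /usr/local/etc /home /root /var/db/pkg \
        /var/db/mergemaster.mtree /space/jails/flavours /usr/src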

chart


In this graph we see that I have a constant rate of new data. I think this is mostly references to already stored data (/usr/src being the most likely cause of this, nothing changed in those directories).

Internal-DNS

One day covers 7 MB of uncompressed data, all archives take 56 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/named, and /var/db/mergemaster.mtree.

chart


This graph is strange. I have no idea why there is so much data for the second and the last day. Nothing changed.
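
One way to investigate such a spike (a sketch; the archive names are made up) is to list the contents of two consecutive archives and diff the listings, to see which files tarsnap considers changed:

    tarsnap -tvf internal-dns-20091124 > /tmp/list.1
    tarsnap -tvf internal-dns-20091125 > /tmp/list.2
    diff /tmp/list.1 /tmp/list.2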

Outgoing-Postfix

One day covers 8 MB of uncompressed data, all archives take 62 MB uncompressed, unique and compressed this comes down to 1.5 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/spool/postfix, and /var/db/mergemaster.mtree.

chart


This does not look bad. I was sending a lot of mails on the 25th, and on the days in the middle I was not sending much.

IMAP

One day covers about 900 MB of uncompressed data, all archives take 7.2 GB uncompressed, unique and compressed this comes down to 526 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home (mail folders) and /usr/local/share/courier-imap.

chart


Obviously there is a not so small amount of change in my mailbox. As my spam filter is working nicely, this is directly correlated with mails from various mailing lists (mostly FreeBSD).

MySQL (for the Horde webmail interface)

One day covers 100 MB of uncompressed data, all archives take 801 MB uncompressed, unique and compressed this comes down to 19 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mysql and /var/db/mergemaster.mtree.

chart


This is correlated with the use of my webmail interface, and as such also with the amount of mail I get and send. Obviously I did not use my webmail interface at the weekend (each backup covers the changes of the previous day).

Webmail

One day covers 121 MB of uncompressed data, all archives take 973 MB uncompressed, unique and compressed this comes down to 33 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/horde and /home.

chart


This one is strange again. Nothing in the data changed.

Samba

One day covers 10 MB of uncompressed data, all archives take 72 MB uncompressed, unique and compressed this comes down to 1.9 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree and /var/db/samba.

chart


Here we see the changes to /var/db/samba; this should be mostly my Wii accessing multimedia files there.

Proxy

One day covers 31 MB of uncompressed data, all archives take 223 MB uncompressed, unique and compressed this comes down to 6.6 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg and /var/db/mergemaster.mtree.

chart


This is also a strange graph. Again, nothing changed there (the cache directory is not in the backup).

phpMyAdmin

One day covers 44 MB of uncompressed data, all archives take 310 MB uncompressed, unique and compressed this comes down to 11 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home and /usr/local/www/phpMyAdmin.

chart


And again a strange graph. No changes in the FS.

Gallery

One day covers 120 MB of uncompressed data, all archives take 845 MB uncompressed, unique and compressed this comes down to 25 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/gallery2 and /home/gallery (excluding some parts of /home/gallery).

chart


This one is OK. Friends and family accessing the pictures.

Prison2

One day covers 7 MB of uncompressed data, all archives take 28 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and /home.

chart


This one looks strange to me again. Same reasons as with the previous graphs.

Incoming-Postfix

One day covers 56 MB of uncompressed data, all archives take 225 MB uncompressed, unique and compressed this comes down to 5.4 MB. This covers /etc, /usr/local/etc, /usr/local/www/postfixadmin, /root, /var/db/pkg, /var/db/mysql, /var/spool/postfix and /var/db/mergemaster.mtree.

chart


This graph looks OK to me.

Blog-and-XMPP

One day covers 59 MB of uncompressed data, all archives take 478 MB uncompressed, unique and compressed this comes down to 14 MB. This covers /etc, /usr/local/etc, /root, /home, /var/db/pkg, /var/db/mergemaster.mtree, /var/db/mysql and /var/spool/ejabberd (yes, no backup of the web data; I have it in another jail, no need to back it up again).

chart


With the MySQL and XMPP databases in the backup, I do not think this graph is wrong.

Totals

The total amount of stored data per system is:

chart


Costs

Since I started using tarsnap (8 days ago), I have spent 38 cents; most of this is bandwidth cost for the transfer of the initial backups (29.21 cents). According to the graphs, I am currently at about 8 – 14 cents per week (or about half a dollar per month) for my backups (I still have one machine to add, and this may increase the amount in a similar way as the Prison1 system with its 2 – 3 jails). The amount of money spent in US cents (rounded!) per day is:
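
As a rough sanity check of that estimate: subtracting the 29.21 cents of initial bandwidth from the 38 cents total leaves about 8.8 cents for 8 days of regular backups, i.e. roughly 1.1 cents per day or about 7.7 cents per week, which fits the lower end of the 8 – 14 cents per week I read off the graphs.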

chart



ZFS & power-failure: stable

At the weekend there was a power failure at our disaster-recovery site. As everything should be connected to the UPS, this should not have had an impact… unfortunately the guys responsible for the cabling seem not to have provided enough power connections from the UPS. Result: one of our storage systems (all volumes in several RAID5 virtual disks) for the test systems lost power, and 10 harddisks switched into the failed state when the power was stable again (I was told there were several small power failures that day). After telling the software to have a look at the drives again, all physical disks were accepted.

All volumes on one of the virtual disks were damaged (actually, one of the virtual disks was damaged) beyond repair and we had to recover from backup.

None of the ZFS based mountpoints on the good virtual disks showed bad behavior (we ran a zpool clear + zpool scrub for those which showed checksum errors, to make us feel better). For the UFS based ones… some caused a panic after reboot and we had to run fsck on them before trying a second boot.
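
For reference, the recovery steps boil down to a few commands on Solaris (the pool and device names here are made up):

    # ZFS: reset the error counters and verify all data in the background
    zpool clear testpool
    zpool scrub testpool
    zpool status -v testpool    # watch scrub progress and remaining errors

    # UFS: check and repair the filesystem before mounting it again
    fsck -y /dev/rdsk/c1t0d0s6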

We spent a lot more time getting UFS back online than getting ZFS back online. After this experience it looks like our future Solaris 10u8 installs will have root on ZFS (our workstations are already like this, but our servers are still at Solaris 10u6).

EMC^2/Legato Networker 7.5.1.6 status

We updated Networker 7.5.1.4 to 7.5.1.6 as the Networker support thought it would fix at least one of our problems (“ghost” volumes in the DB). Unfortunately the update does not fix any bug we see in our environment.

Especially for the “post-command runs 1 minute after the pre-command even if the backup is not finished” bug this is not satisfying: no consistent DB backup where the application has to be stopped together with the DB to get a consistent snapshot (FS+DB in sync).

SUN OpenStorage presentation

At work (client site) SUN made a presentation about their OpenStorage products (Sun Storage 7000 Unified Storage Systems) today.

From a technology point of view, the software side is nothing new to me. Using SSDs as a read/write cache for ZFS is something we can do (partly) already since at least Solaris 10u6 (that is the lowest Solaris 10 version we have installed here, so I can not quickly check whether the ZIL can be on a separate disk in previous versions of Solaris, but I think we have to wait until we have updated to Solaris 10u8 before we can have the L2ARC on a separate disk) or in FreeBSD. All the other nice ZFS features available in the OpenStorage web interface are also not surprising.
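
For reference, adding a separate log device (for the ZIL) and a cache device (for the L2ARC) to an existing pool is a one-liner each; the pool and device names below are made up:

    # put the ZFS intent log on a dedicated (ideally SSD) device
    zpool add tank log c2t0d0

    # add an SSD as L2ARC read cache
    zpool add tank cache c2t1d0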

But the demonstration with the Storage Simulator impressed me. The interaction with Windows via CIFS makes the older versions of files in snapshots available in Windows (I assume this uses the Volume Shadow Copy feature of Windows), and the statistics available via DTrace in the web interface are also impressive. All this technology seems to be well integrated into an easy-to-use package for heterogeneous environments. If you wanted to set up something like this by hand, you would need to have a lot of knowledge about a lot of stuff (and in the FreeBSD case, you would probably need to augment the kernel with additional DTrace probes to get a similar granularity of statistics), nothing a small company is willing to pay for.

I know that I can get a lot of information with DTrace (from time to time I have some free cycles to extend the FreeBSD DTrace implementation with additional probes for the linuxulator), but what they did with DTrace in the OpenStorage software is great. If you try to do this at home yourself, you need some time to implement something like this (I do not think you can simply take the DTrace scripts and run them on FreeBSD; it will probably take some weeks until it works).
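
Just to illustrate the kind of ad-hoc live statistic DTrace gives you (this is of course much simpler than what the appliance actually runs):

    # count read(2) calls per process for ten seconds
    dtrace -n 'syscall::read:entry { @[execname] = count(); } tick-10s { exit(0); }'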

It is also the first time I have seen this new CIFS implementation from SUN for ZFS live in action. It looks well done. Integration with AD looks easier than doing it by hand in Samba (at least judging from the OpenStorage web interface). If we could get this in FreeBSD… it would rock!
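
On a plain (Open)Solaris system the in-kernel CIFS server boils down to something like the following; the dataset and domain names are made up, and the exact smbadm syntax may differ between releases:

    # share a dataset via the in-kernel CIFS server
    zfs set sharesmb=on tank/export/media

    # join the Windows domain (prompts for the administrator password)
    smbadm join -u Administrator example.com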

The entire OpenStorage web interface looks usable. I think SUN has a product there which allows them to enter new markets, a product which they can sell to companies which did not buy anything from SUN before (even Windows-only companies). I think even those Windows admins who never touch a command line interface (read: the low-level ones; not comparable at all with the really high-profile Windows admins of our client) would be able to get this up and running.

As it seems at the moment, our client will get a Sun Storage F5100 Flash Array for technology evaluation at the beginning of next year. Unfortunately the technology looks too easy to handle, so I assume I will have to take care of more complex things when this machine arrives… 🙁

Fighting with the SUN LDAP server

At work we decided to update our LDAP infrastructure from SUN Directory Server 5.2 to 6.3(.1). The person doing this is: me.

We have some requirements for the applications we install: we want them in specific locations so that we are able to move them between servers more easily (no need to search for all the stuff in the entire system, just the generic location and some stuff in /etc needs to be taken care of… in the best case). SUN offers DSEE 6.3.1 as a package or as a ZIP distribution. I decided to download the ZIP distribution, as this implies less stuff in non-conforming places.

The installation went OK. After the initial hurdle of searching for the SMF manifest referenced in the docs (a command is supposed to install it) and not finding it because the ZIP distribution does not contain this functionality (I see no technical reason; I installed the manifest by hand), I had the new server up, the data imported, and a workstation configured to use this new server.

The next step was to set up a second server for multi-master replication. The docs for DSEE tell you to use the web interface to configure the replication (this is preferred over the command line way). I am more of a command line guy, but OK, if it is recommended that much, I decided to give it a try… and the web interface had to be installed anyway, so that the less command line affine people in our team can have a look in case it is needed.

The bad news: it was hard to get the web interface up and running. In the package distribution all this is supposed to be very easy, but with the ZIP distribution I stumbled over a lot of hurdles. The GUI had to be installed into the Java application server by hand instead of the more automatic way when installed as a package. When following the installation procedure, the application server wants a password to start the web interface. The package version allows to register it in the Solaris management interface, the ZIP distribution does not (direct access to it works, of course). Adding a server to the directory server web interface does not work via the web interface; I had to register it on the command line. Once it is registered, not everything of the LDAP server is accessible, e.g. the error messages and similar. This may or may not be related to the fact that it is not very clear which programs/daemons/services have to run, for example do I need to use the cacaoadm of the system, or the one which comes with DSEE? In my tests it looks like they are different beasts independent of each other, but I did not try all possible combinations to see whether this affects the behavior of the web interface or not.
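
Roughly, the command line steps involved look like the following (written down from memory, so the exact syntax should be checked against the DSEE documentation; the instance path is made up):

    # check whether the common agent container is running -- note that both
    # the system and the DSEE ZIP distribution ship their own cacaoadm
    cacaoadm status

    # register an existing server instance with the control center by hand
    dsccreg add-server /local/dsee/instances/ds1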

All these problems may be documented in one or two of the DSEE documents, but at least in the installation document there is not enough documentation regarding all my questions. It seems I have to read a lot more documentation to get the web interface running… which is a shame, as the management interface which is supposed to make the administration easier needs more documentation than the product it is supposed to manage.

Oh, yes, once I had both LDAP servers registered in the web interface, setting up the replication was very easy.