Tarsnap usage statistics

The more time passes with tarsnap, the more impressive it becomes.

Following is a list of all my privately used systems (2 machines which only host jails – here named Prison1 and Prison2 – and several jails – here named according to their functionality) together with some tarsnap statistics. For each backup tarsnap prints out some statistics: the uncompressed storage space of all archives of this machine, the compressed storage space of all archives, the unique uncompressed storage space of all archives, the unique compressed storage space of all archives, and the same amount of info for the current archive. The unique storage space is the space left after deduplication. The most interesting information is the unique and compressed one: for a specific archive it shows the amount of data which differs from all other archives, and for the total it tells how much storage space is used on the tarsnap server. I do not back up all data with tarsnap. I do a full backup to external storage (zfs snapshot + zfs send | zfs receive) once in a while, and tarsnap is only for the stuff which could change daily or is very small (my mails belong to the first group, the config of applications or of the system to the second group). At the end of the post there is also an overview of the money I have spent so far on tarsnap for the backups.
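
As a rough sketch of this split, the occasional full backup and the global tarsnap statistics look roughly like this (pool and dataset names are just examples, not the ones I actually use):

    # Occasional full backup to external storage: snapshot and replicate via ZFS.
    zfs snapshot tank/data@full-2009-11-01
    zfs send tank/data@full-2009-11-01 | zfs receive backup/data
    # (later full backups would use an incremental send: zfs send -i old new | ...)

    # Global tarsnap statistics: the total / compressed / unique storage
    # numbers discussed above, for all archives stored under this key.
    tarsnap --print-stats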

Attention: the following graphs display small values in KB, while the text talks about sizes in MB or even GB!

Prison1

The backup of one day covers 1.1 GB of uncompressed data; the subtrees I back up are /etc, /usr/local/etc, /home, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and a subversion checkout of /usr/src (excluding the kernel compile directory; I back this up as I have local modifications to FreeBSD). If I wanted to keep all days uncompressed on my harddisk, I would have to provide 10 GB of storage space. Compressed this comes down to 2.4 GB, unique uncompressed this is 853 MB, and unique compressed this is 243 MB. The following graph splits this up into all the backups I have as of this writing. I only show the unique values, as including the total values would make the unique values disappear in the graph (values too small).
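
For illustration, the daily Prison1 archive could be created along these lines (the archive name and the exclude pattern for the kernel compile directory are my guesses, not the actual script):

    tarsnap -c -f prison1-$(date +%Y-%m-%d) --print-stats \
        --exclude '*/compile/*' \
        /etc /usr/local/etc /home /root /var/db/pkg \
        /var/db/mergemaster.mtree /space/jails/flavours /usr/src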

chart


In this graph we see that I have a constant rate of new data. I think this is mostly references to already stored data (/usr/src being the most likely cause of this, nothing changed in those directories).

Internal-DNS

One day covers 7 MB of uncompressed data, all archives take 56 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/named, and /var/db/mergemaster.mtree.

chart


This graph is strange. I have no idea why there is so much data for the second and the last day. Nothing changed.

Outgoing-Postfix

One day covers 8 MB of uncompressed data, all archives take 62 MB uncompressed, unique and compressed this comes down to 1.5 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/spool/postfix, and /var/db/mergemaster.mtree.

chart


This does not look bad. I was sending a lot of mail on the 25th, and in the days in the middle I was not sending much.

IMAP

One day covers about 900 MB of uncompressed data, all archives take 7.2 GB uncompressed, unique and compressed this comes down to 526 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home (mail folders) and /usr/local/share/courier-imap.

chart


Obviously there is a not so small amount of change in my mailbox. As my spam filter is working nicely, this is directly correlated with mail from various mailing lists (mostly FreeBSD).

MySQL (for the Horde webmail interface)

One day covers 100 MB of uncompressed data, all archives take 801 MB uncompressed, unique and compressed this comes down to 19 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mysql and /var/db/mergemaster.mtree.

chart


This is correlated with the use of my webmail interface, and as such is also correlated with the amount of mail I get and send. Obviously I did not use my webmail interface at the weekend (as the backup covers the changes of the previous day).

Webmail

One day covers 121 MB of uncompressed data, all archives take 973 MB uncompressed, unique and compressed this comes down to 33 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/horde and /home.

chart


This one is strange again. Nothing in the data changed.

Samba

One day covers 10 MB of uncompressed data, all archives take 72 MB uncompressed, unique and compressed this comes down to 1.9 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree and /var/db/samba.

chart


Here we see the changes to /var/db/samba; this should mostly be my Wii accessing multimedia files there.

Proxy

One day covers 31 MB of uncompressed data, all archives take 223 MB uncompressed, unique and compressed this comes down to 6.6 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg and /var/db/mergemaster.mtree.

chart


This is also a strange graph. Again, nothing changed there (the cache directory is not in the backup).

phpMyAdmin

One day covers 44 MB of uncompressed data, all archives take 310 MB uncompressed, unique and compressed this comes down to 11 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home and /usr/local/www/phpMyAdmin.

chart


And again a strange graph. No changes in the FS.

Gallery

One day covers 120 MB of uncompressed data, all archives take 845 MB uncompressed, unique and compressed this comes down to 25 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/gallery2 and /home/gallery (excluding some parts of /home/gallery).

chart


This one is OK. Friends and family accessing the pictures.

Prison2

One day covers 7 MB of uncompressed data, all archives take 28 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and /home.

chart


This one looks strange to me again. Same reasons as with the previous graphs.

Incoming-Postfix

One day covers 56 MB of uncompressed data, all archives take 225 MB uncompressed, unique and compressed this comes down to 5.4 MB. This covers /etc, /usr/local/etc, /usr/local/www/postfixadmin, /root, /var/db/pkg, /var/db/mysql, /var/spool/postfix and /var/db/mergemaster.mtree.

chart


This graph looks OK to me.

Blog-and-XMPP

One day covers 59 MB of uncompressed data, all archives take 478 MB uncompressed, unique and compressed this comes down to 14 MB. This covers /etc, /usr/local/etc, /root, /home, /var/db/pkg, /var/db/mergemaster.mtree, /var/db/mysql and /var/spool/ejabberd (yes, no backup of the web data; I have it in another jail, no need to back it up again).

chart


With the MySQL and XMPP databases in the backup, I do not think this graph is wrong.

Totals

The total amount of stored data per system is:

chart


Costs

Since I started using tarsnap (8 days ago), I have spent 38 cents; most of this is bandwidth cost for the transfer of the initial backup (29.21 cents). According to the graphs, I am currently at about 8 – 14 cents per week (or about half a dollar per month) for my backups (I still have a machine to add, and this may increase the amount in a similar way to the Prison1 system with 2 – 3 jails). The amount of money spent in US cents (rounded!) per day is:

chart


ZFS & power-failure: stable

At the weekend there was a power failure at our disaster-recovery site. As everything should be connected to the UPS, this should not have had an impact… unfortunately the guys responsible for the cabling seem not to have provided enough power connections from the UPS. Result: one of our storage systems (all volumes in several RAID5 virtual disks) for the test systems lost power, and 10 harddisks switched into failed state when the power was stable again (I was told there were several small power failures that day). After telling the software to have a look at the drives again, all physical disks were accepted.

All volumes on one of the virtual disks were damaged (actually, one of the virtual disks was damaged) beyond repair and we had to recover from backup.

All ZFS based mountpoints on the good virtual disks did not show bad behavior (zpool clear + zpool scrub for those which showed checksum errors, to make us feel better). For the UFS based ones… some caused a panic after reboot and we had to run fsck on them before trying a second boot.
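
For reference, the cleanup amounted to roughly the following commands (pool, device, and slice names are placeholders, not the real ones):

    # ZFS: reset the error counters and verify the pool end to end.
    zpool clear testpool
    zpool scrub testpool
    zpool status -v testpool    # confirm the scrub finished without errors

    # UFS: run fsck by hand on the filesystems which caused a panic,
    # before attempting the second boot.
    fsck -y /dev/rdsk/c2t1d0s6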

We spent a lot more time getting UFS back online than getting ZFS back online. After this experience it looks like our future Solaris 10u8 installs will be with root on ZFS (our workstations are already like this, but our servers are still at Solaris 10u6).

EMC^2/Legato Networker 7.5.1.6 status

We updated Networker 7.5.1.4 to 7.5.1.6 as the Networker support thought it would fix at least one of our problems (“ghost” volumes in the DB). Unfortunately the update does not fix any bug we see in our environment.

Especially for the “post-command runs 1 minute after pre-command even if the backup is not finished” bug this is not satisfying: no consistent DB backup where the application has to be stopped together with the DB to get a consistent snapshot (FS+DB in sync).

SUN OpenStorage presentation

At work (client site) SUN made a presentation about their OpenStorage products (Sun Storage 7000 Unified Storage Systems) today.

From a technology point of view, the software side is nothing new to me. Using SSDs as a read/write cache for ZFS is something we can do (partly) already since at least Solaris 10u6 (that is the lowest Solaris 10 version we have installed here, so I can not quickly check whether the ZIL can be on a separate disk in previous versions of Solaris, but I think we have to wait until we have updated to Solaris 10u8 before we can have the L2ARC on a separate disk) or in FreeBSD. All the other nice ZFS features available in the OpenStorage web interface are also not surprising.
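
On a plain Solaris or FreeBSD ZFS setup this boils down to something like the following (pool and device names are invented):

    # Separate log device (dedicated ZIL) -- already possible on our Solaris 10u6 boxes.
    zpool add tank log c4t2d0

    # Separate cache device (L2ARC) -- needs a newer pool version,
    # hence the wait for Solaris 10u8.
    zpool add tank cache c4t3d0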

But the demonstration with the Storage Simulator impressed me. The interaction with Windows via CIFS makes the older versions of files in snapshots available in Windows (I assume this is the Volume Shadow Copy feature of Windows), and the statistics available via DTrace in the web interface are also impressive. All this technology seems to be well integrated into an easy-to-use package for heterogeneous environments. If you wanted to set up something like this by hand, you would need a lot of knowledge about a lot of stuff (and in the FreeBSD case, you would probably need to augment the kernel with additional DTrace probes to be able to get a similar granularity of statistics), nothing a small company is willing to pay for.

I know that I can get a lot of information with DTrace (from time to time I have some free cycles to extend the FreeBSD DTrace implementation with additional probes for the linuxulator), but what they did with DTrace in the OpenStorage software is great. If you try to do this at home yourself, you need some time to implement something like this (I do not think you can just take their DTrace scripts and run them on FreeBSD; it would probably take some weeks until it works).
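
Just as a trivial example of the kind of question DTrace answers from the shell; the per-client and per-file analytics in the OpenStorage interface go far beyond this one-liner:

    # Count read(2) calls per process for ten seconds, then print the aggregation.
    dtrace -n 'syscall::read:entry { @reads[execname] = count(); } tick-10s { exit(0); }'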

It is also the first time I have seen this new CIFS implementation from SUN in ZFS live in action. It looks well done. Integration with AD looks easier than doing it by hand in Samba (at least judging from the OpenStorage web interface). If we could get this in FreeBSD… it would rock!

The entire OpenStorage web interface looks usable. I think SUN has a product there which allows them to enter new markets: a product which they can sell to companies which did not buy anything from SUN before (even Windows-only companies). I think even those Windows admins who never touch a command line interface (read: the low-level ones; not comparable at all with the really high-profile Windows admins of our client) could get this up and running.

As it seems at the moment, our client will get a Sun Storage F5100 Flash Array for technology evaluation at the beginning of next year. Unfortunately the technology looks too easy to handle, so I assume I will have to take care of more complex things when this machine arrives… 🙁

Fighting with the SUN LDAP server

At work we decided to update our LDAP infrastructure from SUN Directory Server 5.2 to 6.3(.1). The person doing this is: me.

We have some requirements for the applications we install: we want them in specific locations so that we are able to move them between servers more easily (no need to search for all the stuff in the entire system, just the generic location and some stuff in /etc need to be taken care of… in the best case). SUN offers DSEE 6.3.1 as a package or as a ZIP-distribution. I decided to download the ZIP-distribution, as this implies less stuff in non-conforming places.

The installation went OK. After the initial hurdle of searching for the SMF manifest referenced in the docs (a command is supposed to install it) and not finding it because the ZIP-distribution does not contain this functionality (I see no technical reason; I installed the manifest by hand), I had the new server up, the data imported, and a workstation configured to use this new server.
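
Installing the manifest by hand boiled down to the usual SMF commands; the manifest path and the service name below are guesses for illustration, not taken from the DSEE 6.3.1 docs:

    # Import the SMF manifest shipped with the ZIP-distribution and enable it.
    svccfg import /opt/dsee6/resources/ds6.xml   # path is a guess
    svcadm enable ds6                            # service name is a guess
    svcs -l ds6                                  # verify the instance is online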

The next step was to set up a second server for multi-master replication. The docs for DSEE say to use the web interface to configure the replication (this is preferred over the command line way). I am more of a command line guy, but OK, if it is recommended that strongly, I decided to give it a try… and the web interface had to be installed anyway, so that the less command-line-affine people in our team can have a look in case it is needed.

The bad news: it was hard to get the web interface up and running. In the package distribution all this is supposed to be very easy, but with the ZIP-distribution I stumbled over a lot of hurdles. The GUI had to be installed into the Java application server by hand instead of the more automatic way used when installed as a package. When following the installation procedure, the application server wants a password to start the web interface. The package version allows registering it in the Solaris management interface, the ZIP-distribution does not (direct access to it works, of course). Adding a server to the directory server web interface does not work via the web interface; I had to register it on the command line. Once it is registered, not everything of the LDAP server is accessible, e.g. the error messages and similar. This may or may not be related to the fact that it is not very clear which programs/daemons/services have to run; for example, do I need to use the cacaoadm of the system, or the one which comes with DSEE? In my tests it looks like they are different beasts, independent from each other, but I did not try all possible combinations to see if this affects the behavior of the web interface or not.

All these problems may be documented in one or two of the DSEE documents, but at least in the installation document there is not enough documentation regarding all my questions. It seems I have to read a lot more documentation to get the web interface running… which is a shame, as the management interface which is supposed to make administration easier needs more documentation than the product it is supposed to manage.

Oh yes, once I had both LDAP servers registered in the web interface, setting up the replication was very easy.