Alexander Leidinger

Just another weblog

Nov 25

Tarsnap usage statistics

The more time I spend with tarsnap, the more impressive it gets.

Following is a list of all my privately used systems (two machines which only host jails, here named Prison1 and Prison2, and several jails, here named according to their functionality) together with some tarsnap statistics. For each backup tarsnap prints out some statistics: the uncompressed storage space of all archives of this machine, the compressed storage space of all archives, the unique uncompressed storage space of all archives, the unique compressed storage space of all archives, and the same amount of info for the current archive. The unique storage space is what remains after deduplication. The most interesting information is the unique and compressed one: for a specific archive it shows the amount of data which differs from all other archives, and for the total it tells how much storage space is used on the tarsnap server.

I do not back up all data with tarsnap. I do a full backup to external storage (zfs snapshot + zfs send | zfs receive) once in a while, and tarsnap is only for the stuff which could change daily or is very small (my mails belong to the first group, the config of applications or of the system to the second group). At the end of the post there is also an overview of the money I have spent so far on tarsnap for the backups.
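For illustration, the statistics block tarsnap prints after each run looks roughly like this (the layout is from memory, so take it as a sketch; tarsnap actually prints plain byte counts, I use the rounded Prison1 sizes from below here, and the "New data" line is invented):

                                        Total size  Compressed size
    All archives                           10.0 GB           2.4 GB
      (unique data)                         853 MB           243 MB
    This archive                            1.1 GB          ~270 MB
    New data                                ~30 MB            ~8 MB

The full backup to external storage mentioned above is just a ZFS snapshot piped into a second pool (pool and snapshot names are placeholders):

    zfs snapshot tank@full-20091125
    zfs send tank@full-20091125 | zfs receive external/tank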

Attention: the following graphs display small values in KB, while the text talks about sizes in MB or even GB!

Prison1

The backup of one day covers 1.1 GB of uncompressed data; the subtrees I back up are /etc, /usr/local/etc, /home, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and a subversion checkout of /usr/src (excluding the kernel compile directory; I back this up because I have local modifications to FreeBSD). If I wanted to keep all days uncompressed on my harddisk, I would have to provide 10 GB of storage space. Compressed this comes down to 2.4 GB, unique uncompressed it is 853 MB, and unique compressed 243 MB. The following graph splits this up into all the backups I have as of this writing. I only show the unique values, as including the total values would make the unique values disappear in the graph (values too small).
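A sketch of what the daily invocation for this machine could look like (the exclude pattern for the kernel compile directory is hypothetical, and I assume the subversion checkout lives at /usr/src; the rest follows the subtree list above):

    tarsnap -c --print-stats \
        -f prison1-$(date +%Y%m%d) \
        --exclude 'usr/src/sys/*/compile/*' \
        /etc /usr/local/etc /home /root \
        /var/db/pkg /var/db/mergemaster.mtree \
        /space/jails/flavours /usr/src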

chart


In this graph we see that I have a constant rate of new data. I think this is mostly references to already stored data (/usr/src being the most likely cause; nothing changed in those directories).

Internal-DNS

One day covers 7 MB of uncompressed data, all archives take 56 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/named, and /var/db/mergemaster.mtree.

chart


This graph is strange. I have no idea why there is so much data for the second and the last day. Nothing changed.

Outgoing-Postfix

One day covers 8 MB of uncompressed data, all archives take 62 MB uncompressed, unique and compressed this comes down to 1.5 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/spool/postfix, and /var/db/mergemaster.mtree.

chart


This does not look bad: I was sending a lot of mails on the 25th, and on the days in the middle I was not sending much.

IMAP

One day covers about 900 MB of uncompressed data, all archives take 7.2 GB uncompressed, unique and compressed this comes down to 526 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home (mail folders) and /usr/local/share/courier-imap.

chart


Obviously there is a not-so-small amount of change in my mailbox. As my spamfilter is working nicely, this is directly correlated with mails from various mailinglists (mostly FreeBSD).

MySQL (for the Horde webmail interface)

One day covers 100 MB of uncompressed data, all archives take 801 MB uncompressed, unique and compressed this comes down to 19 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mysql and /var/db/mergemaster.mtree.

chart


This is correlated with the use of my webmail interface, and as such also with the amount of mail I get and send. Obviously I did not use my webmail interface at the weekend (the backup covers the changes of the previous day).

Webmail

One day covers 121 MB of uncompressed data, all archives take 973 MB uncompressed, unique and compressed this comes down to 33 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/horde and /home.

chart


This one is strange again. Nothing in the data changed.

Samba

One day covers 10 MB of uncompressed data, all archives take 72 MB uncompressed, unique and compressed this comes down to 1.9 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree and /var/db/samba.

chart


Here we see the changes to /var/db/samba; this should be mostly my Wii accessing multimedia files there.

Proxy

One day covers 31 MB of uncompressed data, all archives take 223 MB uncompressed, unique and compressed this comes down to 6.6 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg and /var/db/mergemaster.mtree.

chart


This is also a strange graph. Again, nothing changed there (the cache directory is not in the backup).

phpMyAdmin

One day covers 44 MB of uncompressed data, all archives take 310 MB uncompressed, unique and compressed this comes down to 11 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home and /usr/local/www/phpMyAdmin.

chart


And again a strange graph. No changes in the FS.

Gallery

One day covers 120 MB of uncompressed data, all archives take 845 MB uncompressed, unique and compressed this comes down to 25 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/gallery2 and /home/gallery (excluding some parts of /home/gallery).

chart


This one is OK: friends and family accessing the pictures.

Prison2

One day covers 7 MB of uncompressed data, all archives take 28 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and /home.

chart


This one looks strange to me again. Same reasons as with the previous graphs.

Incoming-Postfix

One day covers 56 MB of uncompressed data, all archives take 225 MB uncompressed, unique and compressed this comes down to 5.4 MB. This covers /etc, /usr/local/etc, /usr/local/www/postfixadmin, /root, /var/db/pkg, /var/db/mysql, /var/spool/postfix and /var/db/mergemaster.mtree.

chart


This graph looks OK to me.

Blog-and-XMPP

One day covers 59 MB of uncompressed data, all archives take 478 MB uncompressed, unique and compressed this comes down to 14 MB. This covers /etc, /usr/local/etc, /root, /home, /var/db/pkg, /var/db/mergemaster.mtree, /var/db/mysql and /var/spool/ejabberd (yes, no backup of the web data; I have it in another jail, no need to back it up again).

chart


With the MySQL and XMPP databases in the backup, I do not think this graph is wrong.

Totals

The total amount of stored data per system is:

chart


Costs

Since I started using tarsnap (8 days ago), I have spent 38 cents; most of this is bandwidth cost for the transfer of the initial backup (29.21 cents). According to the graphs, I am currently at about 8 to 14 cents per week (or about half a dollar per month) for my backups (I still have a machine to add, and this may increase the amount in a similar way as the Prison1 system with its 2 to 3 jails). The amount of money spent in US cents (rounded!) per day is:
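As a back-of-the-envelope check (a sketch, assuming a storage price of 250 picodollars per byte-month, which is the rate tarsnap advertises, and ignoring the bandwidth part):

    # roughly 0.9 GB of unique compressed data across all systems
    echo 'scale=2; 0.9 * 10^9 * 250 / 10^12 * 100' | bc
    # => ~22 (US cents per month for storage alone)

The difference to the roughly half a dollar per month above would then be mostly bandwidth for the daily uploads.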

chart


Nov 24

ZFS & power-failure: stable

At the weekend there was a power failure at our disaster-recovery site. As everything should be connected to the UPS, this should not have had an impact… unfortunately the guys responsible for the cabling seem not to have provided enough power connections from the UPS. The result: one of our storage systems (all volumes in several RAID5 virtual disks) for the test systems lost power, and 10 harddisks switched into failed state when the power was stable again (I was told there were several small power failures that day). After telling the software to have a look at the drives again, all physical disks were accepted.

All volumes on one of the virtual disks were damaged beyond repair (actually, the virtual disk itself was damaged) and we had to recover from backup.

All ZFS based mountpoints on the good virtual disks did not show bad behavior (we ran zpool clear + zpool scrub for those which showed checksum errors, to make us feel better). As for the UFS based ones… some caused a panic after reboot, and we had to run fsck on them before trying a second boot.
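For reference, the ZFS side of the cleanup was just this (pool and device names here are placeholders):

    zpool status -x       # list pools which have errors
    zpool clear testpool  # reset the error counters
    zpool scrub testpool  # re-read and verify every block's checksum

    # and for the UFS filesystems which panicked, before the second boot:
    fsck -y /dev/rdsk/c1t0d0s6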

We spent a lot more time getting UFS back online than getting ZFS back online. After this experience it looks like our future Solaris 10u8 installs will be with root on ZFS (our workstations are already set up like this, but our servers are still at Solaris 10u6).

Nov 24

EMC^2/Legato Networker 7.5.1.6 status

We updated Networker 7.5.1.4 to 7.5.1.6 because Networker support thought it would fix at least one of our problems (“ghost” volumes in the DB). Unfortunately the update does not fix any of the bugs we see in our environment.

Especially for the “post-command runs 1 minute after the pre-command even if the backup is not finished” bug this is not satisfying: no consistent DB backup where the application has to be stopped together with the DB to get a consistent snapshot (FS + DB in sync).
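To make clear what the pre/post pair is supposed to guarantee, the intended ordering is simply this (a sketch with placeholder service names, not our actual scripts):

    # pre-command: quiesce application and DB before the save starts
    /etc/init.d/myapp stop
    /etc/init.d/mydb stop
    # ... Networker saves the filesystems here ...
    # post-command: must run only after the save has really finished,
    # not a fixed minute after the pre-command (which is the bug)
    /etc/init.d/mydb start
    /etc/init.d/myapp start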

Nov 19

SUN OpenStorage presentation

At work (client site) SUN gave a presentation about their OpenStorage products (Sun Storage 7000 Unified Storage Systems) today.

From a technology point of view, the software side is nothing new to me. Using SSDs as a read/write cache for ZFS is something we can (partly) do already since at least Solaris 10u6 (that is the lowest Solaris 10 version we have installed here, so I can not quickly check whether the ZIL can be on a separate disk in earlier versions of Solaris, but I think we have to wait until we update to Solaris 10u8 before we can have the L2ARC on a separate disk), or in FreeBSD. All the other nice ZFS features available in the OpenStorage web interface are also not surprising.
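For comparison, on a plain Solaris or FreeBSD system those SSD caches are added with one command each (pool and device names are placeholders; the cache vdev needs the newer ZFS version mentioned above):

    zpool add tank log c2t0d0    # separate log device (ZIL) on an SSD
    zpool add tank cache c2t1d0  # SSD as second-level read cache (L2ARC)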

But the demonstration with the Storage Simulator impressed me. The interaction with Windows via CIFS makes the older versions of files in snapshots available in Windows (I assume via the Volume Shadow Copy feature of Windows), and the statistics available via DTrace in the web interface are also impressive. All this technology seems to be well integrated into an easy-to-use package for heterogeneous environments. If you wanted to set up something like this by hand, you would need a lot of knowledge about a lot of stuff (and in the FreeBSD case, you would probably need to augment the kernel with additional DTrace probes to get a similar granularity of statistics); this is nothing a small company is willing to pay for.

I know that I can get a lot of information with DTrace (from time to time I have some free cycles to extend the FreeBSD DTrace implementation with additional probes for the linuxulator), but what they did with DTrace in the OpenStorage software is great. If you try to do this at home yourself, you need some time to implement something similar (I do not think you can take their DTrace scripts and just run them on FreeBSD; it would probably take some weeks until that works).
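The flavor of statistic their interface shows can be approximated by hand; e.g. a generic one-liner like this (run as root on Solaris, or on FreeBSD with DTrace enabled) gives a read latency distribution per program:

    dtrace -n '
      syscall::read:entry { self->ts = timestamp; }
      syscall::read:return /self->ts/ {
        @lat[execname] = quantize(timestamp - self->ts);
        self->ts = 0;
      }'

Their Analytics obviously does a lot more than this (aggregation over time, drill-down, graphing), which is exactly the part that would take the weeks of work.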

It is also the first time I have seen SUN's new CIFS implementation in ZFS live in action. It looks well done. Integration with AD looks easier than doing it by hand in Samba (at least judging from the OpenStorage web interface). If we could get this into FreeBSD… it would rock!

The entire OpenStorage web interface looks usable. I think SUN has a product there which allows them to enter new markets: a product which they can sell to companies which did not buy anything from SUN before (even Windows-only companies). I think even those Windows admins who never touch a command line interface (read: the low-level ones, not comparable at all with the really high-profile Windows admins of our client) would be able to get this up and running.

As it seems at the moment, our client will get a Sun Storage F5100 Flash Array for technology evaluation at the beginning of next year. Unfortunately the technology looks too easy to handle, so I assume I will have to take care of more complex things when this machine arrives… :(

Nov 19

Fighting with the SUN LDAP server

At work we decided to update our LDAP infrastructure from SUN Directory Server 5.2 to 6.3(.1). The person doing this is: me.

We have some requirements for the applications we install: we want them in specific locations so that we are able to move them between servers more easily (no need to search for all the stuff in the entire system; just the generic location and some stuff in /etc needs to be taken care of… in the best case). SUN offers DSEE 6.3.1 as a package or as a ZIP distribution. I decided to download the ZIP distribution, as this implies less stuff in non-conforming places.

The installation went OK. After the initial hurdle of searching for the SMF manifest referenced in the docs (a command is supposed to install it) and not finding it because the ZIP distribution does not contain this functionality (I see no technical reason for that; I installed the manifest by hand), I had the new server up, the data imported, and a workstation configured to use this new server.

The next step was to set up a second server for multi-master replication. The docs for DSEE say to use the web interface to configure the replication (it is preferred over the command line way). I am more of a command line guy, but OK, if it is recommended that much I decided to give it a try… and the web interface had to be installed anyway, so that the less command-line-affine people in our team can have a look in case it is needed.

The bad news: it was hard to get the web interface up and running. In the package distribution all this is supposed to be very easy, but with the ZIP distribution I stumbled over a lot of hurdles. The GUI had to be installed into the Java application server by hand instead of the more automatic way used when installed as a package. When following the installation procedure, the application server wants a password to start the web interface. The package version allows registering it in the Solaris management interface, the ZIP distribution does not (direct access to it works, of course). Adding a server to the directory server web interface does not work via the web interface itself; I had to register it on the command line. Once it is registered, not everything of the LDAP server is accessible, e.g. the error messages and similar. This may or may not be related to the fact that it is not very clear which programs/daemons/services have to run; for example, do I need to use the cacaoadm of the system, or the one which comes with DSEE? In my tests it looks like they are different beasts, independent of each other, but I did not try all possible combinations to see whether this affects the behavior of the web interface or not.
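For the record, the command line registration itself is short. As far as I understand the ZIP distribution, it is something like the following (the instance path is a placeholder, and I am not sure this is the minimal sequence):

    ./dsccsetup ads-create                     # initialize the DSCC registry
    ./dsccreg add-server /local/dsee/ds-inst1  # register the LDAP instance with DSCC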

All these problems may be documented in one or two of the DSEE documents, but at least in the installation document there is not enough documentation regarding all my questions. It seems I have to read a lot more documentation to get the web interface running… which is a shame, as the management interface which is supposed to make administration easier needs more documentation than the product it is supposed to manage.

Oh, yes, once I had both LDAP servers registered in the web interface, setting up the replication was very easy.
