Sockets and nullfs: works now in -current

I just updated to a recent -current and tried the new nullfs. Sockets (e.g. the MySQL one) now work with nullfs. There is no need anymore to keep e.g. jails on the same FS and hardlink the socket just to avoid using TCP in MySQL (or an IP for the jail at all).

Great work!


How I setup a Jail-Host

Everyone has his own way of setting up a machine to serve as a host for multiple jails. Here is my way, YMMV.

Initial FreeBSD install

I use several harddisks in a software-RAID setup. It does not matter much whether you set them up with one big partition or with several partitions; feel free to follow your preferences here. My way of partitioning the harddisks is described in a previous post. That post only shows the commands to split the harddisks into two partitions and to use ZFS for the rootfs. The commands to initialize the ZFS data partition are not described, but you should be able to figure them out yourself (and you can decide on your own which RAID level you want to use). For this FS I set atime, exec and setuid to off in the ZFS options.

On the ZFS data partition I create a new dataset for the system. For this dataset I also set atime, exec and setuid to off in the ZFS options. Inside this dataset I create datasets for /home, /usr/compat, /usr/local, /usr/obj, /usr/ports, /usr/src, /usr/sup and /var/ports. There are two ways of doing this. One way is to set the ZFS mountpoint. The way I prefer is to set relative symlinks to it, e.g. “cd /usr; ln -s ../data/system/usr_obj obj”. I do this because this way I can temporarily import the pool on another machine (e.g. my desktop, if the need arises) without fear of interfering with the system. The ZFS options are set as follows:

ZFS options for data/system/*




The exec option for home is not necessary if you keep separate datasets for each user. Normally I keep separate datasets for home directories, but Jail-Hosts should not have users (except the admins, but they should not keep data in their homes), so I just create a single home dataset. The setuid option for usr_ports should not be necessary if you redirect the build directory of the ports to a different place (WRKDIRPREFIX in /etc/make.conf).
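Put into commands, the layout above might be created like this (a sketch; the dataset names follow the text, and the per-dataset options are just examples of the scheme described above):

```shell
# create the system dataset and one dataset per directory
zfs create -o atime=off -o exec=off -o setuid=off data/system
zfs create data/system/home
zfs create data/system/usr_obj
zfs create data/system/usr_ports
# ... and so on for usr_local, usr_src, usr_sup, var_ports, compat

# instead of setting ZFS mountpoints, use relative symlinks into the pool
cd /usr; ln -s ../data/system/usr_obj obj
cd /usr; ln -s ../data/system/usr_ports ports

# redirect port builds so usr_ports can stay setuid=off
echo 'WRKDIRPREFIX=/var/ports' >> /etc/make.conf
```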

Installing ports

The ports I install by default are net/rsync, ports-mgmt/portaudit, ports-mgmt/portmaster, shells/zsh, sysutils/bsdstats, sysutils/ezjail, sysutils/smartmontools and sysutils/tmux.
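With the ports tree in place, these can be installed in one go, e.g. (a sketch; whether you build from ports or use packages is a matter of taste):

```shell
# bootstrap portmaster from the ports tree, then let it handle the rest
cd /usr/ports/ports-mgmt/portmaster && make install clean
portmaster net/rsync ports-mgmt/portaudit shells/zsh sysutils/bsdstats \
    sysutils/ezjail sysutils/smartmontools sysutils/tmux
```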

Basic setup

In root's crontab I setup a job to run a portsnap update once a day (I pick a random number between 0 and 59 for the minute, but keep a fixed hour). I also have http_proxy specified in /etc/profile, so that the machines in this network do not download everything from far away again and again, but can get the data from the local caching proxy. As a small watchdog I have an @reboot rule in the crontab which notifies me when a machine reboots:

@reboot grep "kernel boot file is" /var/log/messages | mail -s "`hostname` rebooted" root >/dev/null 2>&1

This does not replace a real monitoring solution, but in cases where real monitoring is overkill it provides a nice HEADS-UP (and shows you directly which kernel is loaded in case a non-default one is used).
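The portsnap job and the proxy setting might look like this (the minute, hour and proxy host are examples):

```shell
# /etc/crontab: daily portsnap run at a fixed hour,
# random minute picked once when setting up the machine
37 4 * * *  root  /usr/sbin/portsnap cron update

# /etc/profile: let fetch & friends use the local caching proxy
http_proxy=http://proxy.example.net:3128/; export http_proxy
```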

Some default aliases I use everywhere are:

alias portmlist="portmaster -L | egrep -B1 '(ew|ort) version|Aborting|installed|dependencies|IGNORE|marked|Reason:|MOVED|deleted|exist|update' | grep -v '^--'"
alias portmclean="portmaster -t --clean-distfiles --clean-packages"
alias portmcheck="portmaster -y --check-depends"

Additional devfs rules for Jails

I need to give access to some specific devices in some jails. For this I have to setup a custom /etc/devfs.rules file. The file contains some ID numbers which need to be unique in the system. On a 9-current system the numbers one to four are already used (see /etc/defaults/devfs.rules), so the next available number is five. First I present my devfs.rules entries, then I explain them:

[devfsrules_unhide_audio=5]
add path 'audio*' unhide
add path 'dsp*' unhide
add path midistat unhide
add path 'mixer*' unhide
add path 'music*' unhide
add path 'sequencer*' unhide
add path sndstat unhide
add path speaker unhide

[devfsrules_unhide_printers=6]
add path 'lpt*' unhide
add path 'ulpt*' unhide user 193 group 193
add path 'unlpt*' unhide user 193 group 193

[devfsrules_unhide_zfs=7]
add path zfs unhide

[devfsrules_jail_printers_zfs=8]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add include $devfsrules_unhide_printers
add include $devfsrules_unhide_zfs

[devfsrules_jail_zfs=9]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add include $devfsrules_unhide_zfs

The devfsrules_unhide_XXX profiles give access to specific devices, e.g. all the sound related devices or the local printers. The devfsrules_jail_XXX profiles combine the unhide rules for specific jail setups. Unfortunately the include directive is not recursive, so we can not include the default devfsrules_jail profile and need to replicate its contents; the first three includes of each devfsrules_jail_XXX profile accomplish this. The unhide_zfs rule gives access to /dev/zfs, which is needed if you attach one or more ZFS datasets to a jail. I will explain how to use those profiles with ezjail in a follow-up post.
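As a quick preview, a ruleset is selected per jail via the usual jail_* variables, which ezjail's per-jail config files also use (the jail name is a placeholder and devfsrules_jail_XXX stands for one of the profiles above):

```shell
# /usr/local/etc/ezjail/examplejail (fragment)
export jail_examplejail_devfs_enable="YES"
export jail_examplejail_devfs_ruleset="devfsrules_jail_XXX"
```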

Jails setup

I use ezjail to manage jails; it is more comfortable than doing everything by hand, while at the same time it allows me to do some things by hand. My jails normally reside inside ZFS datasets. For this reason I have setup a special area (the ZFS dataset data/jails) which is handled by ezjail. The corresponding ezjail.conf settings are:
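For such a ZFS-backed jail area the settings would be along these lines (a sketch; check the ezjail documentation for the exact variables of your version):

```shell
# /usr/local/etc/ezjail.conf (fragment)
ezjail_jaildir=/data/jails         # where ezjail keeps everything
ezjail_use_zfs="YES"               # basejail/newjail as ZFS datasets
ezjail_use_zfs_for_jails="YES"     # each jail gets its own dataset
ezjail_jailzfs="data/jails"        # the dataset handled by ezjail
```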


I also disabled procfs and fdescfs in jails (but they can be enabled later for specific jails if necessary).

Unfortunately ezjail (as of v3.1) sets the mountpoint of a newly created dataset even if it is not necessary. For this reason I always issue a “zfs inherit mountpoint ” after creating a jail. This simplifies the case where you want to move/rename a dataset and want to have the mountpoint automatically follow the change.
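For example (the jail dataset name is a placeholder):

```shell
# drop the explicit mountpoint set by ezjail, so the mountpoint
# follows the dataset on a later rename
zfs inherit mountpoint data/jails/examplejail
```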

The access flags of the /data/jails directory are 700; this prevents local users (there should be none, but better safe than sorry) from getting access to files of users in jails with the same UID.

After the first create/update of the ezjail basejail the ZFS options of basejail (data/jails/basejail) and newjail (data/jails/newjail) need to be changed: for both, exec and setuid should be changed to “on”. The same needs to be done for every newly created jail (before starting it).
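In commands, using the dataset names from the text:

```shell
zfs set exec=on data/jails/basejail
zfs set setuid=on data/jails/basejail
zfs set exec=on data/jails/newjail
zfs set setuid=on data/jails/newjail
```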

The default ezjail flavour

In my default ezjail flavour I create some default user(s) with a base-system shell (via /data/jails/flavours/mydef/ezjail.flavour) before the package install, and change the shell to my preferred zsh afterwards (this is only valid if the jails are used only by in-house people; if you want to offer lightweight virtual machines to (unknown) customers, the default user(s) and shell(s) are obviously open to discussion). At the end I also run a “/usr/local/sbin/portmaster -y --check-depends” to make sure everything is in a sane state.
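The interesting parts of such an ezjail.flavour script could look like this (a sketch; the user name and shell path are examples, and the package-install part that ezjail generates is left out):

```shell
#!/bin/sh
# runs inside the jail on first start

# create the default user with a base-system shell (zsh is not there yet)
pw useradd admin -m -s /bin/sh -c "Jail admin"

# ... package installation from the pkg/ directory happens here ...

# afterwards: switch to the preferred shell and verify the package state
chpass -s /usr/local/bin/zsh admin
/usr/local/sbin/portmaster -y --check-depends
```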

For the packages (/data/jails/flavours/mydef/pkg/) I add symlinks to the unversioned packages I want to install. I keep the packages in a common directory (think about setting PACKAGES in make.conf and using PACKAGES/Latest/XYZ.tbz) if they can be shared over various flavours, and they are unversioned so that I do not have to update the version number each time there is an update. The packages I install by default are bsdstats, portaudit, portmaster, zsh, tmux and all their dependencies.

In case you use jails to virtualize services and consolidate servers (e.g. DNS, HTTP, MySQL each in a separate jail) instead of providing lightweight virtual machines to (unknown) customers, there is also a benefit in sharing the distfiles and packages between jails on the same machine. To do this I create /data/jails/flavours/mydef/shared/ports/{distfiles,packages}, which are then mounted via nullfs or NFS into all the jails from a common directory. This requires the following variables in /data/jails/flavours/mydef/etc/make.conf (I also keep the packages for different CPU types and compilers in the same subtree; if you do not care, just remove the “/${CC}/${CPUTYPE}” from the PACKAGES line):

DISTDIR=  /shared/ports/distfiles
PACKAGES= /shared/ports/packages/${CC}/${CPUTYPE}
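The nullfs mounts themselves go into the per-jail fstab that ezjail maintains; the host-side shared directory and the jail name here are placeholders:

```shell
# /etc/fstab.examplejail
/data/shared/ports  /data/jails/examplejail/shared/ports  nullfs  rw  0  0
```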

New jails

A future post will cover how I setup new jails in such a setup and how I customize the start order of jails or use some non-default settings for the jail-startup.


Stabilizing 7-stable…

The 7-stable system on which I have stability problems after an update from 7.1 to 7.2/7-stable is now semi-stable.

The watchdog reboots the machine after one minute of no reaction (currently it is able to run 3–4 hours), and the jails come up without problems now.

The problem with the jails was that e.g. the mysql-server startup went into the STOP state because TTY-input was “requested”. I solved the problem by using /dev/null as input on jail-startup. On -current I do not see this behavior (I have a 9-current system with a lot of jails which reboots every X days, and there mysql does not go into the STOP state).

I also start the jails in the background, so that one blocking jail does not block everything (done like in -current).

To say this with code:

--- /usr/src/etc/rc.d/jail      2009-02-07 15:04:35.000000000 +0100
+++ /etc/rc.d/jail      2009-12-16 17:03:12.000000000 +0100
@@ -556,7 +556,8 @@
 eval ${_setfib} jail ${_flags} -i ${_rootdir} ${_hostname} \
-                       \\"${_addrl}\\" ${_exec_start} > ${_tmp_jail} 2>&1
+                       \\"${_addrl}\\" ${_exec_start} > ${_tmp_jail} 2>&1 \\
+                       </dev/null

 if [ "$?" -eq 0 ] ; then
 _jail_id=$(head -1 ${_tmp_jail})
@@ -623,4 +624,4 @@
 if [ -n "$*" ]; then
-run_rc_command "${cmd}"
+run_rc_command "${cmd}" &

I also identified 57 patches for ZFS which are in 8-stable but not in 7-stable (I do not think they could solve the deadlock, but I do not really know, and now that there is one FS on ZFS, I would like to get as much fixed as possible). Some of them should be merged, some would be nice to merge, and some I do not care much about (but if they are easy to merge, why not…). I already have all revisions and the corresponding commit logs available in an email draft.

Now I just need to write a little bit of text and find some people willing to help (some of the changes need a review whether they are applicable to 7-stable, and everything should be tested on a scratch-box).


Tarsnap usage statistics

The more time passes with tarsnap, the more impressive it is.

Following is a list of all my privately used systems (2 machines which only host jails – here named Prison1 and Prison2 – and several jails – here named according to their functionality) together with some tarsnap statistics.

For each backup tarsnap prints some statistics: the uncompressed storage space of all archives of this machine, the compressed storage space of all archives, the unique uncompressed storage space of all archives, the unique compressed storage space of all archives, and the same amount of info for the current archive. The unique storage space is the space left after deduplication. The most interesting information is the unique and compressed one: for a specific archive it shows the amount of data which differs from all other archives, and for the total it tells how much storage space is used on the tarsnap server.

I do not backup all data in tarsnap. I do a full backup on external storage (zfs snapshot + zfs send | zfs receive) once in a while, and tarsnap is only for the stuff which could change daily or is very small (my mails belong to the first group, the config of applications or the system to the second group). At the end of the post there is also an overview of the money I have spent so far in tarsnap for the backups.
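For reference, the statistics tarsnap prints after each backup look like this (the numbers here are made up, only the layout matters):

```
                                       Total size  Compressed size
All archives                       10737418240       2576980377
  (unique data)                      894435328        254803968
This archive                        1181116006        283467776
  (unique data)                        4194304          1048576
```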

Attention: the following graphs are displaying small values in KB, while the text is telling about sizes in MB or even GB!


The backup of one day covers 1.1 GB of uncompressed data; the subtrees I backup are /etc, /usr/local/etc, /home, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and a subversion checkout of /usr/src (excluding the kernel compile directory; I backup this as I have local modifications to FreeBSD). If I wanted to have all days uncompressed on my harddisk, I would have to provide 10 GB of storage space. Compressed this comes down to 2.4 GB, unique uncompressed this is 853 MB, and unique compressed this is 243 MB. The following graph splits this up into all the backups I have as of this writing. I only show the unique values, as including the total values would make the unique values disappear in the graph (the values are too small).


In this graph we see that I have a constant rate of new data. I think this is mostly references to already stored data (/usr/src being the most likely cause of this, nothing changed in those directories).


One day covers 7 MB of uncompressed data, all archives take 56 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/named, and /var/db/mergemaster.mtree.


This graph is strange. I have no idea why there is so much data for the second and the last day. Nothing changed.


One day covers 8 MB of uncompressed data, all archives take 62 MB uncompressed, unique and compressed this comes down to 1.5 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/spool/postfix, and /var/db/mergemaster.mtree.


This does not look bad. I was sending a lot of mails on the 25th, and on the days in the middle I was not sending much.


One day covers about 900 MB of uncompressed data, all archives take 7.2 GB uncompressed, unique and compressed this comes down to 526 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home (mail folders) and /usr/local/share/courier-imap.


Obviously I have a not so small amount of change in my mailbox. As my spamfilter is working nicely, this is directly correlated with mails from various mailinglists (mostly FreeBSD ones).

MySQL (for the Horde webmail interface)

One day covers 100 MB of uncompressed data, all archives take 801 MB uncompressed, unique and compressed this comes down to 19 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mysql and /var/db/mergemaster.mtree.


This is correlated with the use of my webmail interface, and as such also with the amount of mails I get and send. Obviously I did not use my webmail interface at the weekend (the backup covers the changes of the previous day).


One day covers 121 MB of uncompressed data, all archives take 973 MB uncompressed, unique and compressed this comes down to 33 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/horde and /home.


This one is strange again. Nothing in the data changed.


One day covers 10 MB of uncompressed data, all archives take 72 MB uncompressed, unique and compressed this comes down to 1.9 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree and /var/db/samba.


Here we see the changes to /var/db/samba; this should be mostly my Wii accessing multimedia files there.


One day covers 31 MB of uncompressed data, all archives take 223 MB uncompressed, unique and compressed this comes down to 6.6 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg and /var/db/mergemaster.mtree.


This is also a strange graph. Again, nothing changed there (the cache directory is not in the backup).


One day covers 44 MB of uncompressed data, all archives take 310 MB uncompressed, unique and compressed this comes down to 11 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /home and /usr/local/www/phpMyAdmin.


And again a strange graph. No changes in the FS.


One day covers 120 MB of uncompressed data, all archives take 845 MB uncompressed, unique and compressed this comes down to 25 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /usr/local/www/gallery2 and /home/gallery (excluding some parts of /home/gallery).


This one is OK. Friends and family accessing the pictures.


One day covers 7 MB of uncompressed data, all archives take 28 MB uncompressed, unique and compressed this comes down to 1.3 MB. This covers /etc, /usr/local/etc, /root, /var/db/pkg, /var/db/mergemaster.mtree, /space/jails/flavours and /home.


This one looks strange to me again. Same reasons as with the previous graphs.


One day covers 56 MB of uncompressed data, all archives take 225 MB uncompressed, unique and compressed this comes down to 5.4 MB. This covers /etc, /usr/local/etc, /usr/local/www/postfixadmin, /root/, /var/db/pkg, /var/db/mysql, /var/spool/postfix and /var/db/mergemaster.mtree.


This graph looks OK to me.


One day covers 59 MB of uncompressed data, all archives take 478 MB uncompressed, unique and compressed this comes down to 14 MB. This covers /etc, /usr/local/etc, /root, /home, /var/db/pkg, /var/db/mergemaster.mtree, /var/db/mysql and /var/spool/ejabberd (yes, no backup of the web-data, I have it in another jail, no need to backup it again).


With the MySQL and XMPP databases in the backup, I do not think this graph is wrong.


The total amount of stored data per system is:



Since I started to use tarsnap (8 days ago), I have spent 38 cents; most of this is bandwidth cost for the transfer of the initial backup (29.21 cents). According to the graphs, I am currently at about 8–14 cents per week (or about half a dollar per month) for my backups (I still have one machine to add, and this may increase the amount in a similar way as the Prison1 system with its 2–3 jails did). The amount of money spent in US-cents (rounded!) per day is:



We got ZFS!

ZFS is there. Great! Thanks Pawel!

Now I will wait a little bit until the first bugs are ironed out, and then I will move all my stuff to it. The nice part: when you have 2 machines and everything you use is jailed, you can do this without an “interruption of service” (or at least with only a very small one). Just move the jails to the other machine, replace the old FS with ZFS, and then move all jails back.
