iocage: HOWTO create a basejail from src (instead of from an official release)

Background

So far I have used ezjail to manage FreeBSD jails. I have been using jails for years to keep different parts of a software stack in some kind of container (with a ZFS dataset for the filesystem side of the container): on one hand so that the dependencies of one part of the software stack do not influence other parts, on the other hand to have the possibility to move parts of the software stack to a different system if necessary. Normally I run -stable or -current, or more generally speaking a self-compiled FreeBSD, on those systems. What I like about ezjail is that all jails on a system share one common underlying basejail, so I update the userland in one place and all jails get the updated code.

For a while now I have heard good things about iocage and how it integrates ZFS, so I decided to give it a try myself. iocage does not come with an official way of creating a basejail (respectively a release) from a self-compiled FreeBSD (at least not documented in the places I looked; and yes, I am aware that I can create a FreeBSD release myself and use it, but I do not want to have to create a release in addition to the buildworld I use to update the host system). So here is a short HOWTO for achieving this.

Invariants

In the following I assume the iocage ZFS parts are already created in the dataset ${POOLNAME}/iocage, which is mounted on ${IOCAGE_BASE}/iocage. Additionally, the buildworld in /usr/src (or wherever you keep the FreeBSD source) should be finished.

Pre-requisites

To have the necessary dataset infrastructure created for your own basejails/releases, at least one official release needs to be fetched first. So run the command below (if there is no ${IOCAGE_BASE}/iocage/releases directory yet) and follow the on-screen instructions.

iocage fetch

HOWTO

Some variables:

POOLNAME=mpool
SRC_REV=r$(cd /usr/src; svnliteversion)
IOCAGE_BASE=""

Creating the iocage basejail datasets for this ${SRC_REV}:

zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/bin
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/boot
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/lib
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/libexec
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/rescue
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/sbin
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/bin
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/include
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/lib
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/lib32
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/libdata
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/libexec
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/sbin
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/share
zfs create -o compression=lz4 ${POOLNAME}/iocage/base/${SRC_REV}-RELEASE/root/usr/src
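The 18 zfs create commands above differ only in the dataset path, so they can also be generated with a small POSIX sh loop (a sketch; the revision r286874 is just a placeholder for your own ${SRC_REV}, and the final echo lets you review the commands before piping them to sh):

```shell
# Generate the 18 "zfs create" commands from the list of dataset suffixes.
# SRC_REV=r286874 is a hypothetical example; set it as shown in the HOWTO.
POOLNAME=mpool
SRC_REV=r286874
BASE=${POOLNAME}/iocage/base/${SRC_REV}-RELEASE
cmds=$(for ds in "" /root /root/bin /root/boot /root/lib /root/libexec \
    /root/rescue /root/sbin /root/usr /root/usr/bin /root/usr/include \
    /root/usr/lib /root/usr/lib32 /root/usr/libdata /root/usr/libexec \
    /root/usr/sbin /root/usr/share /root/usr/src
do
    echo "zfs create -o compression=lz4 ${BASE}${ds}"
done)
# Review the generated commands; when happy, execute them with:
#   echo "$cmds" | sh
echo "$cmds"
```

This is only a convenience; the explicit list above does exactly the same.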

Install from /usr/src (the executable "chgrp" is normally a hardlink to "chown", but here the link would cross an iocage basejail dataset boundary; this fails in the normal installworld, so we have to ignore this error and install a copy of the chown binary in the place where the hardlink normally is):

cd /usr/src
make -i installworld DESTDIR=${IOCAGE_BASE}/iocage/base/${SRC_REV}-RELEASE/root >&! iocage_installworld_base.log
cp -pv ${IOCAGE_BASE}/iocage/base/${SRC_REV}-RELEASE/root/usr/sbin/chown ${IOCAGE_BASE}/iocage/base/${SRC_REV}-RELEASE/root/usr/bin/chgrp
make distribution DESTDIR=${IOCAGE_BASE}/iocage/base/${SRC_REV}-RELEASE/root >>& iocage_installworld_base.log

While we are here, also create a release and not only a basejail:

zfs create -o compression=lz4 ${POOLNAME}/iocage/releases/${SRC_REV}-RELEASE
zfs create -o compression=lz4 ${POOLNAME}/iocage/releases/${SRC_REV}-RELEASE/root
make installworld DESTDIR=${IOCAGE_BASE}/iocage/releases/${SRC_REV}-RELEASE/root >&! iocage_installworld_release.log
make distribution DESTDIR=${IOCAGE_BASE}/iocage/releases/${SRC_REV}-RELEASE/root >>& iocage_installworld_release.log

And finally, make this the default release which iocage uses when creating new jails (this is optional):

iocage set release=${SRC_REV}-RELEASE default

Now the self-built FreeBSD is available in iocage for new jails.

HOWTO: “Blind” remote install of FreeBSD via tiny disk image (ZFS edition)

In a past post I described how to install FreeBSD remotely, over a Linux system, via a tiny UFS-based disk image. In this post I describe how to do it with a ZFS-based disk image.

Invariants

Given: a Unix-based remote system (in this case a Linux system) for which you know what kind of hardware it runs on (e.g. PCI IDs) and what the corresponding FreeBSD drivers are.

HOWTO

In the title of this post I wrote “via a tiny disk image”. This is true for a suitable definition of tiny.

What we have in the rootserver are two ~900 GB harddisks. They shall be used in a software mirror. The machine has 8 GB of RAM. I do not expect many kernel panics (= crash dumps) there, so we do not really need more than 8 GB of swap (forget the rule of having twice as much swap as RAM; with the amount of RAM in a current machine you are in “trouble” already when you need even the same amount of swap as RAM). I decided to go with 2 GB of swap.

Pushing/pulling a 900 GB image over the network to install a system is not really something I want to do. I am OK with transferring 5 GB (that is about 0.5% of the entire disk) to get this job done, and this is feasible.

First let us define some variables in the shell; this way you just need to change the values in one place and the rest is copy&paste. (I use the SVN revision of the source which I use to install the system as the name of the sysutils/beadm compatible boot-dataset in the rootfs, so I also have the revision number available in a variable.)

ROOTFS_SIZE=5G
ROOTFS_NAME=root
FILENAME=rootfs
POOLNAME=mpool
VERSION=r$(cd /usr/src; svnliteversion)
SWAPSIZE=2G

Then change your current directory to a place where you have enough space for the image. There we create a container file for the image and make it ready for partitioning:

truncate -s ${ROOTFS_SIZE} ${FILENAME}
mdconfig -a -t vnode -f ${FILENAME}
# if you want to fully allocate the image:
# dd if=/dev/zero of=/dev/md0 bs=1m

Create the partition table and the rootfs (in a sysutils/beadm compatible way, as I install FreeBSD-current there) and mount it temporarily on /temppool:

gpart create -s GPT /dev/md0
gpart add -s 512K -t freebsd-boot -l bootcode0 /dev/md0
gpart add -a 4k -t freebsd-swap -s ${SWAPSIZE} -l swap0 /dev/md0
gpart add -a 1m -t freebsd-zfs -l ${POOLNAME}0 /dev/md0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 /dev/md0
# if not already the case, and you want a 4k physical sector size for the pool:
# sysctl vfs.zfs.min_auto_ashift=12
zpool create -o cachefile=/boot/zfs/zpool.cache_temp -o altroot=/temppool -O compress=lz4 -O atime=off -O utf8only=on ${POOLNAME} /dev/gpt/${POOLNAME}0
zfs create -o mountpoint=none ${POOLNAME}/ROOT
zfs create -o mountpoint=/ ${POOLNAME}/ROOT/${VERSION}
zfs create -o mountpoint=/tmp -o exec=on -o setuid=off ${POOLNAME}/tmp
zfs create -o mountpoint=/usr -o canmount=off ${POOLNAME}/usr
zfs create -o mountpoint=/home ${POOLNAME}/home
zfs create -o setuid=off ${POOLNAME}/usr/ports
zfs create ${POOLNAME}/usr/src
zfs create -o mountpoint=/var -o canmount=off ${POOLNAME}/var
zfs create -o exec=off -o setuid=off ${POOLNAME}/var/audit
zfs create -o exec=off -o setuid=off ${POOLNAME}/var/crash
zfs create -o exec=off -o setuid=off ${POOLNAME}/var/log
zfs create -o atime=on ${POOLNAME}/var/mail
zfs create -o setuid=off ${POOLNAME}/var/tmp
zfs create ${POOLNAME}/var/ports
zfs create -o exec=off -o setuid=off -o mountpoint=/shared ${POOLNAME}/shared
zfs create -o exec=off -o setuid=off ${POOLNAME}/shared/distfiles
zfs create -o exec=off -o setuid=off ${POOLNAME}/shared/packages
zfs create -o exec=off -o setuid=off -o compression=lz4 ${POOLNAME}/shared/ccache
zfs create ${POOLNAME}/usr/obj
zpool set bootfs=${POOLNAME}/ROOT/${VERSION} ${POOLNAME}

Install FreeBSD (from source):

cd /usr/src
#make buildworld >&! buildworld.log
#make buildkernel -j 8 KERNCONF=GENERIC >&! buildkernel_generic.log
make installworld DESTDIR=/temppool/ >& installworld.log
make distribution DESTDIR=/temppool/ >& distrib.log
make installkernel KERNCONF=GENERIC DESTDIR=/temppool/ >& installkernel.log

Copy the temporary zpool cache created above in the pool-creation part to the image (I have the impression it is not really needed and will work without, but I have not tried this):

cp /boot/zfs/zpool.cache_temp /temppool/boot/
cp /boot/zfs/zpool.cache_temp /temppool/boot/zpool.cache

Add the ZFS modules to /temppool/boot/loader.conf:

zfs_load="yes"
opensolaris_load="yes"

Now you need to create /temppool/etc/rc.conf (set the defaultrouter, the IP address via ifconfig_IF (do not forget to use the right IF for it), the hostname, sshd_enable="YES" and zfs_enable="YES"), extend /temppool/boot/loader.conf (add vfs.root.mountfrom="zfs:${POOLNAME}/ROOT/${VERSION}" to the two lines above), and create /temppool/etc/hosts, /temppool/etc/resolv.conf and maybe /temppool/etc/sysctl.conf and /temppool/etc/periodic.conf.
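For illustration, minimal versions of the two files could look like this (a sketch; the hostname, the interface name em0 and the addresses are placeholders, and r286874 stands for your ${VERSION}):

```shell
# /temppool/etc/rc.conf (hostname, em0 and addresses are placeholders)
hostname="remote.example.org"
ifconfig_em0="inet 192.0.2.10 netmask 255.255.255.0"
defaultrouter="192.0.2.1"
sshd_enable="YES"
zfs_enable="YES"

# /temppool/boot/loader.conf (r286874 = example value of ${VERSION})
zfs_load="yes"
opensolaris_load="yes"
vfs.root.mountfrom="zfs:mpool/ROOT/r286874"
```

Note that ${VERSION} already contains the leading "r", so the mountfrom value must not add a second one.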

Do not allow password-less root logins in single-user mode on the physical console, and create a resolv.conf and a user:

cd /temppool/etc
sed -ie "s:console.*off.:&in:" ttys
cat >resolv.conf <<EOT
search YOURDOMAIN
nameserver 8.8.8.8
EOT
pw -V /temppool/etc groupadd YOURGROUP -g 1001
pw -V /temppool/etc useradd YOURUSER -u 1001 -d /home/YOURUSER -g YOURGROUP -G wheel -s /bin/tcsh
pw -V /temppool/etc usermod YOURUSER -h 0
pw -V /temppool/etc usermod root -h 0
zfs create mpool/home/YOURUSER
chown YOURUSER:YOURGROUP /temppool/home/YOURUSER

Now you can make some more modifications to the system if wanted, and then export the pool and detach the image:

zpool export ${POOLNAME}

mdconfig -d -u 0

Depending on the upload speed you can achieve, it is beneficial to compress the image now, e.g. with bzip2. Then transfer the image to the disk of the remote system. In my case I did this via:

ssh -C -o CompressionLevel=9 root@remote_host dd of=/dev/hda bs=1m < /path/to/${FILENAME}

Then reboot/power-cycle the remote system.

Post-install tasks

Now we have a new FreeBSD system which uses only a fraction of the harddisk and is not resilient against harddisk failures.

FreeBSD will detect that the disk is bigger than the image we used when creating the GPT label, and will warn about it (corrupt GPT table). To fix this, and to resize the partition of the zpool to use the entire disk, we first partition the second disk with the full size and mirror the zpool to it; when the zpool is in sync, we resize the partition of the first (boot) disk (attention: you need to change the “-s” part in the following commands to match your disk size).

First back up the label of the first disk; this makes it easier to create the label of the second disk:

/sbin/gpart backup ada0 > ada0.gpart

Edit ada0.gpart (use different names for the labels; mainly change the trailing 0 in each label name to 1) and then use it to create the partitions of the second disk:

gpart restore -Fl ada1 < ada0.gpart
gpart resize -i 3 -a 4k -s 929g ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
zpool set autoexpand=on mpool
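The edit of the label names can also be scripted; here is a sed sketch which works on a shortened sample of the backup format (on the real system you would of course feed it the ada0.gpart created above instead of the sample):

```shell
# A shortened sample standing in for the real ada0.gpart; on the live
# system this file comes from: /sbin/gpart backup ada0 > ada0.gpart
cat > ada0.gpart <<'EOF'
GPT 152
1 freebsd-boot 40 1024 bootcode0
2 freebsd-swap 1064 4194304 swap0
3 freebsd-zfs 4196352 1948254208 mpool0
EOF
# Rename the labels (trailing 0 -> 1) so the second disk gets its own names:
sed -e 's/bootcode0/bootcode1/' -e 's/swap0/swap1/' -e 's/mpool0/mpool1/' \
    ada0.gpart > ada1.gpart
cat ada1.gpart
```

Afterwards the resulting ada1.gpart can be fed to "gpart restore -Fl ada1" as shown above.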

Fix the warning about the GPT label and resize the partition:

gpart recover ada0
gpart resize -i 3 -a 4k -s 929g ada0

Afterwards it should look similar to this:

gpart show -l
=>        40  1953525088  ada0  GPT  (932G)
          40        1024     1  bootcode0  (512K)
        1064     4194304     2  swap0  (2.0G)
     4195368         984        - free -  (492K)
     4196352  1948254208     3  mpool0  (929G)
  1952450560     1074568        - free -  (525M)

=>        40  1953525088  ada1  GPT  (932G)
          40        1024     1  bootcode1  (512K)
        1064     4194304     2  swap1  (2.0G)
     4195368         984        - free -  (492K)
     4196352  1948254208     3  mpool1  (929G)
  1952450560     1074568        - free -  (525M)

Add the second disk to the zpool:

zpool attach mpool gpt/mpool0 gpt/mpool1

When the mirror is in sync (check with “zpool status mpool”), we can extend the size of the pool itself; with autoexpand=on, an offline/online cycle of the resized disk is enough to trigger the expansion:

zpool offline mpool /dev/gpt/mpool0
zpool online mpool /dev/gpt/mpool0

As a last step we can now add an encrypted swap (depending on the importance of the system maybe a gmirror-ed one, not explained here), and specify where to dump (textdumps) to.

/boot/loader.conf:

dumpdev="/dev/ada0p2"

/etc/rc.conf:

dumpdev="/dev/gpt/swap0"
crashinfo_enable="YES"
ddb_enable="yes"
encswap_enable="YES"
geli_swap_flags="-a hmac/sha256 -l 256 -s 4096 -d"

/etc/fstab:

# Device          Mountpoint  FStype  Options  Dump  Pass#
/dev/ada1p2.eli   none        swap    sw       0     0

Now the system is ready for some applications.

Transition to nginx: part 4 – CGI scripts

I still have some CGI scripts on this website. They still work, and they are good enough for my needs. When I switched this website to nginx (the wordpress setup was a little bit more complex than what I wrote in part 1, part 2 and part 3; the config will be one of my next blog posts) I was a little bit puzzled about how to do that with nginx. It took me some minutes to get an idea how to do it and to find the right FreeBSD port for this.

  • Install www/fcgiwrap
  • Add the following to rc.conf:

fcgiwrap_enable="YES"
fcgiwrap_user="www"

  • Run “service fcgiwrap start”
  • Add the following to your nginx config:
location ^~ /cgi-bin/ {
    gzip off; # gzip makes scripts feel slower since they have to complete before getting gzipped
    fastcgi_pass  unix:/var/run/fcgiwrap/fcgiwrap.sock;
    fastcgi_index index.cgi;
    fastcgi_param SCRIPT_FILENAME /path/to/location$fastcgi_script_name;
    fastcgi_param GATEWAY_INTERFACE  CGI/1.1;
}
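To check that fcgiwrap hands requests through to your scripts, a trivial CGI script is handy (a sketch; test.cgi is a hypothetical name, drop it into your cgi-bin directory and make it executable):

```shell
#!/bin/sh
# Write a minimal CGI script (hypothetical name test.cgi) and run it once
# locally; via fcgiwrap it must emit the header before the body.
cat > test.cgi <<'EOF'
#!/bin/sh
printf 'Content-Type: text/plain\r\n\r\n'
echo "CGI via fcgiwrap works"
EOF
chmod +x test.cgi
out=$(./test.cgi)
echo "$out"
```

If http://yourserver/cgi-bin/test.cgi then shows the text in the browser, the socket path and SCRIPT_FILENAME in the location block are correct.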

Transition to nginx: part 3 – short and easy config snippets

After some medium-difficulty transitions in part 1 and part 2, here are some easy ones:

phpMyAdmin: take the basics from one of the two other blog posts (see above) without the location directives. For “location /” set the document root, and copy the “location ~ \.php” part from the config of one of the parts above. Done.

TT-RSS: take a config like the one for phpMyAdmin and add the following (assuming TT-RSS lives in the root of the server; otherwise you have to prefix the path in the location):

location ^~ /(utils|templates|schema|cache|lock|locale|classes) {
     deny all;
}

Allow client-side caching for static content:

location ~* \.(?:jpe?g|gif|png|ico|cur|gz|bz2|xz|tbz|tgz|txz|svg|svgz|mp4|ogg|ogv|webm|htc|css|js|pdf|zip|rar|tar|txt|conf)$ {
    try_files $uri =404;

    expires 1w;     # If you are not a big site, and don't change
                    # static content often, 1 week is not bad.
    access_log off; # If you don't need the logs
    add_header Cache-Control "public";
}

Security: despite the fact that the docs I’ve read tell that no-SSLv3 is the default, the first setting makes a difference (tested via SSLlabs’ SSLtest).

ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # no SSLv2/3
ssl_dhparam /path/to/dhparams.pem;   # generate via "openssl dhparam -out /path/to/dhparams.pem 2048"


Transition to nginx: part 2 – converting a gallery v2 installation

In my first transition to nginx I wrote that I was happy about the speed increase I got for my Horde webmail setup. Afterwards I converted a Gallery v2 installation (yes, it is old and not under active development anymore, but it is internal and still working). There I have not seen any obvious speed difference.

I did not convert all .htaccess rewrite rules; the one for the “easy and beautiful” URL names was too complex for the rewrite converter I found. As it is just for internal use, I simply switched back to the not-so-nice “technical” URL names.

The important part of the apache 2.2 installation:

ExpiresActive On
ExpiresDefault "now plus 1 hour"
ExpiresByType image/* "now plus 1 month"
ExpiresByType text/javascript "now plus 1 month"
ExpiresByType application/x-javascript "now plus 1 month"
ExpiresByType text/css "now plus 1 month"

<Location />
# Insert filter
SetOutputFilter DEFLATE

# Netscape 4.x has some problems...
BrowserMatch ^Mozilla/4 gzip-only-text/html

# Netscape 4.06-4.08 have some more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip

# MSIE masquerades as Netscape, but it is fine
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Don't compress images
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png|gz|bz2|zip|pdf)$ no-gzip dont-vary

# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary
</Location>

The nginx config:

worker_processes  1;

error_log  <filename>;

events {
        worker_connections      1024;
        use                     kqueue;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    access_log  <filename>;

    sendfile on;

    keepalive_timeout       15;
    client_body_timeout     300;
    client_header_timeout   12;
    send_timeout            300;
    client_body_in_file_only clean;
    client_body_buffer_size 128k;
    client_max_body_size 40M;

    gzip on;
    gzip_min_length 1000;
    gzip_types       text/plain text/xml text/css application/xml application/xhtml+xml application/rss+xml application/javascript application/x-javascript;
    gzip_disable     "msie6";

    include blacklist.conf;

    server {
        listen       80;
        server_name  <hostname>;

        add_header   x-frame-options            "sameorigin";
        add_header   x-xss-protection           "1; mode=block";
        add_header   x-content-type-options     "nosniff";

        charset utf-8;

        #access_log  logs/host.access.log  main;
        if ($bad_client) { return 403; }

        location / {
            root   /usr/local/www/gallery2;
            index  index.php;
            location ~ \.php {
                # Zero-day exploit defense.
                # http://forum.nginx.org/read.php?2,88845,page=3
                # Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
                # Comment the "try_files" line out if you set up php-fpm/php-fcgi on another machine. And then cross your fingers that you won't get hacked.
                try_files $uri =404;

                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_keep_conn on;
                fastcgi_index      index.php;
                include            fastcgi_params;
                fastcgi_param      SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass       unix:/var/run/php.fcgi;
            }
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/local/www/nginx-dist;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        location ~ /\.ht {
            deny all;
        }
        location ~ \.(inc|class)$ {
            deny all;
        }
        location ^~ /lib/tools/po/ {
            deny all;
        }
    }
}

Transition to nginx: part 1 – converting Horde webmail

I am a longtime apache user; it may be that I touched apache 1.2 for the first time. Recently I decided to check out nginx, and I decided to do it on my own webmail system. The end result is: I replaced apache+mod_php with nginx+php-fpm there, and so far it does not look like I want to go back (it feels faster on direct comparison, same server, either apache or nginx started to compare the speed; purely subjective “measurement”, no numbers).

The long story now.

The webmail system uses Horde. There I had apache 2.4 (prefork MPM) and php 5.6 via mod_php. With nginx I used php-fpm, with the same php flags and values in php-fpm as I used with mod_php. I configured fewer php-fpm max-processes than I had allowed apache+mod_php to use; as nginx does not spawn a process for each connection, I have fewer processes and as a result also less memory allocated. Converting the rewrite rules and the mod_rewrite based blacklisting took a while (and I have not converted all blacklists I had before). So yes, this is not really comparing apples with apples (I could have tried a different MPM for apache, I could have used an fcgi based php approach instead of mod_php, and I could have moved the rewrite rules out of the .htaccess files into the main config).

The result: on first login the pages appeared noticeably faster. I directly switched back to apache to confirm. This was over a WLAN connection. A little bit later I had the same impression when I tested this over a slow DSL link (2 MBit/s).

Here are the important parts of the apache config:

ExpiresActive On
ExpiresDefault "now plus 3 hours"
ExpiresByType image/* "now plus 2 months"
ExpiresByType text/javascript "now plus 2 months"
ExpiresByType application/x-javascript "now plus 2 months"
ExpiresByType text/css "now plus 2 months"

SetEnvIfNoCase Request_URI "\.gif$" cache_me=1
SetEnvIfNoCase Request_URI "\.png$" cache_me=1
SetEnvIfNoCase Request_URI "\.jpg$" cache_me=1
SetEnvIfNoCase Request_URI "\.jpeg$" cache_me=1
SetEnvIfNoCase Request_URI "\.ico$" cache_me=1
SetEnvIfNoCase Request_URI "\.css$" cache_me=1
# Allow caching on media files
<IfModule mod_headers.c>
Header merge Cache-Control "public" env=cache_me
  <IfModule ssl_module.c>
    Header add Strict-Transport-Security "max-age=15768000"
  </IfModule>
</IfModule>
<Location />
# Insert filter
SetOutputFilter DEFLATE

# Netscape 4.x has some problems...
BrowserMatch ^Mozilla/4 gzip-only-text/html

# Netscape 4.06-4.08 have some more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip

# MSIE masquerades as Netscape, but it is fine
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Don't compress images
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png|gz|bz2|zip)$ no-gzip dont-vary

# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary
</Location>
SetEnvIfNoCase Request_URI "\.js$" cache_me=1
Alias /Microsoft-Server-ActiveSync /usr/local/www/horde/rpc.php
RedirectPermanent /.well-known/carddav /horde/rpc.php
AcceptPathInfo on

And here are most parts of my nginx.conf suitable for Horde, including the rewrite rules from the .htaccess files:

worker_processes  1;

error_log  <file>;

events {
        worker_connections      1024;
        use                     kqueue;
}


http {
        include       mime.types;
        default_type  application/octet-stream;

        access_log  <file>;

        sendfile on;
        keepalive_timeout       15;
        client_body_timeout     300;
        client_header_timeout   12;
        send_timeout            300;
        client_body_in_file_only clean;
        client_body_buffer_size 128k;
        client_max_body_size 10M;

        gzip on;
        gzip_min_length 1000;
        gzip_types       text/plain text/xml text/css application/xml application/xhtml+xml application/rss+xml application/javascript application/x-javascript;
        gzip_disable     "msie6";

        include blacklist.conf;

        server {
                listen       443 ssl spdy;
                server_name  <hostname>;

                ssl_certificate         <file>;
                ssl_certificate_key     <file>;

                ssl_session_cache       shared:SSL:10m;
                ssl_session_timeout     15m;
                ssl_ciphers             <cipher_list>;
                ssl_prefer_server_ciphers  on;

                # optional: see https://www.owasp.org/index.php/List_of_useful_HTTP_headers
                add_header              strict-transport-security "max-age=31536000";
                add_header              x-frame-options           "sameorigin";
                add_header              x-xss-protection          "1; mode=block";
                add_header              x-content-type-options    "nosniff";

                root                    /usr/local/www/horde;
                index                   index.php;

                charset utf8;

                access_log  <logfile>;
                if ($bad_client) { return 403; }

                location / {
                        location /Microsoft-Server-ActiveSync {
                                alias                   /usr/local/www/horde/rpc.php;
                                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                                fastcgi_keep_conn on;
                                include                 fastcgi_params;
                                fastcgi_param           SCRIPT_FILENAME /usr/local/www/horde/rpc.php;
                                fastcgi_pass            unix:/var/run/php.fcgi;
                        }

                        location /autodiscover/autodiscover.xml {
                                alias                   /usr/local/www/horde/rpc.php;
                                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                                fastcgi_keep_conn on;
                                include                 fastcgi_params;
                                fastcgi_param           SCRIPT_FILENAME /usr/local/www/horde/rpc.php;
                                fastcgi_pass            unix:/var/run/php.fcgi;
                        }

                        location /Autodiscover/Autodiscover.xml {
                                alias                   /usr/local/www/horde/rpc.php;
                                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                                fastcgi_keep_conn on;
                                include                 fastcgi_params;
                                fastcgi_param           SCRIPT_FILENAME /usr/local/www/horde/rpc.php;
                                fastcgi_pass            unix:/var/run/php.fcgi;
                        }

                        location ^~ /(static|themes)/ {
                                expires                 1w;
                                add_header              Cache-Control public;
                        }
                        location ^~ /services/ajax.php {
                                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                                fastcgi_keep_conn on;
                                include                 fastcgi_params;
                                fastcgi_param           SCRIPT_FILENAME $document_root$fastcgi_script_name;
                                fastcgi_pass            unix:/var/run/php.fcgi;
                        }

                        location ~ \.php {
                                # Zero-day exploit defense.
                                # http://forum.nginx.org/read.php?2,88845,page=3
                                # Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
                                # Comment the "try_files" line out if you set up php-fpm/php-fcgi on another machine. And then cross your fingers that you won't get hacked.
                                try_files $uri =404;

                                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                                fastcgi_keep_conn on;
                                fastcgi_index           index.php;
                                include                 fastcgi_params;
                                fastcgi_param           SCRIPT_FILENAME $document_root$fastcgi_script_name;
                                fastcgi_pass            unix:/var/run/php.fcgi;
                        }

                        try_files               $uri $uri/ /rampage.php?$args;
                }
        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
                root   /usr/local/www/nginx-dist;
        }

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
                location ~ /\.ht {
                        deny all;
                }
                location ~ /(config|lib|locale|po|scripts|templates)/ {
                        deny all;
                }
                location ^~ /rpc/ {
                        if (!-e $request_filename){
                                rewrite ^(.*)$ /rpc/index.php/$1 break;
                        }
                }
                location ^~ /kronolith/feed/ {
                        if (!-e $request_filename){
                                rewrite ^(.*)$ /kronolith/feed/index.php?c=$1 break;
                        }
                }
                location ^~ /content/ {
                        if (!-e $request_filename){
                                rewrite ^(.*)$ /content/index.php break;
                        }
                }
                location ^~ /whups/(queue|query)/ {
                        if (!-e $request_filename){
                                rewrite ^/([0-9]+)/?$ /whups/queue/index.php?id=$1;
                        }
                        rewrite ^/([0-9]+)/rss/?$ /whups/queue/rss.php?id=$1;
                        rewrite ^/([a-zA-Z0-9_]+)/?$ /whups/queue/index.php?slug=$1;
                        rewrite ^/([a-zA-Z0-9_]+)/rss/?$ /whups/queue/rss.php?slug=$1;
                }
                location ^~ /whups/ticket/ {
                        if (!-e $request_filename){
                                rewrite ^/([0-9]+)/?$ /whups/ticket/index.php?id=$1;
                        }
                        rewrite ^/([0-9]+)/rss/?$ /whups/ticket/rss.php?id=$1;
                        rewrite ^/([0-9]+)/([a-z]+)(\.php)?$ /whups/ticket/$2.php?id=$1 break;
                }
                location ^~ /.well-known/carddav {
                        return 301 https://webmail.Leidinger.net/rpc.php;
                }
                location ^~ /admin/ {
                        allow <local>;
                        deny all;
                }
                location ~ /test.php$ {
                        allow <local>;
                        deny all;
                }

                # Media: images, icons, video, audio, HTC, archives
                location ~* \.(?:jpe?g|gif|png|ico|cur|gz|bz2|tbz|tgz|svg|svgz|mp4|ogg|ogv|webm|htc|css|js|pdf|zip|rar|tar|txt|pl|conf)$ {
                        try_files $uri =404;

                        expires 1w;
                        access_log off;
                        add_header Cache-Control "public";
                }
        }
}

Essen Hackathon 2015 – last day status

I committed the 64bit support for the linux base ports (disabled by default, check the commit message), but this broke the INDEX build. Portmgr was faster than me in reverting it. All errors are mine. I think most of the work is done; I just need to find out the correct way to handle this make/fmake difference (malformed conditional).