Alexander Leidinger

Just another weblog

Feb 22

Contacted by a lawyer regarding MP3

A while ago (end of August 2009) I was contacted by a lawyer because of my participation in the LAME project. It was about the MP3 patents: they were looking for an expert witness for a case.

I had the impression that it was about invalidating at least parts of one of the patents. Maybe they have a client who was sued for infringement. Unfortunately for them, I have absolutely no clue what is inside the MP3 patents (I am/was taking care of the “glue” in LAME), and the phone call we had was just a few hours before I went on holiday. I referred him to two other developers of the LAME project who not only should have better knowledge about the parts the lawyer is interested in, but were probably also not on holiday.

We also had a little chat about patents in general, and my opinion was that software patents are not that useful. In the IT world three years is a lot of time; most of the time, technology is overtaken by new developments after that. Assuming that developing something new based on technology seen elsewhere takes at least about a year (do not hit me over this rough estimate made without specifying the size of the project or the quality requirements), a software patent that is valid for five years is more than enough in my opinion. Any company which was not able to make some money with it during that time did something wrong, and blocking the competition because of this is not a good idea from my point of view as a user of technology. As a user I want advancements. And as an open source developer I try to produce my own advancements when I cannot get them from somewhere else. In this light, software patents are not doing much good for the “advancement of the human race”.

The lawyer did not try to convince me of the opposite. Either he was too polite, did not care about it, or he silently agreed. He told me he wants to stay in touch with me in some way regarding open source and patents. I did not object to this.

As I was curious about the state of the case, I contacted the lawyer about it, and the current outcome is not bad. Previously, a lot of attempts (by other lawyers in the same German court) to fight this particular patent failed. This time the court did not follow its previous rulings but said that the issue needs to be investigated again (at least this is how I understand it — beware, I am not a lawyer). Maybe we will see a result this year.

Feb 10

Making ZFS faster…

Currently I am playing around a little bit with my ZFS setup. I want to make it faster, but I do not want to spend a lot of money.

The disks are connected to an ICH5 controller, so an obvious improvement would be either to buy a PCI controller which is able to do NCQ with the SATA disks (a siis(4)-based one is not cheap), or to buy a new system with a chipset that knows how to do NCQ (this would mean new RAM, a new CPU, a new mainboard and maybe even a new PSU). A new controller is a little bit expensive for the old system I want to tune. A new system would be nice, and reading the specs of new systems makes me want a Core i5 system. The problem is that I think the current mainboard offerings for this are far from good. The system should be a little bit future-proof, as I would like to use it for about 5 years or more (the current system is somewhere between 5 and 6 years old). This means it should have SATA-3 and USB 3, but when I look at what is currently offered, there seem to be only beta versions of hardware with SATA-3 and USB 3 support on the market (according to tests there is a lot of variance in the maximum speed the controllers are able to achieve, bugs in the BIOS, or the controllers are attached to a slow bus which prevents using the full bandwidth). So there will not be a new system soon.

As I had a 1 GB USB stick around, I decided to attach it to one of the EHCI USB ports and use it as a cache device for ZFS. If someone wants to try this too, be careful with the USB ports. My mainboard has only 2 USB ports connected to an EHCI controller; the rest are UHCI ones. This means that only 2 USB ports are fast (sort of… 40 MBit/s), and the rest are only usable for slow things like a mouse, a keyboard or a serial line.
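For anyone who wants to try the same, attaching the stick as a cache device is a one-liner. A sketch, assuming the pool is called tank and the stick shows up as da0 (both names are assumptions — check dmesg for the actual device name on your system):

```
# Add the USB stick as an L2ARC cache device to the pool "tank"
# (pool and device names are examples, adjust for your setup)
zpool add tank cache da0

# Verify that the cache device was picked up
zpool status tank
```

Removing it again later is just as easy with "zpool remove tank da0", so there is little risk in experimenting.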

Be warned, this will not give you a lot of bandwidth (if you have a fast USB stick, the 40 MBit/s of the EHCI port are the limit which prevents a big streaming bandwidth), but the latency of the cache device is great when doing small random I/O. When I run gstat and look at how long a read operation takes on each involved device, I see something between 3 msec and 20 msec for the hard disks (depending on whether they are reading at the current head position, or whether they need to seek around a lot). For the cache device (the USB stick) I see something between roughly 1 msec and 5 msec. That is a third to a quarter of the latency of the hard disks.

With a “zfs send” I see about 300 I/O operations per second per hard disk (3 disks in a RAIDZ). Obviously this is an optimal streaming case where the disks do not need to seek around a lot. You can see this in the low latency; it is about 2 msec in this case. In the random-read case, for example when you run a find, the disks cannot sustain this number of I/O operations, as they need to seek around. And here the USB stick shines. I have seen up to 1600 I/O operations per second on it while running a find (if the corresponding data is in the cache, of course). This was with something between 0.5 and 0.8 msec of latency.
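The numbers above can be watched live while the workload runs. A sketch of the two commands I refer to (pool name tank is again an assumption):

```
# Per-vdev I/O statistics, refreshed every second; the USB stick shows
# up in its own "cache" section of the output
zpool iostat -v tank 1

# Per-disk latency (the ms/r column) for devices matching the regex
gstat -f 'ad|da'
```

Comparing the ms/r column of the disks against that of the stick is the quickest way to see whether the cache device is actually helping.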

This is the machine at home which takes care of my mail (incoming and outgoing SMTP, IMAP and webmail), runs a Squid proxy and acts as a file server. There are not many users (just me and my wife) and there is no regular usage pattern for all those services. Because of this I did not run any benchmarks to see how much time I can gain with various workloads (and I am not interested in artificial performance numbers for my webmail session, as the browsing experience is highly subjective in this case). For this system a 1 GB USB stick (which was just collecting dust before) seems to be a cheap way to improve the response time for frequently used small data. When I use the webmail interface now, my subjective impression is that it is faster. I am talking about listing emails (subject, date, sender, size) and displaying the content of some emails. FYI, my maildir storage holds 849 MB in 35000 files across 91 folders.

The bottom line is: do not expect a big bandwidth increase from this, but if you have a workload which generates random read requests and you want to decrease the read latency, adding a (big) USB stick as a cache device could be a cheap solution.

Feb 05

Showing off some numbers…

At work we have some performance problems.

One application (not off-the-shelf software) is not performing well. The problem is that the design of the application is far from good: auto-commit is used, and because of this the Oracle DB is doing far more writes than what the application is supposed to do would require. While helping our DBAs with their performance analysis (the vendor of the application claims our hardware is not fast enough, and I had to provide some numbers to show that this is not the case and that they need to improve the software, as it does not comply with the performance requirements they got before developing the application), I noticed that the filesystem where the DB and the application are located (a ZFS, if someone is interested) sometimes does 1,200 write I/O operations per second (to write about 100 MB). Yeah, that is a lot of IOPS our SAN is able to do! Unfortunately too expensive to buy for use at home. :(
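Why auto-commit hurts so much can be illustrated with a small, hypothetical sketch (using sqlite3 instead of Oracle, since the principle is the same: every commit forces synchronous write activity, so committing per statement multiplies the write I/O):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "demo.db")

# Auto-commit: isolation_level=None makes sqlite3 commit after every
# single INSERT, i.e. one transaction (and one synchronous write) per row.
con = sqlite3.connect(db, isolation_level=None)
con.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
for i in range(1000):
    con.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 100))
con.close()

# Batched: one explicit transaction around all inserts -> a single
# commit at the end, so the same data causes far fewer write operations.
con = sqlite3.connect(db)
with con:  # commits once when the block exits
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, "x" * 100) for i in range(1000)))
print(con.execute("SELECT count(*) FROM t").fetchone()[0])  # → 2000
con.close()
```

On a real Oracle setup the difference is even more visible, since every commit forces a redo-log write to disk; batching commits is the usual first fix for exactly the kind of write-IOPS numbers described above.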

Another application (Nagios 3.0) was generating a lot of major faults (caused by the many fork()s for the checks). It is a SunFire V890, and the highest number of major faults per second I have seen on this machine was about 27,000. It never went below 10,000; on average it was maybe somewhere between 15,000 and 20,000. My Solaris desktop (an Ultra 20) generates maybe several hundred major faults when a lot is going on (most of the time it does not generate much). Nobody can say the V890 is not used… :) Oh, yes, I suggested enabling the Nagios config setting for large sites; now the major faults are around 0 – 10,000 and the machine is not as stressed anymore. The next step is probably to have a look at the ancient probes (migrated from the Big Brother setup which was there several years before) and reduce the number of forks they do.
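For reference, the “large site” setting I am talking about is a single flag in nagios.cfg (a sketch for Nagios 3.x):

```
# nagios.cfg -- trades some features for speed on large installations,
# e.g. child check processes fork only once instead of twice,
# which directly reduces the fork()-driven major faults
use_large_installation_tweaks=1
```

The Nagios documentation lists the features this disables, so it is worth reading before flipping it on a box where those features matter.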
