A while ago (end of August 2009) I was contacted by a lawyer because of my participation in the LAME project. It was about the MP3 patents; they were looking for an expert witness for a case.
I had the impression that it was about invalidating at least parts of one of the patents; maybe they have a client who was sued for infringement. Unfortunately for them I have absolutely no clue what is inside the MP3 patents (I am/was taking care of the “glue” in LAME), and the phone call we had was just a few hours before I went on holiday. I referred him to two other developers of the LAME project who not only should have better knowledge about the parts the lawyer was interested in, but who were probably also not on holiday.
We also had a little chat about patents in general, and my opinion was that software patents are not that useful. In the IT world 3 years is a lot of time; most of the time a technology is already overtaken by new developments after that. Assuming that developing something new based on technology seen somewhere else takes at least about a year (do not hit me for this rough estimate, it does not account for the size of the project or the quality requirements), in my opinion a software patent does not need to be valid for longer than 5 years. Any company which was not able to make some money with it during this time did something wrong, and blocking the competition because of this is not a good idea from my point of view as a user of technology. As a user I want advancements. And as an open source developer I try to produce my own advancements when I can not get them from somewhere else. In this light software patents do not do much good for the “advancement of the human race”.
The lawyer did not try to convince me of the opposite. Either he was too polite, did not care about it, or he silently agreed. He told me he wanted to stay in touch with me in some way regarding Open Source and patents. I did not object to this.
As I was curious about the state of this, I contacted the lawyer again, and the current outcome is not bad. Previously a lot of attempts (by other lawyers in the same German court) to fight this particular patent had failed. This time the court did not follow its previous rulings but said that the issue needs to be investigated again (at least this is how I understand it; beware, I am not a lawyer). Maybe we will see a result this year.
Currently I am playing around a little with my ZFS setup. I want to make it faster, but I do not want to spend a lot of money.
The disks are connected to an ICH5 controller, so an obvious improvement would be either to buy a controller for the PCI slot which is able to do NCQ with the SATA disks (a siis(4) based one is not cheap), or to buy a new system with a chipset which knows how to do NCQ (this would mean new RAM, a new CPU, a new mainboard and maybe even a new PSU). A new controller is a bit expensive for the old system I want to tune. A new system would be nice, and reading the specs of new systems makes me want a Core i5 system. The problem is that I think the current mainboard offerings for this are far from good. The system should be a little bit future proof, as I would like to use it for about 5 years or more (the current system is somewhere between 5 and 6 years old). This means it should have SATA-3 and USB 3, but when I look at what is offered currently, it looks like only beta-quality hardware with SATA-3 and USB 3 support is available on the market (according to tests there is a lot of variance in the maximum speed the controllers are able to achieve, there are bugs in the BIOS, or the controllers are attached to a slow bus which prevents them from using the full bandwidth). So it will not be a new system soon.
As I had a 1 GB USB stick lying around, I decided to attach it to one of the EHCI USB ports and use it as a cache device for ZFS. If someone wants to try this too, be careful which USB ports you use. My mainboard has only 2 USB ports connected to an EHCI, the rest are UHCI ones. This means that only 2 USB ports are fast (sort of… about 40 MByte/s in practice), the rest is only usable for slow things like a mouse, a keyboard or a serial line.
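Attaching the stick as a cache device is a one-liner. A sketch of the commands, assuming the pool is called "tank" and the stick shows up as da0 (both names are examples, adjust them to your system):

```shell
# find the device node the USB stick got after plugging it in (da0 is an assumption)
camcontrol devlist

# add the stick as an L2ARC cache device to the existing pool "tank"
zpool add tank cache da0

# verify: the stick should now be listed in a separate "cache" section
zpool status tank
```

Removing it again later is just as easy with "zpool remove tank da0", so there is not much risk in experimenting.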
Be warned, this will not give you a lot of bandwidth (if you have a fast USB stick, the roughly 40 MByte/s of the EHCI are the limit, which prevents high streaming bandwidth), but the latency of the cache device is great when doing small random IO. When I run gstat and look at how long a read operation takes on each involved device, I see something between 3 msec and 20 msec for the harddisks (depending on whether they are reading at the current head position, or need to seek around a lot). For the cache device (the USB stick) I see something between around 1 msec and 5 msec. That is a third to a quarter of the latency of the harddisks.
With a “zfs send” I see about 300 IO operations per second per harddisk (3 disks in a RAIDZ). Obviously this is an optimal streaming case where the disks do not need to seek around a lot. You can see this in the low latency, which is about 2 msec in this case. In the random-read case, for example when you run a find, the disks can not sustain this number of operations, as they need to seek around. And here the USB stick shines. I have seen up to 1600 operations per second on it while running a find (if the corresponding data is in the cache, of course). This was with something between 0.5 and 0.8 msec of latency.
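The latency numbers above come straight from the ms/r column of gstat. A small sketch of how to pull them out of a captured snapshot (the sample output below uses made-up but realistic numbers, with gstat's usual column layout):

```shell
# a captured snapshot of per-device statistics, as gstat prints it
# (illustrative numbers: two harddisks plus the USB stick da0)
cat > /tmp/gstat_sample.txt <<'EOF'
dT: 1.001s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    310    310   9800   12.4      0      0    0.0   78.2 ad4
    0    305    305   9650   15.1      0      0    0.0   81.0 ad6
    0    950    950   4100    0.7      0      0    0.0   35.4 da0
EOF

# skip the two header lines, then print device name (last field)
# and read latency in ms (column 5, the ms/r column)
awk 'NR > 2 { print $NF, $5 }' /tmp/gstat_sample.txt
```

For this sample the output is "ad4 12.4", "ad6 15.1" and "da0 0.7", one device per line; the difference between the spinning disks and the stick is immediately visible.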
This is the machine at home which takes care of my mail (incoming and outgoing SMTP, IMAP and webmail), runs a squid proxy and acts as a file server. There are not many users (just me and my wife) and there is no regular usage pattern for all those services. Because of this I did not run any benchmark to see how much time I gain for various workloads (and I am not interested in some artificial performance numbers for my webmail session, as the browsing experience is highly subjective in this case). For this system a 1 GB USB stick (which was just collecting dust before) seems to be a cheap way to improve the response time for often-used small data. When I use the webmail interface now, my subjective impression is that it is faster. I am talking about listing emails (subject, date, sender, size) and displaying the content of some emails. FYI, my maildir storage has 849 MB in 35000 files in 91 folders.
Bottom line: do not expect a big bandwidth increase from this, but if you have a workload which generates random read requests and you want to decrease the read latency, adding a (big) USB stick as a cache device could be a cheap solution.
At work we have some performance problems.
One application (not off-the-shelf software) is not performing well. The problem is that the design of the application is far from good (auto-commit is used, and because of this the Oracle DB does far more writes than what the application is supposed to do would require). While helping our DBAs with their performance analysis (the vendor of the application claims our hardware is not fast enough, and I had to provide some numbers to show that this is not the case and that they need to improve the software, as it does not comply with the performance requirements they got before developing the application), I noticed that the filesystem where the DB and the application are located (a ZFS, if someone is interested) sometimes does 1,200 (write) IO operations per second (to write about 100 MB). Yeah, that is a lot of operations our SAN is able to handle! Unfortunately too expensive to buy for use at home.
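To illustrate why auto-commit hurts here: every single statement becomes its own transaction, so every INSERT forces a synchronous redo log write instead of many statements sharing one commit. A hypothetical SQL*Plus sketch (the table, columns and login are made up for illustration):

```shell
# auto-commit on: each INSERT is its own transaction, one redo write per statement
sqlplus appuser/secret <<'EOF'
SET AUTOCOMMIT ON
INSERT INTO events (id, msg) VALUES (1, 'first');
INSERT INTO events (id, msg) VALUES (2, 'second');
EOF

# auto-commit off: both INSERTs share one transaction and one commit
sqlplus appuser/secret <<'EOF'
SET AUTOCOMMIT OFF
INSERT INTO events (id, msg) VALUES (1, 'first');
INSERT INTO events (id, msg) VALUES (2, 'second');
COMMIT;
EOF
```

The same applies inside the application itself (e.g. setAutoCommit(false) in JDBC); batching the commits is a software change, not a hardware one.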
Another application (nagios 3.0) was generating a lot of major faults (caused by the many fork()s for the checks). It is a SunFire V890, and the highest number of major faults per second I have seen on this machine was about 27,000. It never went below 10,000; on average it was maybe somewhere between 15,000 and 20,000. My Solaris desktop (an Ultra 20) generates maybe several hundred major faults when a lot is going on (most of the time it does not generate much). Nobody can say the V890 is not used… Oh, yes, I suggested enabling the nagios config setting for large sites; now the major faults are around 0 – 10,000 and the machine is not as stressed anymore. The next step is probably to have a look at the ancient probes (migrated from the big brother setup which existed several years before) and reduce the number of forks they do.
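For reference, the large-site setting I mean is nagios 3's use_large_installation_tweaks, which among other things skips a second fork() per check. A nagios.cfg fragment (the second option is an additional tweak I would consider, not something from our original setup):

```
# nagios.cfg -- tweaks for large installations (nagios 3.x)
use_large_installation_tweaks=1
# skipping the environment macros for each check saves further per-check overhead
enable_environment_macros=0
```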