At work we have some performance problems.
One application (not off-the-shelf software) is not performing well. The problem is that the design of the application is far from good: auto-commit is used, so the Oracle DB does far more writes than the application's workload should require. While helping our DBAs with their performance analysis (the vendor claims our hardware is not fast enough, so I had to provide numbers showing that this is not the case and that they need to improve their software, as it does not meet the performance requirements they were given before developing the application), I noticed that the filesystem where the DB and the application live (a ZFS, if anyone is interested) sometimes does 1,200 write I/O operations per second (to write about 100 MB). Yeah, that is a lot of IOPS our SAN is able to deliver! Unfortunately too expensive to buy for use at home.
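To illustrate why auto-commit drives the write rate up, here is a minimal sketch, assuming a Python client with the cx_Oracle driver (the "events" table, the data and the connection details are made up for illustration, not taken from the real system). Every commit forces Oracle to flush the redo log synchronously, so committing per statement turns a batch of inserts into a stream of small writes:

    # Minimal sketch -- assumptions: Python, the cx_Oracle driver, a made-up
    # "events" table; credentials and DSN are placeholders.
    import cx_Oracle

    conn = cx_Oracle.connect("app_user", "secret", "dbhost/ORCL")
    cur = conn.cursor()
    rows = [(i, "some payload") for i in range(10000)]

    # What the application apparently does: auto-commit after every statement.
    # Each commit forces a synchronous redo-log write, so 10,000 inserts
    # become 10,000 small writes hitting the storage.
    conn.autocommit = True
    for row in rows:
        cur.execute("INSERT INTO events (id, payload) VALUES (:1, :2)", row)

    # What a sane design would do: batch the inserts and commit once.
    # One redo-log flush for the whole batch, far fewer (and larger) writes.
    conn.autocommit = False
    cur.executemany("INSERT INTO events (id, payload) VALUES (:1, :2)", rows)
    conn.commit()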
Another application (Nagios 3.0) was generating a lot of major faults (caused by the many fork()s for the checks). It runs on a SunFire V890, and the highest number of major faults per second I have seen on this machine was about 27,000. It never went below 10,000; on average it was maybe somewhere between 15,000 and 20,000. My Solaris desktop (an Ultra 20) generates maybe a few hundred major faults when a lot is going on (most of the time it does not generate many). Nobody can say the V890 is not used… Oh, yes, I suggested enabling the Nagios config setting for large sites; now the major faults are around 0 to 10,000 and the machine is not that stressed anymore. The next step is probably to have a look at the ancient probes (migrated from the Big Brother setup that was in place for several years before) and reduce the number of forks they do.
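For the curious: the setting in question should be use_large_installation_tweaks in nagios.cfg (my assumption of which "large sites" option is meant). Among its effects, Nagios then forks only once per check instead of twice, which is what brings the fork() rate, and with it the major fault rate, down:

    # nagios.cfg -- enable the optimizations for large installations
    # (assumed to be the "config setting for large sites" mentioned above)
    use_large_installation_tweaks=1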
Tags: big brother, dbas, faults, oracle db, performance analysis, performance problems, performance requirements, shelf software, v890, zfs