Alexander Leidinger

Just another weblog

Jan 15

Complete network loss on Solaris 10u10 CPU 2012-10 on virtualized T4-2

The problem I see at work: a T4-2 with 3 guest LDOMs, virtualized disks and networks, lost complete network connectivity “out of the blue” once, and maybe sporadically directly after a cold boot. After a lot of discussion with Oracle, I have the impression that we have two problems here.

1st problem:
Total network loss of the machine (no zone, no guest LDOM and not even the primary LDOM was able to receive or send IP packets). This happened once. No idea how to reproduce it. In the logs we see the message “[ID 920994 kern.warning] WARNING: vnetX: exceeded number of permitted handshake attempts (5) on channel xxx”. According to Oracle this is supposed to be fixed in patch 148677-01, which will come with Solaris 10u11. They suggested using a vsw interface instead of a vnet interface on the primary domain to at least lower the probability of this problem hitting us. They were not able to tell us how to reproduce the problem (it seems to be a race condition, at least that is the impression I get from the description of the Oracle engineer handling the SR). Only a reboot solved the problem. I was told we are the only client that has reported this kind of problem; the patch is based upon a bug report from Oracle-internal tests.

2nd problem:
After cold boots, sometimes some machines (not all) are not able to connect to an IP on the T4. A reboot helps, as does removing an interface from an aggregate and directly adding it again (see below for the system config). To try to reproduce the problem, we did a lot of warm reboots of the primary domain, and the problem never showed up. We did some cold reboots, and the problem showed up once.
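
For reference, the remove/re-add workaround boils down to something like this in Solaris 10 dladm syntax (a sketch only; the device name and the aggregate key are placeholders for one port of our aggr1):

    # take one port out of the link aggregation (key 1 = aggr1) and put it back
    dladm remove-aggr -d nxge0 1
    dladm add-aggr -d nxge0 1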

In case someone else sees one of those problems on his machines too, please get in contact with me, so that we can compare setups, track this down further, and share information which may help in reproducing the problems.

System setup:

  • T4-2 with 4 HBAs and 8 NICs (4 * igb on-board, 4 * nxge on an additional network card)
  • 3 guest LDOMs and one io+control domain (both in the primary domain)
  • the guest LDOMs use SAN disks over the 4 HBAs
  • the primary domain uses a mirrored zpool on SSDs
  • 5 vswitches in the hypervisor
  • 4 aggregates (aggr1 to aggr4, with L2 policy), each one with one igb and one nxge NIC
  • each aggregate is connected to a separate vswitch (the 5th vswitch is for machine-internal communication)
  • each guest LDOM has three vnets, each vnet connected to a vswitch (1 guest LDOM has aggr1+2 only for zones (via vnets), 2 guest LDOMs have aggr3+4 only for zones (via vnets), all LDOMs have aggr2+3 (via vnets) for global-zone communication, and all LDOMs are additionally connected to the machine-internal-only vswitch via the 3rd vnet)
  • the primary domain uses 2 vnets connected to the vswitches which are connected to aggr2 and aggr3 (for consistency with the other LDOMs on this machine) and has no zones
  • this means each entity (primary domain, guest LDOMs and each zone) has two vnets, and those two vnets are configured in a link-based IPMP setup (vnet-linkprop=phys-state)
  • each vnet has VLAN tagging configured in the hypervisor (with the zones being in different VLANs than the LDOMs)
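
To make this more concrete, the following is a rough sketch of the kind of commands behind such a setup (simplified, with placeholder names and VLAN IDs rather than our literal configuration):

    # one aggregate per vswitch: one igb plus one nxge port, L2 policy (Solaris 10 syntax, key 1 = aggr1)
    dladm create-aggr -P L2 -d igb0 -d nxge0 1

    # a virtual switch in the hypervisor on top of the aggregate
    ldm add-vsw net-dev=aggr1 primary-vsw0 primary

    # a vnet for a guest LDOM: VLAN tagging done in the hypervisor, link state
    # following the physical link so that link-based IPMP works in the guest
    ldm add-vnet pvid=10 linkprop=phys-state vnet0 primary-vsw0 ldom1

Inside the guest, the two vnets then go into one link-based IPMP group (Solaris 10 style, e.g. “group ipmp0” in the /etc/hostname.vnet0 and /etc/hostname.vnet1 files).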

The change proposed by Oracle is to replace the 2 vnet interfaces in the primary domain with 2 vsw interfaces (which means doing the VLAN tagging in the primary domain directly instead of in the vnet config). To have IPMP working, this requires vsw-linkprop=phys-state. We have two systems with the same setup; on one system we already made this change and it works as before. As we do not know how to reproduce the 1st problem, we do not know whether the problem is fixed or not, respectively what the probability is of getting hit by this problem again.
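
On the system we already converted, the change looks roughly like this (again a sketch with placeholder names; I am assuming here that the vsw device in the primary domain accepts the usual Solaris 10 VLAN naming convention of VLAN ID * 1000 + instance):

    # let the vsw link state follow the physical link, so link-based IPMP keeps working
    ldm set-vsw linkprop=phys-state primary-vsw0

    # in the primary domain: plumb a VLAN-tagged interface on top of vsw0
    # (VLAN 10 on instance 0 -> vsw10000), then put it into the IPMP group
    ifconfig vsw10000 plumb group ipmp0 up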

Ideas / suggestions / info welcome.

Aug 10

Reverse engineering a 10-year-old Java program

Recently I started to reverse engineer a roughly 10-year-old Java program (which means it was written at about the same time I touched Java for the first and the last time, at university; not because of a dislike of Java, but because other programming languages were more suitable for the problems at hand). Actually I am just reverse engineering the GUI applet (the frontend) of a service. The vendor has not existed for about 10 years, the program was not taken over by someone else, and the system it is used from needs to be updated. The problem: it runs with JRE 1.3. With Java 5 we do not get error messages, but it does not work as it is supposed to. With Java 6 we get a popup about some values being NULL or 0.

So, the first step is decompiling all classes of the applet. The second step is compiling the result for JRE 1.3 and testing whether it still works. The third step is modifying it to run with Java 6 or 7. The fourth step: be happy.

Well, after decompiling all classes I now have about 1450 source files (~1100 Java source code files; the rest are pictures, properties files and maybe other stuff). From initially more than 4000 compile errors I am down to about 600. Well, those are only the compile errors. Bugs in the code (either put there by the decompiler, or by the programmers who wrote this software) are still to be detected. Unfortunately I do not know if I can just compile a subset of all classes for Java 6/7 and let the rest be compiled for Java 1.3, but I have a test environment where I can play around.
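
For the “compile for JRE 1.3” part, a newer javac can in principle produce 1.3-compatible bytecode as long as it is also pointed at the old runtime classes; a sketch of what I have in mind (the JRE path and directory layout are examples only):

    # cross-compile with a Java 6 javac against the 1.3 class library
    javac -source 1.3 -target 1.3 \
          -bootclasspath /opt/jre1.3/lib/rt.jar \
          -d build/classes $(find src -name '*.java')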

Plan B (searching for a replacement for the application) is already in progress in parallel. We will see which solution is faster.

Jul 06

WebSphere 7: solution to “password is not set” while there is a password set

I googled a lot regarding the error message “password is not set” when testing a datasource in WebSphere (7.0.0.21), but I did not find a solution. A co-worker finally found one (by accident?).

Problem case

While the application JVMs were running, I created a new JAAS-J2C authenticator (in my case the same login but a different password) and changed the datasource to use the new authenticator. I saved the config and synchronized it. The files config/cells/cellname/nodes/nodename/resources.xml and config/cells/cellname/security.xml showed that the changes arrived on the node. Testing the datasource connectivity now fails with:

DSRA8201W: DataSource Configuration: DSRA8040I: Failed to connect to the DataSource.  Encountered java.sql.SQLException: The application server rejected the connection. (Password is not set.)DSRA0010E: SQL State = 08004, Error Code = -99,999.

Restarting the application JVMs does not help.

Solution

After stopping everything (application JVMs, nodeagent and deployment manager) and starting everything again, the connection test of the datasource immediately works as expected.
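
For completeness, “stopping everything and starting everything again” translates to roughly the following order with the standard WebSphere scripts (a sketch; the profile paths and the server name are examples from a typical installation, not necessarily ours):

    # stop: application server(s), then the nodeagent, then the deployment manager
    /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/stopServer.sh server1
    /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/stopNode.sh
    /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/stopManager.sh

    # start everything again in the reverse order
    /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/startManager.sh
    /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startNode.sh
    /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startServer.sh server1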

I have not tested whether it is enough to just stop all application JVMs on one node and the corresponding nodeagent, or whether I really have to stop the deployment manager too.

Jan 24

Strange performance problem with the IBM HTTP Server (modified Apache)

Recently we had a strange performance problem at work. A web application was having slow response times from time to time and users complained. We did not see unusual CPU/memory/swap usage on any involved machine. I generated heat maps from performance measurements and there were no obvious traces of slow behavior. We did not find any reason why the application should be slow for clients, but obviously it was.

Then someone mentioned two recent Apache DoS problems. Number one, the cookie hash issue, did not seem to be the cause: we did not see the huge CPU or memory consumption we would expect with such an attack. The second one, the slow-read problem (no maximum connection duration timeout in Apache, exploitable with a small TCP receive window), looked like it could be an issue. The slow-read DoS problem can be detected by looking at the server-status page.

What you would see on the server-status page is a lot of worker threads in the ‘W’ (write data) state. This is supposed to be an indication of slow reads. We did see this.

As our site is behind a reverse proxy with some kind of IDS/IPS feature, we took the reverse proxy out of the picture to get a better view of who is doing what (we do not have X-Forwarded-For configured).

At this point we still saw a lot of connections in the ‘W’ state from the rev-proxy. This was strange; it was not supposed to do this. After restarting the rev-proxy (while the clients went directly to the webservers) we still had those ‘W’ entries in the server-status. This was getting really strange. And to add to this, the duration of the ‘W’ state of the rev-proxy connections showed that this state had been active for several thousand seconds. Ugh. WTF?

OK, next step: killing the offenders. First I verified in the list of connections in the server-status (extended status is activated) that all worker threads with a rev-proxy connection of a given PID were in this strange state and no client request was active. Then I killed this particular PID. I wanted to repeat this until no such strange connections were left. Unfortunately I arrived at PIDs which were listed in the server-status (even after a refresh), but did not exist in the OS. That is bad. Very bad.

So the next step was to move all clients away from one webserver, and then to reboot this webserver completely to be sure the entire system is in a known good state for future monitoring (the big hammer approach).

As we did not know if this strange state was due to some kind of mis-administration of the system or not, we decided to have the rev-proxy again in front of the webserver and to monitor the systems.

We survived about one and a half days. After that, all worker threads on all webservers were in this state. DoS. At this point we were sure there was something malicious going on (some days later our management showed us a mail from a company which had offered security consulting 2 months before, to make sure we do not get hit by a DDoS during the holiday season… a coincidence?).

Next step: verification of missing security patches (unfortunately it is not us who decides which patches we apply to the systems). What we noticed is that the rev-proxy is missing a patch for a DoS problem, and for the webservers a new fixpack was scheduled to be released in the near future (as of this writing: it is available now).

Since we applied the DoS fix for the rev-proxy, we have not had the problem anymore. This is not really conclusive, as we do not really know whether this fixed the problem or whether the attacker simply stopped attacking us.

From reading what the DoS patch fixes, we would expect to see some continuous traffic going on between the rev-proxy and the webserver, but there was nothing when we observed the strange state.

We are still not allowed to apply patches as we think we should, but at least we have better monitoring in place to watch out for this particular problem: activate the extended status in Apache/IHS, look for lines with state ‘W’ and a long duration (column ‘SS’), and raise an alert if the duration is higher than the maximum possible/expected/desired duration for all possible URLs.
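
The check itself can be as simple as something like this (a rough sketch; it assumes lynx is available to dump the HTML server-status page and that the mode is in column 4 and the ‘SS’ value in column 6 of the scoreboard table, which you should verify against your Apache/IHS version):

    # list worker threads that have been in state 'W' for more than 300 seconds
    lynx -dump -width=300 'http://localhost/server-status' \
      | awk '$4 == "W" && $6 > 300 { print }'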

Oct 04

IBM HTTP Server (7) and Verisign Intermediate Certificates

I was fighting with the right way to add a recent Verisign certificate to a keystore for the IBM HTTP Server (IHS). I used the ikeyman utility on Solaris.

The problem indicator was the error message “SSL0208E: SSL Handshake Failed, Certificate validation error” in the SSL log of IHS.

The IBM websites were not really helpful in tracking down the problem (the missing piece). The Verisign instructions did not lead to a working solution either.

What was done before: the Verisign Intermediate Certificates were imported as “Signer Certificates”, and the certificate for the webserver was imported under “Personal Certificates”. Without the signer certificates the personal certificate would not import, due to a missing intermediate certificate (no valid trust chain).

What I did to resolve the problem:

  • I removed all Verisign certificates.
  • I added the Verisign Root Certificate and the Verisign Intermediate Certificate A as signer certificates (use the “Add” button). I also tried to add the Verisign Intermediate Certificate B, but it complained that some part of it was already there as part of Intermediate Certificate A. I skipped this part.
  • Then I converted the server certificate and key to a PKCS#12 file via “openssl pkcs12 -export -in server-cert.arm -out cert-for-ihs.p12 -inkey server-key.arm -name name_for_cert_in_ihs”.
  • After that I imported cert-for-ihs.p12 as a “Personal Certificate”. The import dialog offers 3 items to import. I selected “name_for_cert_in_ihs” and the one containing “cn=verisign class 3 public primary certification authority - g5” (when I selected the 3rd one too, it complained that a part of it was already imported with a different name).

With this modified keystore in place, I just had to select the certificate via “SSLServerCert name_for_cert_in_ihs” in the IHS config and the problem was fixed.
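
An alternative that may avoid the manual selection games in ikeyman (I have not tested it this way, so treat it as an assumption) is to bundle the intermediate/root chain directly into the PKCS#12 file, so that the import brings a complete trust chain along:

    # build a chain file from the Verisign intermediate and root certificates
    cat verisign-intermediate-A.arm verisign-root.arm > chain.pem

    # export server certificate, key and chain into one PKCS#12 file
    openssl pkcs12 -export -in server-cert.arm -inkey server-key.arm \
        -certfile chain.pem -name name_for_cert_in_ihs -out cert-for-ihs.p12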
