New users in Solaris 10 branded zones on Solaris 11 not handled automatically

A colleague noticed that on a Solaris 11 system a Solaris 10 branded zone “gains” two new daemons which run with UID 16 and 17. Those users are not automatically added to /etc/passwd, /etc/shadow (and /etc/group)… at least not when the zones are imported from an existing Solaris 10 zone.

I added the two users (netadm, netcfg) and the group (netadm) to the Solaris 10 branded zones by hand (copy & paste of the lines from /etc/passwd, /etc/shadow and /etc/group, then run pwconv) for our few Solaris 10 branded zones on Solaris 11.
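
For reference, a minimal sketch of what gets pasted; the entries below are what a stock Solaris 11 install looks like to my knowledge, so verify the GID (65 here) and the GECOS fields against a native zone on your own system first:

  # /etc/group – the netadm group
  netadm::65:

  # /etc/passwd – the two daemon users, UID 16 and 17
  netadm:x:16:65:Network Adm:/:
  netcfg:x:17:65:Network Configuration Admin:/:

  # /etc/shadow – locked accounts without a password
  netadm:*LK*:::::::
  netcfg:*LK*:::::::

  # then make passwd and shadow consistent
  pwconv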


Increase in DNS requests after a critical patch update of Solaris 10

Some weeks ago we installed a critical patch update (CPU) on a Solaris 10 system (an internal system; about a year of CPUs had accumulated, nothing in them affected us or was considered a security risk, but we decided to apply this one regardless so as not to fall too far behind). Afterwards we noticed that two zones were doing a lot of DNS requests. We had noticed this already before the zones went into production, and we had configured a positive time-to-live in nscd.conf for “hosts” back then. Additionally we noticed a lot of DNS requests for IPv6 addresses (AAAA lookups), while absolutely no IPv6 address is configured in the zones (not even for localhost… and those are exclusive-IP zones). Apparently one of the patches in the CPU changed the caching behaviour; I am not sure if we had the AAAA lookups before.

Today I got some time to debug this. After adding caching of “ipnodes” in addition to “hosts” (and configuring a negative time-to-live for both at the same time), the DNS requests came down to a sane amount.
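
For illustration, the relevant nscd.conf directives; the TTL values here are examples, not our production settings:

  # /etc/nscd.conf – cache "hosts" and "ipnodes" lookups
  enable-cache            hosts           yes
  positive-time-to-live   hosts           3600
  negative-time-to-live   hosts           10
  enable-cache            ipnodes         yes
  positive-time-to-live   ipnodes         3600
  negative-time-to-live   ipnodes         10

  # make nscd re-read its config
  svcadm restart name-service-cache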

For the AAAA lookups I have not found a solution. From my reading of the documentation I would assume there are no IPv6 DNS lookups if no IPv6 address is configured.
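
To see what the resolver is actually sending, watching the DNS traffic of the zone works well; a sketch (the interface name is an example, use the one assigned to the exclusive-IP zone):

  # decode DNS traffic on the zone's interface; the AAAA requests
  # show up in the output as queries of type AAAA
  snoop -d vnet0 port 53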

Complete network loss on Solaris 10u10 CPU 2012-10 on virtualized T4-2

The problem I see at work: a T4-2 with 3 guest LDOMs, virtualized disks and networks lost complete network connectivity “out of the blue” once, and maybe “sporadically” directly after a cold boot. After a lot of discussion with Oracle, I have the impression that we have two problems here.

1st problem:
Total network loss of the machine (no zone, guest LDOM or the primary LDOM was able to receive or send IP packets). This happened once. No idea how to reproduce it. In the logs we see the message “[ID 920994 kern.warning] WARNING: vnetX: exceeded number of permitted handshake attempts (5) on channel xxx”. According to Oracle this is supposed to be fixed in patch 148677-01, which will come with Solaris 10u11. They suggested using a vsw interface instead of a vnet interface on the primary domain to at least lower the probability of this problem hitting us. They were not able to tell us how to reproduce the problem (it seems to be a race condition, at least that is the impression I get from the description given by the Oracle engineer handling the SR). Only a reboot solved the problem. I was told we are the only client that reported this kind of problem; the patch for it is based upon an internal bug report from internal tests.

2nd problem:
After cold boots, sometimes some machines (not all) are not able to connect to an IP on the T4. A reboot helps, as does removing an interface from an aggregate and directly adding it again (see below for the system config). To try to reproduce the problem we did a lot of warm reboots of the primary domain, and the problem never showed up. We did some cold reboots, and the problem showed up once.
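
A sketch of that aggregate workaround; on Solaris 10 dladm addresses aggregates by key, and I assume here that key 1 corresponds to aggr1 and that nxge0 is the member NIC to cycle:

  # take one member NIC out of the aggregate and put it back in
  dladm remove-aggr -d nxge0 1
  dladm add-aggr -d nxge0 1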

In case someone else sees one of those problems on their machines too, please get in contact with me, so we can see what we have in common, track this down further, and share info which may help in reproducing the problems.

System setup:

  • T4-2 with 4 HBAs and 8 NICs (4 * igb on-board, 4 * nxge on an additional network card)
  • 3 guest LDOMs and one io+control domain (both in the primary domain)
  • the guest LDOMs use SAN disks over the 4 HBAs
  • the primary domain uses a mirrored zpool on SSDs
  • 5 vswitches in the hypervisor
  • 4 aggregates (aggr1 – aggr4 with L2 policy), each one with one igb and one nxge NIC
  • each aggregate is connected to a separate vswitch (the 5th vswitch is for machine-internal communication)
  • each guest LDOM has three vnets, each vnet connected to a vswitch (1 guest LDOM has aggr1+2 only for zones (via vnets), 2 guest LDOMs have aggr3+4 only for zones (via vnets), all LDOMs have aggr2+3 (via vnets) for global-zone communication, and all LDOMs are additionally connected to the machine-internal-only vswitch via the 3rd vnet)
  • the primary domain uses 2 vnets connected to the vswitches which are connected to aggr2 and aggr3 (consistency with the other LDOMs on this machine) and has no zones
  • this means each entity (primary domain, guest LDOMs and each zone) has two vnets, and those two vnets are configured in a link-based IPMP setup (vnet-linkprop=phys-state), as sketched after this list
  • each vnet has VLAN tagging configured in the hypervisor (with the zones being in different VLANs than the LDOMs)
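
As a sketch, such a link-based IPMP pair on Solaris 10 is just two hostname files (the interface names and the address are examples, not our actual config):

  # /etc/hostname.vnet0 – active interface, no test address (link-based IPMP)
  192.0.2.10 netmask + broadcast + group ipmp0 up

  # /etc/hostname.vnet1 – standby interface in the same group
  group ipmp0 standby up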

The change proposed by Oracle is to replace the 2 vnet interfaces in the primary domain with 2 vsw interfaces (which means doing the VLAN tagging directly in the primary domain instead of in the vnet config). To have IPMP working, this requires vsw-linkprop=phys-state. We have two systems with the same setup; on one system we already changed this and it is working as before. As we don’t know how to reproduce the 1st problem, we don’t know if the problem is fixed or not, respectively what the probability is of getting hit by this problem again.
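
Roughly, and based only on my understanding of the proposal (the vswitch names, vnet names, VLAN ID and the resulting VLAN interface name are made-up examples):

  # let the vsw devices reflect the physical link state, so the
  # link-based IPMP setup keeps working with vsw interfaces
  ldm set-vsw linkprop=phys-state primary-vsw2
  ldm set-vsw linkprop=phys-state primary-vsw3

  # remove the vnets from the primary domain...
  ldm remove-vnet vnet2 primary
  ldm remove-vnet vnet3 primary

  # ...and plumb the vsw interfaces instead, with VLAN tagging done in
  # the primary domain via the Solaris 10 VLAN naming scheme
  # (VLAN ID * 1000 + instance; here VLAN 123 on vsw instance 2)
  ifconfig vsw123002 plumb 192.0.2.10 netmask + broadcast + group ipmp0 up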

Ideas / suggestions / info welcome.

Reverse engineering a 10 year old Java program

Recently I started to reverse engineer a ~10 year old Java program (that means it was written at about the time I touched Java for the first and the last time at university – not because of a dislike of Java, but because other programming languages were more suitable for the problems at hand). Actually I am just reverse engineering the GUI applet (the frontend) of a service. The vendor has not existed for about 10 years, the program was not taken over by someone else, and the system it is used from needs to be updated. The problem: it runs with JRE 1.3. With Java 5 we do not get error messages, but it does not work as it is supposed to. With Java 6 we get a popup about some values being NULL or 0.

So, first step: decompile all classes of the applet. Second step: compile the result for JRE 1.3 and test if it still works. Third step: modify it to run with Java 6 or 7. Fourth step: be happy.
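
For the second step, a newer JDK can cross-compile for the old runtime; a sketch, assuming the old JRE is still around (the paths are examples):

  # compile 1.3-compatible class files against the 1.3 core classes,
  # so no accidental dependency on newer library APIs slips in
  javac -source 1.3 -target 1.3 \
        -bootclasspath /opt/jdk1.3/jre/lib/rt.jar \
        -extdirs "" \
        -d build $(find src -name '*.java')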

Well, after decompiling all classes I now have about 1450 source files (~1100 Java source code files, the rest are pictures, properties files and maybe other stuff). From initially more than 4000 compile errors I am down to about 600. Well, those are only the compile errors. Bugs in the code (either put there by the decompiler, or by the programmers who wrote this software) are still to be detected. Unfortunately I don’t know if I can just compile a subset of all classes for Java 6 or 7 and let the rest be compiled for Java 1.3, but I have a test environment where I can play around.
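
As far as I understand the class file format, mixing should work in one direction only: a Java 6/7 JVM loads old class files just fine, while a 1.3 JVM rejects anything with a newer class file version. A quick way to check what a given class was compiled for (the class name is a placeholder):

  # major version 47 = JDK 1.3, 48 = 1.4, 49 = Java 5, 50 = Java 6, 51 = Java 7
  javap -verbose SomeDecompiledClass | grep 'major version'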

Plan B (searching for a replacement for the application) is already in progress in parallel. We will see which solution is faster.

WebSphere 7: solution to “password is not set” while there is a password set

I googled a lot regarding the error message “password is not set” when testing a datasource in WebSphere (7.0.0.21), but I did not find a solution. A co-worker finally found one (by accident?).

Problem case

While the application JVMs were running, I created a new JAAS-J2C authenticator (in my case the same login but a different password) and changed the datasource to use the new authenticator. I saved the config and synchronized it. The files config/cells/cellname/nodes/nodename/resources.xml and config/cells/cellname/security.xml showed that the changes arrived on the node. Testing the datasource connectivity now fails with:
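
A quick way to check on the node that the datasource really references the new alias, assuming the alias reference in resources.xml is what it appears to be (paths as above):

  # the datasource entry should now point at the new J2C alias
  grep authDataAlias config/cells/cellname/nodes/nodename/resources.xml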

DSRA8201W: DataSource Configuration: DSRA8040I: Failed to connect to the DataSource. Encountered java.sql.SQLException: The application server rejected the connection. (Password is not set.) DSRA0010E: SQL State = 08004, Error Code = -99,999.

Restarting the application JVMs does not help.

Solution

After stopping everything (application JVMs, nodeagent and deployment manager) and starting everything again, the connection test of the datasource works as expected right away.
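
For completeness, the restart sequence as shell commands (profile paths and server names are examples for a standard install):

  # stop everything, bottom-up
  /opt/IBM/WebSphere/AppServer/profiles/node01/bin/stopServer.sh server1
  /opt/IBM/WebSphere/AppServer/profiles/node01/bin/stopNode.sh
  /opt/IBM/WebSphere/AppServer/profiles/dmgr01/bin/stopManager.sh

  # start everything again, top-down
  /opt/IBM/WebSphere/AppServer/profiles/dmgr01/bin/startManager.sh
  /opt/IBM/WebSphere/AppServer/profiles/node01/bin/startNode.sh
  /opt/IBM/WebSphere/AppServer/profiles/node01/bin/startServer.sh server1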

I have not tested if it is enough to just stop all application JVMs on one node and the corresponding nodeagent, or if I really have to stop the deployment manager too.