New users in Solaris 10 branded zones on Solaris 11 not handled automatically

A colleague noticed that on a Solaris 11 system a Solaris 10 branded zone “gains” two new daemons which are running with UID 16 and 17. Those users are not automatically added to /etc/passwd, /etc/shadow (and /etc/group)… at least not when the zones are imported from an existing Solaris 10 zone.

I added the two users (netadm, netcfg) and the group (netadm) to the Solaris 10 branded zones by hand (copy & paste of the lines in /etc/passwd, /etc/shadow, /etc/group + run pwconv) for our few Solaris 10 branded zones on Solaris 11.
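For reference, the entries on a Solaris 11 system look roughly like this; copy the exact lines from your own global zone rather than from here, in case the UIDs, GIDs or comment fields differ (pwconv then takes care of the matching /etc/shadow entries):

    # /etc/passwd
    netadm:x:16:65:Network Adm:/:
    netcfg:x:17:65:Network Configuration Admin:/:
    # /etc/group
    netadm::65: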


Increase of DNS requests after a critical patch update of Solaris 10

Some weeks ago we installed critical patch updates (CPU) on a Solaris 10 system (an internal system; about a year of CPUs had piled up, nothing in them affected us or was considered a security risk, but we decided to apply this one regardless so as not to fall too far behind). Afterwards we noticed that two zones were doing a lot of DNS requests. We had already noticed this before the zones went into production and had configured a positive time-to-live for “hosts” in nscd.conf. Additionally we noticed a lot of DNS requests for IPv6 addresses (AAAA lookups), while absolutely no IPv6 address is configured in the zones (not even for localhost… and those are exclusive-IP zones). Apparently one of the patches in the CPU changed the caching behaviour; I am not sure whether we had the AAAA lookups before.

Today I got some time to debug this. After adding caching for “ipnodes” in addition to “hosts” (and configuring a negative time-to-live for both at the same time), the DNS requests came down to a sane amount.
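The relevant part of /etc/nscd.conf now looks roughly like this (the TTL values below are examples, not necessarily the ones we use):

    # cache "ipnodes" in addition to "hosts", with a negative TTL as well
    enable-cache            ipnodes  yes
    positive-time-to-live   hosts    3600
    negative-time-to-live   hosts      10
    positive-time-to-live   ipnodes  3600
    negative-time-to-live   ipnodes    10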

For the AAAA lookups I have not found a solution yet. From my reading of the documentation I would expect no IPv6 DNS lookups to happen when no IPv6 address is configured.

Complete network loss on Solaris 10u10 CPU 2012-10 on virtualized T4-2

The problem I see at work: a T4-2 with 3 guest LDOMs, virtualized disks and networks, lost complete network connectivity “out of the blue” once, and maybe “sporadically” directly after a cold boot. After a lot of discussion with Oracle, I have the impression that we have two problems here.

1st problem:
Total network loss of the machine (no zone, guest LDOM or the primary LDOM was able to receive or send IP packets). This happened once. No idea how to reproduce it. In the logs we see the message “[ID 920994 kern.warning] WARNING: vnetX: exceeded number of permitted handshake attempts (5) on channel xxx”. According to Oracle this is supposed to be fixed in patch 148677-01, which will come with Solaris 10u11. They suggested to use a vsw interface instead of a vnet interface in the primary domain to at least lower the probability of this problem hitting us. They were not able to tell us how to reproduce the problem (it seems to be a race condition, at least that is the impression I get from the description of the Oracle engineer handling the SR). Only a reboot solved the problem. I was told we are the only client that reported this kind of problem; the patch for it is based on an internal bug report from internal tests.

2nd problem:
After cold boots, sometimes some machines (not all) are not able to connect to an IP on the T4. A reboot helps, as does removing an interface from an aggregate and directly adding it again (see below for the system config). To try to reproduce the problem, we did a lot of warm reboots of the primary domain, and the problem never showed up. We did some cold reboots, and the problem showed up once.
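The “remove and re-add the aggregate member” workaround boils down to something like this (Solaris 10 key-based dladm syntax; the NIC name and the aggregate key below are placeholders, check the output of dladm show-aggr first):

    # show the current aggregates and their member NICs
    dladm show-aggr
    # take one member NIC out of the aggregate and put it back in
    dladm remove-aggr -d nxge3 1
    dladm add-aggr -d nxge3 1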

In case someone else sees one of those problems on their machines too, please get in contact with me so we can compare what we have in common, try to track this down further, and share info which may help in reproducing the problems.

System setup:

  • T4-2 with 4 HBAs and 8 NICs (4 * igb on-board, 4 * nxge on an additional network card)
  • 3 guest LDOMs and one io+control domain (both in the primary domain)
  • the guest LDOMs use SAN disks over the 4 HBAs
  • the primary domain uses a mirrored zpool on SSDs
  • 5 vswitches in the hypervisor
  • 4 aggregates (aggr1-aggr4 with L2 policy), each one with one igb and one nxge NIC
  • each aggregate is connected to a separate vswitch (the 5th vswitch is for machine-internal communication)
  • each guest LDOM has three vnets, each vnet connected to a vswitch (1 guest LDOM has aggr1+2 only for zones (via vnets), 2 guest LDOMs have aggr3+4 only for zones (via vnets), all LDOMs have aggr2+3 (via vnets) for global-zone communication, and all LDOMs are additionally connected to the machine-internal-only vswitch via the 3rd vnet)
  • the primary domain uses 2 vnets connected to the vswitches which are connected to aggr2 and aggr3 (for consistency with the other LDOMs on this machine) and has no zones
  • this means each entity (primary domain, guest LDOMs and each zone) has two vnets, and those two vnets are configured in a link-based IPMP setup (vnet-linkprop=phys-state); see the sketch after this list
  • each vnet has VLAN tagging configured in the hypervisor (with the zones being in different VLANs than the LDOMs)
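For illustration, a link-based IPMP setup over two vnets in Solaris 10 looks roughly like this (interface names, group name and the address are placeholders, not our real ones):

    # /etc/hostname.vnet0 - active interface, carries the data address
    192.0.2.10 netmask 255.255.255.0 broadcast + group prod0 up
    # /etc/hostname.vnet1 - second interface in the same IPMP group, no test address
    group prod0 standby up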

The change proposed by Oracle is to replace the 2 vnet interfaces in the primary domain with 2 vsw interfaces (which means doing the VLAN tagging directly in the primary domain instead of in the vnet config). To have IPMP working, this requires vsw-linkprop=phys-state. We have two systems with the same setup; on one system we have already made this change and it works as before. As we do not know how to reproduce the 1st problem, we do not know whether the problem is fixed, or what the probability is of getting hit by it again.
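In terms of commands, the proposed change is roughly the following (the vnet and vswitch names are placeholders, the real ones depend on the existing configuration):

    # remove the two vnets from the primary domain ...
    ldm remove-vnet admin-vnet0 primary
    ldm remove-vnet admin-vnet1 primary
    # ... and let the corresponding vswitches report the physical link state,
    # so that link-based IPMP over the vsw interfaces works
    ldm set-vsw linkprop=phys-state primary-vsw2
    ldm set-vsw linkprop=phys-state primary-vsw3
    # afterwards the vsw devices are plumbed in the primary domain instead of
    # the vnets, with the VLAN tagging done there (Solaris 10 VLAN PPA naming,
    # e.g. /etc/hostname.vsw123002 for VLAN 123 on vsw2)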

Ideas / suggestions / info welcome.

Strange performance problem with the IBM HTTP Server (modified Apache)

Recently we had a strange performance problem at work. A web application was having slow response times from time to time and users complained. We did not see unusual CPU/mem/swap usage on any involved machine. I generated heat-maps from performance measurements and there were no obvious traces of slow behavior. We did not find any reason why the application should be slow for clients, but obviously it was.

Then someone mentioned two recent Apache DoS problems. Number one – the cookie hash issue – did not seem to be the cause; we did not see the huge CPU or memory consumption which we would expect to see with such an attack. The second one – the slow-read problem (no maximum connection duration timeout in Apache, can be exploited with a small TCP receive window) – looked like it could be an issue. The slow-read DoS problem can be detected by looking at the server-status page.

What you would see on the server-status page are a lot of worker threads in the ‘W’ (write data) state. This is supposed to be an indication of slow reads. We did see this.

As our site is behind a reverse proxy with some kind of IDS/IPS feature, we took the reverse proxy out of the picture to get a better view of who is doing what (we do not have X-Forwarded-For configured).

At this point we still noticed a lot of connections in the ‘W’ state coming from the rev-proxy. This was strange, it was not supposed to do this. Even after restarting the rev-proxy (while the clients went directly to the webservers) we still had those ‘W’ entries in the server-status. This was getting really strange. And to add to this, the duration of the ‘W’ state for the rev-proxy connections showed that this state had been active for several thousand seconds. Ugh. WTF?

Ok, next step: killing the offenders. First I verified in the list of connections in the server-status (extended-status is activated) that all worker threads of a given PID with a rev-proxy connection were in this strange state and no client request was active. Then I killed that particular PID. I wanted to repeat this until no such strange connections were left. Unfortunately I arrived at PIDs which were listed in the server-status (even after a refresh), but which did not exist in the OS. That is bad. Very bad.

So the next step was to move all clients away from one webserver, and then to reboot this webserver completely to be sure the entire system is in a known good state for future monitoring (the big hammer approach).

As we did not know if this strange state was due to some kind of mis-administration of the system or not, we decided to have the rev-proxy in front of the webservers again and to monitor the systems.

We survived about one and a half days. After that all worker threads on all webservers were in this state. DoS. At this point we were sure there was something malicious going on (some days later our management showed us a mail from a company which had offered security consulting 2 months before, to make sure we do not get hit by a DDoS during the holiday season… a coincidence?).

Next step: verification of missing security patches (unfortunately it is not us who decide which patches get applied to the systems). What we noticed is that the rev-proxy was missing a patch for a DoS problem, and that for the webservers a new fixpack was scheduled to be released in the near future (as of this writing: it is available now).

Since we applied the DoS fix on the rev-proxy, we have not had the problem anymore. This is not really conclusive, as we do not really know whether the fix solved the problem or the attacker simply stopped attacking us.

From reading what the DoS patch fixes, we would assume that we should see some continuous traffic going on between the rev-proxy and the webserver, but there was nothing of the kind when we observed the strange state.

We are still not allowed to apply patches the way we think we should, but at least we have better monitoring in place to watch out for this particular problem (activate the extended status in Apache/IHS, look for lines with state ‘W’ and a long duration (column ‘SS’), and raise an alert if the duration is higher than the maximum possible/expected/desired duration for all possible URLs).
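As a rough illustration of that check (the URL, the column positions and the threshold are assumptions based on a stock mod_status HTML page, so verify them against your own server-status output before relying on this):

    # alert if any worker has been in state 'W' for more than 300 seconds;
    # after stripping the HTML tags, column 4 is the state (M) and
    # column 6 the seconds since the beginning of the request (SS)
    curl -s http://localhost/server-status | \
      sed 's/<[^>]*>/ /g' | \
      awk '$4 == "W" && ($6 + 0) > 300 { print "long W state:", $0 }'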

Forcing a route in Solaris?

I have a little problem finding a clean solution to the following problem.

A machine with two network interfaces and no default route. The first interface gets an IP at boot time and the corresponding static route is inserted into the routing table during boot without problems. The second interface only gets an IP address when the shared-IP zones on the machine are started; during boot the interface is plumbed, but without any address. The networks on those interfaces are not connected and the machine is not a gateway (this means we have a machine-administration network and a production network). The static routes we want to have for the addresses of the zones are not added to the routing table, because the next hop is not reachable at the time the routing setup is done. As soon as the zones are up (and the interface has an IP), a re-run of the routing setup adds the missing static routes.

Unfortunately I can not tell Solaris to keep the static route even if the next hop is not reachable ATM (at least I have not found an option to the route command which does this).

One solution to this problem would be to add an address at boot time to the interface which currently does not have an address at boot (probably with the deprecated flag set). The problem is that this subnet (a /28) does not have enough free addresses anymore, so this is not an option.

Another solution is to use a script which re-runs the routing setup after the zones are started. This is a pragmatic solution, but not a clean one.
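A minimal sketch of such a script could look like this (the addresses are placeholders, and how it gets started after the zones are up, e.g. from an rc script or a one-shot SMF service, is left out):

    #!/bin/ksh
    # placeholders: next hop on the production network and the network
    # that should be routed via it
    NEXTHOP=192.0.2.1
    NET=198.51.100.0/24

    # wait until the zones have brought up the second interface and the
    # next hop answers, then add the static route
    until ping $NEXTHOP 1 >/dev/null 2>&1; do
        sleep 10
    done
    route add -net $NET $NEXTHOP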

As I understand the in.routed man page, in.routed is not an option with the default config, because the machine shall not route between the networks, and shall not change its routing based upon RIP messages from other machines. Unfortunately I do not know enough about it to be sure, and I do not get the time to play around with this. I have seen some interesting options regarding this in the man page, but playing around with them and sniffing the network to see what happens is not an option ATM. Anyone with a config/tutorial for this “do not broadcast anything, do not accept anything from outside” case (if possible)?