Bug 322661
Summary: | NETWORK issue from Egenera pBlade | ||
---|---|---|---|
Product: | [Retired] Red Hat Hardware Certification Program | Reporter: | Xu Bo <bxu> |
Component: | Test Suite (tests) | Assignee: | Greg Nichols <gnichols> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 5 | CC: | gcase, ykun |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | All | ||
OS: | Linux | ||
URL: | https://hardware.redhat.com/show.cgi?id=317391 | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2007-12-17 15:49:03 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Xu Bo
2007-10-08 02:55:52 UTC
Forcibly setting the interface speed to 100 will work around the problem, so it looks like all we need is better error handling (and then working with the vendor to correct the real problem):

```python
def getInterfaceSpeed(self):
    self.interfaceSpeed = 100
    return
    # skip the rest of this, trying to work around bad
    # data returning from ethtool
    for interfaceString in (self.interface, "p%s" % self.interface):
        ethtoolCommand = "ethtool %s | fgrep \"Speed\"" % interfaceString
        pipe = os.popen(ethtoolCommand)
        line = pipe.readline()
        pipe.close()
        if line:
            pattern = re.compile("\d+")
            match = pattern.search(line)
            if match:
                self.interfaceSpeed = string.atoi(match.group())
                return
    # otherwise
    self.interfaceSpeed = 100
    print "interface speed is %u" % self.interfaceSpeed
```

Fixed in R4.

This is the patch I used to get around the speed detection:

```diff
diff -Naurp network.py.orig network.py
--- network.py.orig	2007-10-05 18:21:52.000000000 -0400
+++ network.py	2007-10-05 18:23:58.000000000 -0400
@@ -193,6 +193,10 @@ class NetworkTest(Test):
         return returnValue == 0

     def getInterfaceSpeed(self):
+        self.interfaceSpeed = 100
+        return
+        # skip the rest of this, trying to work around bad
+        # data returning from ethtool
         for interfaceString in (self.interface, "p%s" % self.interface):
             ethtoolCommand = "ethtool %s | fgrep \"Speed\"" % interfaceString
             pipe = os.popen(ethtoolCommand)
```

Hm... there's something else going on here, but I don't know what it is yet. After going through the ethtool output, I found that all the Xen dom0s I tested (i686, x86_64, ia64, and this Egenera i686 blade) output link status and nothing else; link speed is never displayed. This means my original hypothesis was wrong. That being the case, you'd think the forced `self.interfaceSpeed = 100` line wouldn't be necessary, as no system reports speed correctly in the scripts. But on these blades, if you use the unmodified script, the dd command that creates /var/www/html/httptest.file never stops, eventually filling up the entire hard drive.
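The "better error handling" the reporter asks for could be sketched roughly as follows. This is a hypothetical rework, not code from the certification suite: it replaces `os.popen` with `subprocess`, and falls back to a default speed (the reporter's workaround value of 100) whenever ethtool reports no `Speed:` line, as on the Xen dom0s described above. The function and constant names are invented for illustration.

```python
import re
import subprocess

DEFAULT_SPEED_MBPS = 100  # fallback value taken from the reporter's workaround


def parse_speed(ethtool_output):
    """Return the link speed in Mb/s found in `ethtool <iface>` output,
    or None when no numeric 'Speed:' value is present (as on the Xen
    dom0s in this report, or when ethtool prints 'Speed: Unknown!')."""
    match = re.search(r"Speed:\s*(\d+)", ethtool_output)
    return int(match.group(1)) if match else None


def get_interface_speed(interface):
    """Query ethtool for the interface speed; instead of letting bad or
    missing output derail the test, fall back to a sane default."""
    try:
        output = subprocess.run(
            ["ethtool", interface],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return DEFAULT_SPEED_MBPS
    speed = parse_speed(output)
    return speed if speed is not None else DEFAULT_SPEED_MBPS
```

The point of splitting out `parse_speed` is that the missing-speed case becomes an explicit `None` rather than an exception or garbage value, so the caller always ends up with a usable number.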
So, I put the patch back in and that portion of the test runs. Shortly after that portion of the test finishes, the system begins the NFS test and immediately displays errors that are not present when run with the PAE kernel:

```
nfs: server 172.30.192.193 not responding, still trying
```

This repeats over and over until the test is killed.

After some trial and error, I determined that the problem is caused by the mount protocol. If you use UDP, mounts hang after issuing a simple 'ls' command. If you switch to TCP, commands work as expected. After modifying the network.py script further, changing the protocol on the nfsopts line to tcp:

```
nfsopts="rw,intr,rsize=12288,wsize=12288,tcp"
```

I can obtain a successful run of the NETWORK test. Now we need to determine what is causing dd to run out of control and why UDP mounts are unsuccessful.

A little background: the reason the network test uses NFS mounted via UDP is to test UDP.

Well then, I would say that it's definitely doing its job. I wonder what's so different about Egenera's architecture that would cause UDP traffic to fail? I'm waiting to hear more from Egenera.
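The workaround above boils down to swapping the transport token in the NFS mount-options string. A minimal helper for that edit might look like this; `with_transport` is a hypothetical name, not a function from network.py, and it assumes the options string uses the plain `udp`/`tcp` tokens shown in the report:

```python
def with_transport(nfsopts, proto):
    """Return an NFS mount-options string with any existing udp/tcp
    token removed and the requested transport appended instead.
    (Hypothetical helper illustrating the reporter's nfsopts change.)"""
    opts = [o for o in nfsopts.split(",") if o not in ("udp", "tcp")]
    opts.append(proto)
    return ",".join(opts)


# Switching the report's original UDP options to the working TCP variant:
nfsopts = with_transport("rw,intr,rsize=12288,wsize=12288,udp", "tcp")
# → "rw,intr,rsize=12288,wsize=12288,tcp"
```

Keeping the change as a one-line option edit, rather than a second hard-coded options string, makes it easy to flip back to UDP once the underlying Egenera transport problem is understood, since testing UDP is the stated purpose of this part of the suite.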