Bug 222690 - HTS: hts network test failed on blades

Product: [Retired] Red Hat Hardware Certification Program
Component: Test Suite (tests)
Version: 5
Hardware: ppc64
OS: Linux
Status: CLOSED NOTABUG
Severity: high
Priority: medium
Reporter: Hien Nguyen <hien1>
Assignee: Greg Nichols <gnichols>
CC: markwiz, sglass, wwlinuxengineering
Keywords: Reopened
Last Closed: 2007-02-06 13:40:30 UTC

Attachments:
- network test output of run3
- network test output of run4
- hal-device output
- hts submit log file
- network test after disabling eth0
- results from a new run - from IBM

Description Hien Nguyen 2007-01-15 19:06:32 UTC
Description of problem:
The network test on blade systems (JS20, JS21) fails.

Version-Release number of selected component (if applicable):
HTS-5.0-15

How reproducible:
Run the network test on blades.

Steps to Reproduce:
1. Install RHEL5 B2 snapshot 6 on the SUT and the Network Server.
2. On the Network Server: hts server start
3. On the SUT: hts discover; hts plan;
   hts certify --test network --server 10.1.1.9
  
Actual results:
hts print --last shows that one NIC failed:
Run: 4 on 2007-01-15 03:39:17
--------------------------------------------
Tests: 7 planned,  3 run, 2 passed, 1 failed
--------------------------------------------


Test Run 4
----------------------------------------------------------------
core       Computer                             -
memory     Computer                             -
info       Computer                             - PASS
usb        Computer                             -
network    Networking Interface                 - PASS
network    Networking Interface                 - FAIL
storage    TOSHIBA MK4019GAXB                   -
---------
output.log:

[root@bl2-14 ~]# cat /var/log/hts/runs/4/network/output.log
Running ./network.py:
1000+0 records in
1000+0 records out
131072000 bytes (131 MB) copied, 36.5643 seconds, 3.6 MB/s
connect: No route to host
connect: No route to host
using test server 10.1.1.9
Shutting down all interfaces...
    interface: eth1

Interface eth0 is down
waiting...
done
Bringing up interface eth0
Checking...
Interface eth0 is RUNNING
Checking via ping to 10.1.1.9...
ip address: 10.1.1.50
tcp test:
test cycle 1 of 5
testing latency...
testing bandwidth to 10.1.1.9...
FAILED (0)
Restoring all interfaces...
Network test on device eth0 failed
...finished running ./network.py, exit code=1
-----------
Expected results:
Both NICs should PASS

Additional info:

On the JS20, eth0 is used by the BladeCenter for outbound access; eth1 is
configured for the blade itself. I need HTS to provide a command-line way to
test individual NICs one at a time.
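
(For reference, the output.log above suggests the test roughly does the
following: shut down all network interfaces, bring up only the interface under
test, verify the test server is reachable, then run lmbench latency and
bandwidth cycles against it. A rough manual equivalent of that sequence, using
placeholder interface names and the server IP from this report rather than
anything taken from the actual network.py, would be:

> ifdown eth1                # shut down the other interface(s)
> ifup eth0                  # bring up only the interface under test
> ping -c 3 10.1.1.9         # check that the test server is reachable
> lat_tcp 10.1.1.9           # lmbench TCP latency
> bw_tcp 10.1.1.9            # lmbench TCP bandwidth

On a JS20 this means the BladeCenter's outbound path (eth0) is taken down
whenever the other NIC is tested, which is why per-NIC control matters here.)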

Comment 1 Hien Nguyen 2007-01-15 19:06:32 UTC
Created attachment 145606 [details]
network test output of run3

Comment 2 Hien Nguyen 2007-01-15 19:13:42 UTC
Created attachment 145610 [details]
network test output of run4

Comment 3 Hien Nguyen 2007-01-15 19:14:32 UTC
Created attachment 145611 [details]
hal-device output

Comment 4 Hien Nguyen 2007-01-15 19:47:33 UTC
Created attachment 145617 [details]
hts submit log file

Comment 5 Greg Nichols 2007-01-15 21:05:15 UTC
*** Bug 222691 has been marked as a duplicate of this bug. ***

Comment 6 Greg Nichols 2007-01-15 21:13:41 UTC
HTS allows individual device tests to be disabled.

To disable eth0:
> hts plan --udi /org/freedesktop/Hal/devices/net_00_0d_60_1e_80_0e --disable

To disable eth1:
> hts plan --udi /org/freedesktop/Hal/devices/net_00_0d_60_1e_80_0f --disable

Note: the UDI can be found either from hal-device output or from the Advanced
tab in the Device Manager UI.
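
For example, one rough way to list the network-device UDIs from the shell
(assuming the RHEL5 hal-device output format, where each device block starts
with a "udi = '...'" line) is:

> hal-device | grep "udi = " | grep net_

Match the MAC address embedded in the UDI (e.g. net_00_0d_60_1e_80_0e for
00:0d:60:1e:80:0e) to the interface you want to skip, then pass that UDI to
hts plan --udi <udi> --disable as shown above.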

Comment 7 Hien Nguyen 2007-01-16 19:20:23 UTC
Created attachment 145718 [details]
network test after disabling eth0

Comment 8 Hien Nguyen 2007-01-16 19:21:43 UTC
I disabled eth0 and ran the network test, but hts still brought down all
network interfaces, then brought eth0 up and tested both eth0 and eth1. As a
result, the network test failed on both NICs.

See attachment output-run6.log


Comment 9 Greg Nichols 2007-01-18 21:51:45 UTC
Why is shutting down eth0 a problem?  Are you running the test
via SSH through eth0?

The logs seem to indicate that the lmbench services were not
reachable on the test server.  You can test this manually with:

lat_tcp <test server ip>
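
(If lat_tcp cannot connect, the lmbench services may simply not be running on
the test server. Assuming a standard lmbench install, they can also be started
by hand for a manual check:

On the test server:
> lat_tcp -s
> bw_tcp -s

On the SUT:
> lat_tcp 10.1.1.9
> bw_tcp 10.1.1.9

Since the failures in the logs happen at the bandwidth step ("testing
bandwidth to 10.1.1.9... FAILED"), checking bw_tcp against the server is worth
doing as well.)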



Comment 10 Hien Nguyen 2007-01-18 23:23:58 UTC
Let me configure both eth0 and eth1, run the hts network test, and report the
log to you.


Comment 11 Hien Nguyen 2007-01-18 23:41:52 UTC
I didn't use ssh to access the machine. I access the BladeCenter and open a
console on the blade.

[root@bl3-13 ~]# lat_tcp 10.1.1.9
TCP latency using 10.1.1.9: 0.6680 microseconds

[root@bl3-13 ~]# hts certify --test network --noinfo
loaded configuration /var/hts/config.xml
loaded plan /var/hts/plan.xml
loaded results /var/hts/results.xml
running only test network
running network on /org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_97
mkdir -p /tmp/hts-network-FnZkik/mnt/tests/HTS/hts/network
cp -a testinfo.desc runtest.sh network.py Makefile
/tmp/hts-network-FnZkik/mnt/tests/HTS/hts/network
install -m 0755 runtest.sh /tmp/hts-network-FnZkik/mnt/tests/HTS/hts/network
make OUTPUTFILE=/var/log/hts/runs/1/network/output.log RUNMODE=normal
UDI=/org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_97 run
chmod a+x ./runtest.sh ./network.py
./runtest.sh
/tmp/hts-network-FnZkik/mnt/tests/HTS/hts/network/network.py
Running ./network.py:
Error: No test server was set, test abort
...finished running ./network.py, exit code=1
recovered exit code=1
rhts-report-result /HTS/hts/network FAIL /var/log/hts/runs/1/network/output.log
saveOutput: /var/log/hts/runs/1/network/output.log
Return value was 0
running network on /org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_96
mkdir -p /tmp/hts-network-jTQaPr/mnt/tests/HTS/hts/network
cp -a testinfo.desc runtest.sh network.py Makefile
/tmp/hts-network-jTQaPr/mnt/tests/HTS/hts/network
install -m 0755 runtest.sh /tmp/hts-network-jTQaPr/mnt/tests/HTS/hts/network
make OUTPUTFILE=/var/log/hts/runs/1/network/output.log RUNMODE=normal
UDI=/org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_96 run
chmod a+x ./runtest.sh ./network.py
./runtest.sh
/tmp/hts-network-jTQaPr/mnt/tests/HTS/hts/network/network.py
Running ./network.py:
Error: No test server was set, test abort
...finished running ./network.py, exit code=1
recovered exit code=1
rhts-report-result /HTS/hts/network FAIL /var/log/hts/runs/1/network/output.log
saveOutput: /var/log/hts/runs/1/network/output.log
Return value was 0
Opening virtualization test results /var/hts/virt-results.xml
No virtualization test results were found (virt-results.xml)
[root@bl3-13 ~]# hts certify --test network --server 10.1.1.9 --noinfo
loaded configuration /var/hts/config.xml
loaded plan /var/hts/plan.xml
loaded results /var/hts/results.xml
running only test network
running network on /org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_97
mkdir -p /tmp/hts-network-EjNTwT/mnt/tests/HTS/hts/network
cp -a testinfo.desc runtest.sh network.py Makefile
/tmp/hts-network-EjNTwT/mnt/tests/HTS/hts/network
install -m 0755 runtest.sh /tmp/hts-network-EjNTwT/mnt/tests/HTS/hts/network
make OUTPUTFILE=/var/log/hts/runs/2/network/output.log RUNMODE=normal
UDI=/org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_97 TESTSERVER=10.1.1.9 run
chmod a+x ./runtest.sh ./network.py
./runtest.sh
/tmp/hts-network-EjNTwT/mnt/tests/HTS/hts/network/network.py
Running ./network.py:
1000+0 records in
1000+0 records out
131072000 bytes (131 MB) copied, 36.262 seconds, 3.6 MB/s
TCP latency using 10.1.1.9: 0.6679 microseconds
socket connection: Connection refused
using test server 10.1.1.9
Shutting down all interfaces...
    interface: eth0
 
    interface: eth1
 
Interface eth1 is down
waiting...
done
Bringing up interface eth1
Checking...
Interface eth1 is RUNNING
Checking via ping to 10.1.1.9...
  done
ip address: 10.1.1.60
tcp test:
test cycle 1 of 5
testing latency...
testing bandwidth to 10.1.1.9...
FAILED (0)
Restoring all interfaces...
Network test on device eth1 failed
...finished running ./network.py, exit code=1
recovered exit code=1
rhts-report-result /HTS/hts/network FAIL /var/log/hts/runs/2/network/output.log
saveOutput: /var/log/hts/runs/2/network/output.log
Return value was 0
running network on /org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_96
mkdir -p /tmp/hts-network-5Ats-H/mnt/tests/HTS/hts/network
cp -a testinfo.desc runtest.sh network.py Makefile
/tmp/hts-network-5Ats-H/mnt/tests/HTS/hts/network
install -m 0755 runtest.sh /tmp/hts-network-5Ats-H/mnt/tests/HTS/hts/network
make OUTPUTFILE=/var/log/hts/runs/2/network/output.log RUNMODE=normal
UDI=/org/freedesktop/Hal/devices/net_00_0d_60_1e_e0_96 TESTSERVER=10.1.1.9 run
chmod a+x ./runtest.sh ./network.py
./runtest.sh
/tmp/hts-network-5Ats-H/mnt/tests/HTS/hts/network/network.py
Running ./network.py:
1000+0 records in
1000+0 records out
131072000 bytes (131 MB) copied, 36.1804 seconds, 3.6 MB/s
connect: Connection timed out
connect: Connection timed out
using test server 10.1.1.9
Shutting down all interfaces...
    interface: eth1
 
Interface eth0 is down
waiting...
done
Bringing up interface eth0
Checking...
Interface eth0 is RUNNING
Checking via ping to 10.1.1.9...
ip address: 10.1.1.61
tcp test:
test cycle 1 of 5
testing latency...
testing bandwidth to 10.1.1.9...
FAILED (0)
Restoring all interfaces...
Network test on device eth0 failed
...finished running ./network.py, exit code=1
recovered exit code=1
rhts-report-result /HTS/hts/network FAIL /var/log/hts/runs/2/network/output.log
saveOutput: /var/log/hts/runs/2/network/output.log
Return value was 0
Opening virtualization test results /var/hts/virt-results.xml
No virtualization test results were found (virt-results.xml)


Comment 13 Archana K. Raghavan 2007-01-25 23:21:15 UTC
Created attachment 146648 [details]
results from a new run - from IBM

Comment 16 Greg Nichols 2007-02-06 13:40:30 UTC
Closed per above comments.