Bug 681149 - waitTime of fv_tests is not enough sometimes
Summary: waitTime of fv_tests is not enough sometimes
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Hardware Certification Program
Classification: Retired
Component: Test Suite (tests)
Version: 1.2
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Greg Nichols
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-03-01 09:59 UTC by chen yuwen
Modified: 2011-03-29 15:41 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-03-29 15:41:26 UTC


Attachments
fv_storage of kvm timeout (478.53 KB, application/octet-stream)
2011-03-08 02:21 UTC, chen yuwen

Description chen yuwen 2011-03-01 09:59:27 UTC
Description of problem:
The waitTime of fv_tests is sometimes not long enough. In particular, the fv_storage test under KVM virtualization often takes more than one hour. The domain is then destroyed on timeout and the image v7x86_64.img is damaged.

fv_storage test timed out and the guest was destroyed:
http://lab.rhts.englab.brq.redhat.com/beaker/logs/tasks/1266320///TESTOUT.log
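For context, the timeout behavior described above (the harness waits a fixed waitTime for the guest to finish, then destroys it) can be sketched as a polling loop. This is a minimal sketch, not the actual v7 code; `virsh_is_shut_off` and the injectable clock parameters are hypothetical helpers for illustration:

```python
import subprocess
import time


def wait_for_guest_shutdown(is_shut_off, wait_time, poll_interval=30,
                            now=time.monotonic, sleep=time.sleep):
    """Poll until the guest reports shut off, or until wait_time seconds pass.

    Returns True if the guest shut down in time, False on timeout -- the
    point at which a harness like fv_storage would destroy the domain.
    `now` and `sleep` are injectable so the loop can be tested without waiting.
    """
    deadline = now() + wait_time
    while now() < deadline:
        if is_shut_off():
            return True
        sleep(poll_interval)
    return False


def virsh_is_shut_off(domain):
    """Hypothetical state check using `virsh domstate <domain>`."""
    out = subprocess.run(["virsh", "domstate", domain],
                         capture_output=True, text=True).stdout
    return "shut off" in out
```

With a fixed one-hour waitTime, a guest whose storage test runs longer than the deadline is destroyed mid-write, which is consistent with the image damage reported here.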

Version-Release number of selected component (if applicable):
v7-1.3-10

How reproducible:
very often

Steps to Reproduce:
1. install v7 on a HVM system
2. run: v7 run -t fv_storage
3. 
  
Actual results:
FAIL

Expected results:
PASS

Additional info:

Comment 1 Greg Nichols 2011-03-07 17:52:54 UTC
The link above isn't working - could you attach the log to this bug?

Comment 2 chen yuwen 2011-03-08 02:21:50 UTC
Created attachment 482825
fv_storage of kvm timeout

Job link: https://beaker.engineering.redhat.com/jobs/57094

Comment 3 chen yuwen 2011-03-16 10:28:52 UTC
Job link: https://beaker.engineering.redhat.com/jobs/62617

Log of RHEL 6.1 fv_storage test timeout:
running fv_storage on 
installing test from /usr/share/v7/tests/fv_storage into /tmp/v7-fv_storage-ElzpOX
make[1]: Entering directory `/tmp/v7-fv_storage-ElzpOX'
chmod a+x ./runtest.sh ./fv_storage.py
make[1]: Leaving directory `/tmp/v7-fv_storage-ElzpOX'
Test Parameters: OUTPUTFILE=/var/log/v7/runs/1/fv_storage/output.log DEVICE= TESTSERVER=gnichols.usersys.redhat.com RUNMODE=auto DEBUG=off UDI= 
/proc/cpuinfo has vmx flag
Testing kvm virtualization
Starting libvirtd daemon: 
libvirtd is running
Verified that guest v7x86_64 is not running
Verified: /var/lib/libvirt/images/v7data.img
Verified: /var/lib/libvirt/images/v7x86_64.img
Verified: /etc/libvirt/qemu/v7x86_64.xml
Guest files verified
Unmounting /mnt
Using loopback device /dev/loop0 for guest data image
Warning: "kpartx -av /dev/loop0" has output on stderr
/dev/mapper/loop0p1: mknod for loop0p1 failed: File exists
Mounted /dev/mapper/loop0p1 on /mnt
v7 image: host: v7-1.2-25, guest: v7-1.2-25  build:  2011-01-07 14:58:24
Unmounting /mnt
Deleting loopback device /dev/loop0 configuration.
del devmap : loop0p1
Submitted tests: v7 run  --test storage --server gnichols.usersys.redhat.com
Domain v7x86_64 started

FV Guest started ...
fv_storage test is running, it takes some time(less than 60 minutes), please be patient ...
You may use "virsh console v7x86_64" to monitor the testing progress

MARK-LWD-LOOP -- 2011-03-16 00:02:36 --

MARK-LWD-LOOP -- 2011-03-16 00:07:36 --

MARK-LWD-LOOP -- 2011-03-16 00:17:36 --

MARK-LWD-LOOP -- 2011-03-16 00:27:36 --

MARK-LWD-LOOP -- 2011-03-16 00:37:36 --

MARK-LWD-LOOP -- 2011-03-16 00:47:36 --

MARK-LWD-LOOP -- 2011-03-16 00:52:37 --

MARK-LWD-LOOP -- 2011-03-16 00:57:37 --

MARK-LWD-LOOP -- 2011-03-16 01:07:36 --
time out: destroying the guest.
Domain v7x86_64 destroyed

Comment 4 Greg Nichols 2011-03-29 15:41:26 UTC
BZ 690797 moves the wait time to /etc/v7.xml, where it can be adjusted for specific systems. Marking this NOTABUG.

