Bug 1132496 - Test execution results in test failures and a system hang
Summary: Test execution results in test failures and a system hang
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: tests
Version: 3.5.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-21 12:53 UTC by Kiran
Modified: 2016-06-17 15:57 UTC
CC: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-06-17 15:57:47 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments: none

Description Kiran 2014-08-21 12:53:05 UTC
Description of problem:

Running the following tests results in failures:
tests/bugs/bug-1004744.t
tests/bugs/bug-824753.t
tests/bugs/bug-963541.t
tests/features/glupy.t

The following test case hangs the system:
tests/bugs/bug-860663.t

Version-Release number of selected component (if applicable):
gluster v3.5.2

How reproducible:
Always

Steps to Reproduce:
1. Install the GlusterFS v3.5.2 RPMs on CentOS 6.4
2. Clone the GlusterFS source from GitHub and check out the v3.5.2 tag
3. Run ./run-tests.sh (a command sketch follows below)
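Roughly the commands involved (the package names and repository URL are my reconstruction of the steps, not copied from a terminal):

# on CentOS 6.4
yum install -y glusterfs-3.5.2 glusterfs-server-3.5.2 glusterfs-fuse-3.5.2
git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git checkout v3.5.2
./run-tests.sh      # runs the whole tests/ suite under prove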

Actual results:
Test Summary Report
-------------------
./tests/bugs/bug-824753.t                       (Wstat: 0 Tests: 16 Failed: 1)
  Failed test:  11
./tests/bugs/bug-963541.t                       (Wstat: 0 Tests: 13 Failed: 3)
  Failed tests:  8-9, 13
./tests/features/glupy.t                        (Wstat: 0 Tests: 6 Failed: 2)
  Failed tests:  2, 6


Expected results:
All test cases should pass.

Additional info:
For bug-1004744.t, I changed the line using EXPECT_WITHIN 20 to EXPECT_WITHIN 30 and the test case passed.
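As I understand the test framework (tests/include.rc), EXPECT_WITHIN retries a check until it returns the expected value or the timeout in seconds expires, so raising 20 to 30 simply allows more time on a slower machine. A sketch of that kind of edit; the check function, expected value, and arguments below are placeholders, not the actual line from bug-1004744.t:

# before: give the check up to 20 seconds to report the expected value
EXPECT_WITHIN 20 "Started" some_status_check $V0
# after: allow up to 30 seconds
EXPECT_WITHIN 30 "Started" some_status_check $V0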

glupy.t is failing because of the vol-glupy volume in the volfile the test writes out. After commenting out the vol-glupy section, with only vol-posix the test case passes.
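For illustration, this is roughly what that change looks like in the generated /d/backends/glupytest.vol (a sketch only; the option names and the Python module used by the real test may differ):

volume vol-posix
    type storage/posix
    option directory /d/backends/glupytest
end-volume

# commented out so the mount uses only vol-posix
# volume vol-glupy
#     type features/glupy
#     option module-name <python-module>   # placeholder
#     subvolumes vol-posix
# end-volume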

Result of test glupy.t execution

[root@fractal-0e6e glusterfs]# DEBUG=1 prove tests/features/glupy.t
tests/features/glupy.t .. =========================
TEST 1 (line 7): mkdir -p /d/backends/glupytest
tests/features/glupy.t .. 1/6 RESULT 1: 0
=========================
TEST 2 (line 21): glusterfs -f /d/backends/glupytest.vol /mnt/glusterfs/0
RESULT 2: 1
=========================
TEST 3 (line 23): touch /mnt/glusterfs/0/filename
RESULT 3: 0
=========================
TEST 4 (line 24): filename ls /mnt/glusterfs/0
RESULT 4: 0
=========================
TEST 5 (line 25): rm -f /mnt/glusterfs/0/filename
RESULT 5: 0
=========================
TEST 6 (line 27): umount -l /mnt/glusterfs/0
umount: /mnt/glusterfs/0: not mounted
RESULT 6: 1
tests/features/glupy.t .. Failed 2/6 subtests 

Test Summary Report
-------------------
tests/features/glupy.t (Wstat: 0 Tests: 6 Failed: 2)
  Failed tests:  2, 6
Files=1, Tests=6,  2 wallclock secs ( 0.09 usr  0.02 sys +  0.09 cusr  0.37 csys =  0.57 CPU)
Result: FAIL


Result of test bug-963541.t execution

[root@fractal-0e6e glusterfs]# DEBUG=1 prove tests/bugs/bug-963541.t
tests/bugs/bug-963541.t .. =========================
TEST 1 (line 7): glusterd
tests/bugs/bug-963541.t .. 1/13 RESULT 1: 0
=========================
TEST 2 (line 8): pidof glusterd
RESULT 2: 0
=========================
TEST 3 (line 10): gluster --mode=script volume create patchy fractal-0e6e:/d/backends/patchy1 fractal-0e6e:/d/backends/patchy2 fractal-0e6e:/d/backends/patchy3
tests/bugs/bug-963541.t .. 3/13 RESULT 3: 0
=========================
TEST 4 (line 11): gluster --mode=script volume start patchy
tests/bugs/bug-963541.t .. 4/13 RESULT 4: 0
=========================
TEST 5 (line 14): gluster --mode=script volume remove-brick patchy fractal-0e6e:/d/backends/patchy1 start
tests/bugs/bug-963541.t .. 5/13 RESULT 5: 0
=========================
TEST 6 (line 16): ! gluster --mode=script volume rebalance patchy start
RESULT 6: 0
=========================
TEST 7 (line 17): ! gluster --mode=script volume remove-brick patchy fractal-0e6e:/d/backends/patchy2 start
RESULT 7: 0
=========================
TEST 8 (line 20): gluster --mode=script volume remove-brick patchy fractal-0e6e:/d/backends/patchy1 commit
volume remove-brick commit: failed: use 'force' option as migration is in progress
RESULT 8: 1
=========================
TEST 9 (line 24): gluster --mode=script volume rebalance patchy start
volume rebalance: patchy: failed: A remove-brick task on volume patchy is not yet committed. Either commit or stop the remove-brick task.
RESULT 9: 1
=========================
TEST 10 (line 25): gluster --mode=script volume rebalance patchy stop
tests/bugs/bug-963541.t .. 10/13 RESULT 10: 0
=========================
TEST 11 (line 27): gluster --mode=script volume remove-brick patchy fractal-0e6e:/d/backends/patchy2 start
tests/bugs/bug-963541.t .. 11/13 RESULT 11: 0
=========================
TEST 12 (line 28): gluster --mode=script volume remove-brick patchy fractal-0e6e:/d/backends/patchy2 stop
tests/bugs/bug-963541.t .. 12/13 RESULT 12: 0
=========================
TEST 13 (line 30): gluster --mode=script volume stop patchy
volume stop: patchy: failed: rebalance session is in progress for the volume 'patchy'
RESULT 13: 1
tests/bugs/bug-963541.t .. Failed 3/13 subtests 

Test Summary Report
-------------------
tests/bugs/bug-963541.t (Wstat: 0 Tests: 13 Failed: 3)
  Failed tests:  8-9, 13
Files=1, Tests=13, 17 wallclock secs ( 0.10 usr  0.02 sys +  0.26 cusr  0.50 csys =  0.88 CPU)
Result: FAIL



Result of test bug-824753.t execution

[root@fractal-0e6e glusterfs]# DEBUG=1 prove tests/bugs/bug-824753.t
tests/bugs/bug-824753.t .. =========================
TEST 1 (line 8): glusterd
tests/bugs/bug-824753.t .. 1/16 RESULT 1: 0
=========================
TEST 2 (line 9): pidof glusterd
RESULT 2: 0
=========================
TEST 3 (line 10): gluster --mode=script volume info
No volumes present
RESULT 3: 0
=========================
TEST 4 (line 12): gluster --mode=script volume create patchy replica 2 stripe 2 fractal-0e6e:/d/backends/patchy1 fractal-0e6e:/d/backends/patchy2 fractal-0e6e:/d/backends/patchy3 fractal-0e6e:/d/backends/patchy4 fractal-0e6e:/d/backends/patchy5 fractal-0e6e:/d/backends/patchy6 fractal-0e6e:/d/backends/patchy7 fractal-0e6e:/d/backends/patchy8
tests/bugs/bug-824753.t .. 4/16 RESULT 4: 0
=========================
TEST 5 (line 23): patchy volinfo_field patchy Volume Name
RESULT 5: 0
=========================
TEST 6 (line 24): Created volinfo_field patchy Status
RESULT 6: 0
=========================
TEST 7 (line 27): gluster --mode=script volume start patchy
tests/bugs/bug-824753.t .. 7/16 RESULT 7: 0
=========================
TEST 8 (line 28): Started volinfo_field patchy Status
RESULT 8: 0
=========================
TEST 9 (line 30): glusterfs -s fractal-0e6e --volfile-id=patchy /mnt/glusterfs/0
RESULT 9: 0
=========================
TEST 10 (line 33): gcc -g tests/bugs/bug-824753-file-locker.c -o tests/bugs/file-locker
tests/bugs/bug-824753.t .. 10/16 RESULT 10: 0
=========================
TEST 11 (line 35): tests/bugs/file-locker patchy fractal-0e6e /d/backends /mnt/glusterfs/0 file1
RESULT 11: 255
=========================
TEST 12 (line 38): rm -f tests/bugs/file-locker
RESULT 12: 0
=========================
TEST 13 (line 39): gluster --mode=script volume stop patchy
tests/bugs/bug-824753.t .. 13/16 RESULT 13: 0
=========================
TEST 14 (line 40): Stopped volinfo_field patchy Status
RESULT 14: 0
=========================
TEST 15 (line 42): gluster --mode=script volume delete patchy
RESULT 15: 0
=========================
TEST 16 (line 43): ! gluster --mode=script volume info patchy
RESULT 16: 0
tests/bugs/bug-824753.t .. Failed 1/16 subtests 

Test Summary Report
-------------------
tests/bugs/bug-824753.t (Wstat: 0 Tests: 16 Failed: 1)
  Failed test:  11
Files=1, Tests=16, 10 wallclock secs ( 0.09 usr  0.01 sys +  0.38 cusr  0.67 csys =  1.15 CPU)
Result: FAIL

Comment 1 Niels de Vos 2016-06-17 15:57:47 UTC
This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

