Bug 999300 - quota-build: cthon special test fails
Summary: quota-build: cthon special test fails
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: krishnan parthasarathi
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-21 06:18 UTC by Saurabh
Modified: 2016-01-19 06:15 UTC (History)
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-10-22 10:08:44 UTC
Embargoed:


Attachments

Description Saurabh 2013-08-21 06:18:30 UTC
Description of problem:

The cthon special test fails on this quota build, even though quota is not enabled on the new volume.

On the general downstream build, the same test passes.

[root@rhsauto032 ~]# gluster volume info dist-rep3
 
Volume Name: dist-rep3
Type: Distributed-Replicate
Volume ID: 6aaeda5c-b6f6-42c2-8003-b4035f62085b
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d1r1-3
Brick2: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d1r2-3
Brick3: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d2r1-3
Brick4: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d2r2-3
Brick5: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d3r1-3
Brick6: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d3r2-3
Brick7: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d4r1-3
Brick8: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d4r2-3
Brick9: rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d5r1-3
Brick10: rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d5r2-3
Brick11: rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d6r1-3
Brick12: rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d6r2-3
[root@rhsauto032 ~]# 
[root@rhsauto032 ~]# 
[root@rhsauto032 ~]# gluster volume status dist-rep3
Status of volume: dist-rep3
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d1r
1-3                                                     49167   Y       11530
Brick rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d1r
2-3                                                     49170   Y       31979
Brick rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d2r
1-3                                                     49170   Y       13829
Brick rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d2r
2-3                                                     49170   Y       13832
Brick rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d3r
1-3                                                     49168   Y       11541
Brick rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d3r
2-3                                                     49171   Y       31990
Brick rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d4r
1-3                                                     49171   Y       13840
Brick rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d4r
2-3                                                     49171   Y       13843
Brick rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d5r
1-3                                                     49169   Y       11552
Brick rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d5r
2-3                                                     49172   Y       32001
Brick rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d6r
1-3                                                     49172   Y       13851
Brick rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d6r
2-3                                                     49172   Y       13854
NFS Server on localhost                                 2049    Y       11564
Self-heal Daemon on localhost                           N/A     Y       11574
NFS Server on rhsauto035.lab.eng.blr.redhat.com         2049    Y       13866
Self-heal Daemon on rhsauto035.lab.eng.blr.redhat.com   N/A     Y       13881
NFS Server on rhsauto033.lab.eng.blr.redhat.com         2049    Y       32013
Self-heal Daemon on rhsauto033.lab.eng.blr.redhat.com   N/A     Y       32021
NFS Server on rhsauto034.lab.eng.blr.redhat.com         2049    Y       13863
Self-heal Daemon on rhsauto034.lab.eng.blr.redhat.com   N/A     Y       13871
 
There are no active volume tasks


Version-Release number of selected component (if applicable):
glusterfs-server-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-fuse-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-3.4.0.20rhsquota1-1.el6.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume and start it.
2. Execute the cthon special category of tests against the volume over NFS.
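The steps above can be sketched with the gluster CLI; this is a hedged reconstruction using the hostnames and brick paths from the "gluster volume info" output below, not the exact commands the reporter ran. These are cluster-configuration commands, so they only run on a prepared RHS/Gluster setup.

```shell
# Step 1 (sketch): create and start a 6x2 distributed-replicate volume
# matching the report's layout. Adjust hostnames/brick paths for your cluster.
gluster volume create dist-rep3 replica 2 \
  rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d1r1-3 \
  rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d1r2-3 \
  rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d2r1-3 \
  rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d2r2-3 \
  rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d3r1-3 \
  rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d3r2-3 \
  rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d4r1-3 \
  rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d4r2-3 \
  rhsauto032.lab.eng.blr.redhat.com:/rhs/bricks/d5r1-3 \
  rhsauto033.lab.eng.blr.redhat.com:/rhs/bricks/d5r2-3 \
  rhsauto034.lab.eng.blr.redhat.com:/rhs/bricks/d6r1-3 \
  rhsauto035.lab.eng.blr.redhat.com:/rhs/bricks/d6r2-3
gluster volume start dist-rep3

# Step 2 (sketch): from an NFS client with the cthon04 suite checked out,
# run the special tests via the suite's server script (command taken from
# the "Actual results" section).
./server -s -p dist-rep3 -m /mnt/nfs-test rhsauto032.lab.eng.blr.redhat.com
```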

Actual results:
[root@rhsauto039 cthon04]# ./server -s -p dist-rep3 -m /mnt/nfs-test rhsauto032.lab.eng.blr.redhat.com
Start tests on path /mnt/nfs-test/rhsauto039.test [y/n]? y

sh ./runtests  -s  /mnt/nfs-test/rhsauto039.test

SPECIAL TESTS: directory /mnt/nfs-test/rhsauto039.test
cd /mnt/nfs-test/rhsauto039.test; rm -f runtests runtests.wrk READWIN.txt Makefile op_unlk op_ren op_chmod dupreq excltest negseek rename holey truncate nfsidem nstat stat stat2 touchn fstat rewind telldir bigfile bigfile2 freesp
cp runtests runtests.wrk READWIN.txt Makefile op_unlk op_ren op_chmod dupreq excltest negseek rename holey truncate nfsidem nstat stat stat2 touchn fstat rewind telldir bigfile bigfile2 freesp /mnt/nfs-test/rhsauto039.test

check for proper open/unlink operation
nfsjunk files before unlink:
  ls: cannot access .nfs*: No such file or directory
./nfsJ0fjj3 open; unlink ret = -1
 unlink: No such file or directory
special tests failed
Tests failed, leaving /mnt/nfs-test mounted


Expected results:
The whole cthon suite should pass.

Additional info:
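The failing check ("check for proper open/unlink operation") exercises basic POSIX semantics: a file unlinked while still open must stay readable through the open descriptor, which NFS clients emulate by silly-renaming the file to a .nfs* name until the last close. A minimal plain-shell sketch of that behavior (illustrative, not the cthon op_unlk test itself):

```shell
# Create a file, hold it open on fd 3, unlink it, then read it back
# through the still-open descriptor. On a correct NFS mount the rm
# produces a transient .nfs* file and the read still succeeds.
f=$(mktemp)
printf 'still here' > "$f"
exec 3< "$f"          # keep the file open on fd 3
rm "$f"               # unlink while open
data=$(cat <&3)       # read via the open descriptor: "still here"
exec 3<&-             # last close; NFS would reap the .nfs* file here
echo "$data"
```

In the report, the unlink itself returns -1 with "No such file or directory", i.e. the open file vanished out from under the client, which is why the special tests bail out.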

Comment 2 krishnan parthasarathi 2013-10-17 09:06:26 UTC
Saurabh,
Could you check if the cthon failures are seen with glusterfs-3.4.0.35rhs build?
I tried the tests on my local setup and couldn't recreate the issue.

Comment 3 Susant Kumar Palai 2013-10-22 06:56:59 UTC
Ran this test on glusterfs 3.4.0.35.1u2rhs and it passed.

