Description of problem:
The POSIX test suite fails with multiple errors. The issue is seen with a glusterfs-nfs mount and a 2x2 (distributed-replicate) volume type.

[root@rhs-client36 brick1]# gluster volume status
Status of volume: vol0
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client21.lab.eng.blr.redhat.com:/rhs/brick1/d1r1  49154     0          Y       27614
Brick rhs-client36.lab.eng.blr.redhat.com:/rhs/brick1/d1r2  49154     0          Y       27411
Brick rhs-client21.lab.eng.blr.redhat.com:/rhs/brick1/d2r1  49155     0          Y       27631
Brick rhs-client36.lab.eng.blr.redhat.com:/rhs/brick1/d2r2  49155     0          Y       27428
NFS Server on localhost                                     2049      0          Y       27449
Self-heal Daemon on localhost                               N/A       N/A        Y       27455
NFS Server on 10.70.36.45                                   2049      0          Y       27652
Self-heal Daemon on 10.70.36.45                             N/A       N/A        Y       27658

Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks

Version-Release number of selected component (if applicable):
glusterfs-3.7.0beta1-0.14.git09bbd5c.el7.centos.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a gluster dist-rep volume and start it.
2. Mount the volume using glusterfs-nfs.
3.
Execute the POSIX test suite.

Actual results:
Test Summary Report
-------------------
/opt/qa/tools/pjd-fstest-20080816/tests/chmod/00.t   (Wstat: 0 Tests: 58 Failed: 13)
  Failed tests:  12-16, 33-36, 45-48
/opt/qa/tools/pjd-fstest-20080816/tests/chown/00.t   (Wstat: 0 Tests: 170 Failed: 26)
  Failed tests:  8-13, 103-107, 125-131, 146-149, 162-165
/opt/qa/tools/pjd-fstest-20080816/tests/link/00.t    (Wstat: 0 Tests: 80 Failed: 32)
  Failed tests:  28-42, 44-46, 48, 50, 61-67, 74-77, 79
/opt/qa/tools/pjd-fstest-20080816/tests/link/10.t    (Wstat: 0 Tests: 14 Failed: 2)
  Failed tests:  11-12
/opt/qa/tools/pjd-fstest-20080816/tests/mkdir/10.t   (Wstat: 0 Tests: 12 Failed: 3)
  Failed tests:  10-12
/opt/qa/tools/pjd-fstest-20080816/tests/mkfifo/00.t  (Wstat: 0 Tests: 36 Failed: 31)
  Failed tests:  2-16, 18-23, 25-27, 29-35
/opt/qa/tools/pjd-fstest-20080816/tests/mkfifo/02.t  (Wstat: 0 Tests: 3 Failed: 2)
  Failed tests:  1-2
/opt/qa/tools/pjd-fstest-20080816/tests/mkfifo/03.t  (Wstat: 0 Tests: 11 Failed: 2)
  Failed tests:  5-6
/opt/qa/tools/pjd-fstest-20080816/tests/mkfifo/05.t  (Wstat: 0 Tests: 12 Failed: 4)
  Failed tests:  4-5, 9-10
/opt/qa/tools/pjd-fstest-20080816/tests/mkfifo/06.t  (Wstat: 0 Tests: 12 Failed: 4)
  Failed tests:  4-5, 9-10
/opt/qa/tools/pjd-fstest-20080816/tests/mkfifo/09.t  (Wstat: 0 Tests: 12 Failed: 3)
  Failed tests:  10-12
/opt/qa/tools/pjd-fstest-20080816/tests/open/17.t    (Wstat: 0 Tests: 3 Failed: 3)
  Failed tests:  1-3
/opt/qa/tools/pjd-fstest-20080816/tests/open/22.t    (Wstat: 0 Tests: 12 Failed: 2)
  Failed tests:  7-8
/opt/qa/tools/pjd-fstest-20080816/tests/rename/00.t  (Wstat: 0 Tests: 79 Failed: 20)
  Failed tests:  22-24, 26-31, 33-35, 55-58, 71-74
/opt/qa/tools/pjd-fstest-20080816/tests/rename/09.t  (Wstat: 0 Tests: 56 Failed: 12)
  Failed tests:  23-25, 27-29, 31-33, 35-37
/opt/qa/tools/pjd-fstest-20080816/tests/rename/10.t  (Wstat: 0 Tests: 188 Failed: 29)
  Failed tests:  12, 27, 42, 83-85, 89-90, 95, 98-100, 104-105, 110, 113-115, 119-120, 125, 128-130, 132-133, 144, 159, 174
/opt/qa/tools/pjd-fstest-20080816/tests/rename/13.t  (Wstat: 0 Tests: 17 Failed: 11)
  Failed tests:  7-17
/opt/qa/tools/pjd-fstest-20080816/tests/rename/14.t  (Wstat: 0 Tests: 17 Failed: 4)
  Failed tests:  7-8, 10-11
/opt/qa/tools/pjd-fstest-20080816/tests/rename/20.t  (Wstat: 0 Tests: 16 Failed: 5)
  Failed tests:  9-11, 13, 16
/opt/qa/tools/pjd-fstest-20080816/tests/rmdir/01.t   (Wstat: 0 Tests: 14 Failed: 3)
  Failed tests:  12-14
/opt/qa/tools/pjd-fstest-20080816/tests/rmdir/06.t   (Wstat: 0 Tests: 20 Failed: 4)
  Failed tests:  17-20
/opt/qa/tools/pjd-fstest-20080816/tests/unlink/00.t  (Wstat: 0 Tests: 55 Failed: 16)
  Failed tests:  10-12, 19-23, 28-31, 39-42
/opt/qa/tools/pjd-fstest-20080816/tests/unlink/11.t  (Wstat: 0 Tests: 33 Failed: 9)
  Failed tests:  14-22
Files=184, Tests=1954, 90 wallclock secs ( 1.62 usr 0.52 sys + 13.50 cusr 14.64 csys = 30.28 CPU)
Result: FAIL

Expected results:
The POSIX test suite should not show these failures.

Additional info:
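For reference, the reproduction steps above can be sketched as the following shell session. The hostnames, brick paths, volume name, and pjd-fstest location are taken from this report; the mount point (/mnt/vol0), the NFS mount options, and the `prove` invocation are assumptions about the test setup, not confirmed details:

```shell
# Step 1: create and start a 2x2 distributed-replicate volume
# (bricks as shown in the volume status output above).
gluster volume create vol0 replica 2 \
    rhs-client21.lab.eng.blr.redhat.com:/rhs/brick1/d1r1 \
    rhs-client36.lab.eng.blr.redhat.com:/rhs/brick1/d1r2 \
    rhs-client21.lab.eng.blr.redhat.com:/rhs/brick1/d2r1 \
    rhs-client36.lab.eng.blr.redhat.com:/rhs/brick1/d2r2
gluster volume start vol0

# Step 2: mount the volume via the Gluster NFS server (NFSv3);
# the mount point is hypothetical.
mkdir -p /mnt/vol0
mount -t nfs -o vers=3 rhs-client36.lab.eng.blr.redhat.com:/vol0 /mnt/vol0

# Step 3: run the pjd-fstest POSIX test suite from the mount point.
# The prove invocation is an assumption, consistent with the
# "Test Summary Report" output format shown above.
cd /mnt/vol0
prove -r /opt/qa/tools/pjd-fstest-20080816/tests
```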
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please reopen it against the newer release.