Description of problem:
The test tests/basic/quota-anon-fd-nfs.t should check whether the NFS export is available after starting the volume, before proceeding with the rest of the test. Otherwise this test can report spurious failures when the mount fails. One such example can be found here:
https://build.gluster.org/job/centos6-regression/3286/consoleFull
There could be other reasons for the spurious failure, but from the logs this one seems most likely.

Version-Release number of selected component (if applicable):
3.10 and master

Additional info:
Paste of messages from the failure in the link above

<------------------->
22:47:41 [06:47:41] Running tests in file ./tests/basic/quota-anon-fd-nfs.t
22:47:46 No volumes present
22:47:58 mount.nfs: mounting slave32.cloud.gluster.org:/patchy failed, reason given by server: No such file or directory
22:48:00 umount2: Invalid argument
22:48:00 umount: /mnt/nfs/0: not mounted
22:48:01 umount2: Invalid argument
22:48:01 umount: /mnt/nfs/0: not mounted
22:48:02 umount2: Invalid argument
22:48:02 umount: /mnt/nfs/0: not mounted
22:48:03 umount2: Invalid argument
22:48:03 umount: /mnt/nfs/0: not mounted
22:48:04 umount2: Invalid argument
22:48:04 umount: /mnt/nfs/0: not mounted
22:48:10 ./tests/basic/quota-anon-fd-nfs.t ..
22:48:10 1..41
22:48:10 ok 1, LINENUM:15
22:48:10 ok 2, LINENUM:16
22:48:10 ok 3, LINENUM:17
22:48:10 ok 4, LINENUM:19
22:48:10 ok 5, LINENUM:20
22:48:10 ok 6, LINENUM:21
22:48:10 ok 7, LINENUM:42
22:48:10 ok 8, LINENUM:43
22:48:10 ok 9, LINENUM:45
22:48:10 ok 10, LINENUM:46
22:48:10 ok 11, LINENUM:48
22:48:10 ok 12, LINENUM:49
22:48:10 ok 13, LINENUM:50
22:48:10 ok 14, LINENUM:51
22:48:10 not ok 15 , LINENUM:53
22:48:10 FAILED COMMAND: mount_nfs slave32.cloud.gluster.org:/patchy /mnt/nfs/0 noac,soft,nolock,vers=3
<---------------------->
<---------------------->
22:48:10 ./tests/basic/quota-anon-fd-nfs.t: bad status 1
22:48:10
22:48:10 *********************************
22:48:10 *       REGRESSION FAILED       *
22:48:10 * Retrying failed tests in case *
22:48:10 * we got some spurous failures  *
22:48:10 *********************************
22:48:10
22:48:15 No volumes present
22:48:28 mount.nfs: requested NFS version or transport protocol is not supported
22:48:30 umount2: Invalid argument
22:48:30 umount: /mnt/nfs/0: not mounted
22:48:31 umount2: Invalid argument
22:48:31 umount: /mnt/nfs/0: not mounted
22:48:32 umount2: Invalid argument
22:48:32 umount: /mnt/nfs/0: not mounted
22:48:33 umount2: Invalid argument
22:48:33 umount: /mnt/nfs/0: not mounted
22:48:34 umount2: Invalid argument
22:48:34 umount: /mnt/nfs/0: not mounted
22:48:40 ./tests/basic/quota-anon-fd-nfs.t ..
22:48:40 1..41
22:48:40 ok 1, LINENUM:15
22:48:40 ok 2, LINENUM:16
22:48:40 ok 3, LINENUM:17
22:48:40 ok 4, LINENUM:19
22:48:40 ok 5, LINENUM:20
22:48:40 ok 6, LINENUM:21
22:48:40 ok 7, LINENUM:42
22:48:40 ok 8, LINENUM:43
22:48:40 ok 9, LINENUM:45
22:48:40 ok 10, LINENUM:46
22:48:40 ok 11, LINENUM:48
22:48:40 ok 12, LINENUM:49
22:48:40 ok 13, LINENUM:50
22:48:40 ok 14, LINENUM:51
22:48:40 not ok 15 , LINENUM:53
22:48:40 FAILED COMMAND: mount_nfs slave32.cloud.gluster.org:/patchy /mnt/nfs/0 noac,soft,nolock,vers=3
22:48:40 ok 16, LINENUM:55
<---------------------->
REVIEW: https://review.gluster.org/16701 (tests: Added check for NFS export availability to quota-anon-fd-nfs.t) posted (#1) for review on master by Shyamsundar Ranganathan (srangana)
COMMIT: https://review.gluster.org/16701 committed in master by Shyamsundar Ranganathan (srangana)
------
commit 7224adeb20e1db8d3582f8a68f725686fa9beb5b
Author: Shyam <srangana>
Date:   Tue Feb 21 10:51:27 2017 -0500

    tests: Added check for NFS export availability to quota-anon-fd-nfs.t

    Change-Id: I15a9441267c18bb1073d14db325c98fa497f2fb7
    BUG: 1425515
    Signed-off-by: Shyam <srangana>
    Reviewed-on: https://review.gluster.org/16701
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: sanoj-unnikrishnan <sunnikri>
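Per the commit subject, the fix adds a check for NFS export availability before the mount step. The following is only a hedged sketch of what such a check looks like, not the exact committed patch: the EXPECT_WITHIN / is_nfs_export_available lines and the variable names ($NFS_EXPORT_TIMEOUT, $H0, $V0, $N0) are assumptions based on common gluster test-harness conventions, and wait_for is a hypothetical generic helper of the same shape.

```shell
# Sketch only (assumed helper names, not the committed patch). In a .t test
# one would typically wait for the export before mounting, e.g.:
#
#   EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available
#   TEST mount_nfs $H0:/$V0 $N0 noac,soft,nolock,vers=3
#
# A self-contained polling helper illustrating the same retry pattern:
wait_for() {
    # Retry the given command once per second until it succeeds
    # or until $1 seconds have elapsed; return its final status.
    timeout=$1
    shift
    while [ "$timeout" -gt 0 ]; do
        "$@" && return 0
        sleep 1
        timeout=$((timeout - 1))
    done
    return 1
}

# Hypothetical usage: wait up to 30s for the server to export the volume.
# (showmount is part of nfs-utils; server/volume names are placeholders.)
# wait_for 30 sh -c "showmount -e server | grep -q patchy"
```

Polling with a timeout, rather than mounting immediately after volume start, avoids the race seen in the logs above where mount.nfs runs before the gluster NFS server has registered the export.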
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/