Bug 1337779

Summary: tests/bugs/write-behind/1279730.t fails spuriously
Product: [Community] GlusterFS
Reporter: Raghavendra G <rgowdapp>
Component: write-behind
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3.7.11
CC: bugs, sarumuga
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.12
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1337777
Environment:
Last Closed: 2016-06-28 12:18:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1337777, 1337780, 1337781, 1426046
Bug Blocks:

Description Raghavendra G 2016-05-20 05:06:32 UTC
+++ This bug was initially created as a clone of Bug #1337777 +++

Description of problem:

14:35:56 [21:35:56] Running tests in file ./tests/bugs/write-behind/bug-1279730.t
14:36:00 No volumes present
14:36:01 read should've failed as previous write would've failed with EDQUOT, but its successful
14:36:01 tar: Removing leading `/' from member names
14:36:03 ./tests/bugs/write-behind/bug-1279730.t .. 
14:36:03 1..15
14:36:03 ok 1, LINENUM:10
14:36:03 ok 2, LINENUM:11
14:36:03 ok 3, LINENUM:12
14:36:03 ok 4, LINENUM:14
14:36:03 ok 5, LINENUM:15
14:36:03 ok 6, LINENUM:16
14:36:03 ok 7, LINENUM:17
14:36:03 ok 8, LINENUM:18
14:36:03 ok 9, LINENUM:19
14:36:03 ok 10, LINENUM:21
14:36:03 ok 11, LINENUM:24
14:36:03 not ok 12 , LINENUM:26
14:36:03 FAILED COMMAND: ./tests/bugs/write-behind/bug-1279730 /mnt/glusterfs/0/file "gluster --mode=script --wignore volume quota patchy limit-usage / 1024"
14:36:03 ok 13, LINENUM:28
14:36:03 ok 14, LINENUM:30
14:36:03 ok 15, LINENUM:31
14:36:03 Failed 1/15 subtests 
14:36:03 
14:36:03 Test Summary Report
14:36:03 -------------------
14:36:03 ./tests/bugs/write-behind/bug-1279730.t (Wstat: 0 Tests: 15 Failed: 1)
14:36:03   Failed test:  12
14:36:03 Files=1, Tests=15,  7 wallclock secs ( 0.02 usr  0.00 sys +  1.18 cusr  0.39 csys =  1.59 CPU)
14:36:03 Result: FAIL
14:36:03 End of test ./tests/bugs/write-behind/bug-1279730.t
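For context, the failing subtest exercises write-behind's error-propagation semantics: a write that is acknowledged from the cache but later fails on flush (here with EDQUOT once the quota limit is hit) must cause subsequent operations on the same fd to fail, rather than letting a later read succeed. The following is a minimal conceptual sketch of that bookkeeping in Python; all names are illustrative and it is not GlusterFS code:

```python
import errno

class QuotaExceeded(OSError):
    """Stand-in for the EDQUOT the server returns once quota is hit."""
    def __init__(self):
        super().__init__(errno.EDQUOT, "Disk quota exceeded")

class WriteBehindFd:
    """Toy model of a write-behind fd: writes are acknowledged before
    they reach the backend; a flush failure is remembered and surfaced
    to the next operation on the same fd."""
    def __init__(self, backend_write):
        self.backend_write = backend_write
        self.buffer = []
        self.pending_error = None  # first error seen while flushing

    def write(self, data):
        # Write-behind: acknowledge immediately, flush later.
        self.buffer.append(data)

    def flush(self):
        while self.buffer:
            chunk = self.buffer.pop(0)
            try:
                self.backend_write(chunk)
            except OSError as e:
                self.pending_error = e  # keep the error for later fops
                self.buffer.clear()
                break

    def read(self):
        # Any fop after a failed flush must return the stored error,
        # not succeed silently -- the behaviour subtest 12 checks.
        if self.pending_error is not None:
            raise self.pending_error
        return b""

def quota_full_backend(chunk):
    raise QuotaExceeded()

fd = WriteBehindFd(quota_full_backend)
fd.write(b"x" * 4096)  # acked, although the backend would reject it
fd.flush()             # backend fails with EDQUOT; error is stored
try:
    fd.read()
    print("read succeeded (the buggy behaviour)")
except OSError as e:
    print("read failed with", errno.errorcode[e.errno])
# -> read failed with EDQUOT
```

The race reported here would be any window in which the read is serviced before the flush failure is recorded against the fd, so the stored-error check passes and the read spuriously succeeds.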

Version-Release number of selected component (if applicable):
glusterfs-3.7.11

How reproducible:
Intermittent; there appears to be a race condition causing this failure.

Steps to Reproduce:
1. Run ./tests/bugs/write-behind/bug-1279730.t repeatedly; the failure shows up only occasionally.

Actual results:
Subtest 12 (LINENUM:26) fails: the read succeeds even though the preceding write should have failed with EDQUOT.

Expected results:
All 15 subtests pass.

--- Additional comment from Vijay Bellur on 2016-05-20 01:05:52 EDT ---

REVIEW: http://review.gluster.org/14443 (tests/write-behind: move 1279730.t to BAD tests) posted (#1) for review on master by Raghavendra G (rgowdapp)

Comment 1 Vijay Bellur 2016-05-20 05:10:25 UTC
REVIEW: http://review.gluster.org/14444 (tests/write-behind: move 1279730.t to BAD tests) posted (#1) for review on release-3.7 by Raghavendra G (rgowdapp)

Comment 2 Vijay Bellur 2016-05-27 07:46:33 UTC
COMMIT: http://review.gluster.org/14444 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit c88c20f7932cd5b49db5a1ae963c683d4b465b4b
Author: Raghavendra G <rgowdapp>
Date:   Fri May 20 10:29:05 2016 +0530

    tests/write-behind: move 1279730.t to BAD tests
    
    There is a race condition which is causing the test to fail. For lack
    of bandwidth I am moving this test to BAD, though clearly there is
    some issue with codebase.
    
    BUG: 1337779
    Change-Id: If4f3eff8a5985f37a4dee65d2df29fa7b6bda7ae
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/14444
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>

Comment 3 Kaushal 2016-06-28 12:18:31 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user