Bug 1217766 - Spurious failures in tests/bugs/distribute/bug-1122443.t
Summary: Spurious failures in tests/bugs/distribute/bug-1122443.t
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-01 14:47 UTC by Pranith Kumar K
Modified: 2016-06-16 12:57 UTC
CC: 3 users

Fixed In Version: glusterfs-3.8rc2
Clone Of:
Environment:
Last Closed: 2016-06-16 12:57:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pranith Kumar K 2015-05-01 14:47:19 UTC
Description of problem:
I sometimes see the following failure when running this test:
ok 8
not ok 9 Got "in" instead of "completed"
FAILED COMMAND: completed remove_brick_status_completed_field patchy pranithk-laptop:/d/backends/patchy0
volume remove-brick commit: failed: use 'force' option as migration is in progress
not ok 10
FAILED COMMAND: gluster --mode=script --wignore volume remove-brick patchy pranithk-laptop:/d/backends/patchy0 commit
ok 11
ok 12

This happens because the rebalance does not always complete within the test's hard-coded 10-second timeout.
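
For context, the failing check is an EXPECT_WITHIN call with a literal 10-second timeout; the eventual fix (see comment 6) replaces the literal with the framework's REBALANCE_TIMEOUT. A minimal sketch of the change, with the volume/brick variable names assumed rather than copied from the .t file:

    # Before: a hard-coded timeout races with a slow rebalance
    EXPECT_WITHIN 10 "completed" remove_brick_status_completed_field \
            $V0 "$H0:$B0/${V0}0"

    # After: use the framework-wide rebalance timeout instead
    EXPECT_WITHIN $REBALANCE_TIMEOUT "completed" remove_brick_status_completed_field \
            $V0 "$H0:$B0/${V0}0"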


Comment 1 Anand Avati 2015-05-01 14:48:21 UTC
REVIEW: http://review.gluster.org/10487 (tests: Use REBALANCE_TIMEOUT in EXPECT_WITHIN) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2015-05-02 02:27:29 UTC
REVIEW: http://review.gluster.org/10487 (tests: Fix spurious failures) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 3 Anand Avati 2015-05-02 08:39:34 UTC
REVIEW: http://review.gluster.org/10491 (glupy-test: Add logfile for glupy) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
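
The glupy change referenced above adds a dedicated log file to the glupy test. The patch itself is not quoted in this bug; as a hedged illustration only, a .t test that mounts a volfile directly can pass the glusterfs client's -l/--log-file option so that failures leave diagnostics behind (the volfile and log paths below are made up for this sketch):

    # Hypothetical sketch: mount the glupy volfile with an explicit log file.
    # -f/--volfile and -l/--log-file are standard glusterfs client options;
    # the paths are illustrative, not taken from the patch.
    TEST glusterfs -f $B0/glupytest.vol -l $B0/glupy.log $M0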

Comment 4 Anand Avati 2015-05-03 11:41:17 UTC
REVIEW: http://review.gluster.org/10487 (tests: Fix spurious failures) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 5 Anand Avati 2015-05-09 20:53:20 UTC
REVIEW: http://review.gluster.org/10584 (tests: Fix spurious failures) posted (#2) for review on release-3.7 by Niels de Vos (ndevos)

Comment 6 Pranith Kumar K 2015-05-16 06:49:09 UTC
commit 7c4d103700f0bbe0c5e134f743a68f370e5600be
Author: Pranith Kumar K <pkarampu>
Date:   Fri May 1 20:12:50 2015 +0530

    tests: Fix spurious failures
    
    - Use REBALANCE_TIMEOUT in EXPECT_WITHIN
    - Use fdatasync to prevent write-behind from giving success
    - Add logfile to glupy
    
    Change-Id: I51ab51644aaa4aa9d49f185e7b8959bb58be966b
    BUG: 1217766
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/10487
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Gluster Build System <jenkins.com>

This fix has already been merged.
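
On the second bullet of the commit message: a plain write() can return success while the write-behind translator still holds the data in its cache, so a test that immediately inspects on-disk or migration state can race. One common way to force the data out before checking is GNU dd's conv=fdatasync, which calls fdatasync(2) before dd reports success. A hedged sketch of the technique (the file name and sizes are illustrative; the patch's exact change is not quoted in this bug):

    # Make the write durable before the test inspects rebalance state;
    # conv=fdatasync forces fdatasync(2) on the output file, so
    # write-behind cannot report success for still-cached data.
    TEST dd if=/dev/zero of=$M0/datafile bs=1M count=1 conv=fdatasync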

Comment 7 Raghavendra Talur 2016-03-08 20:22:47 UTC
"tests" component is for tests framework only.
File a bug under test component if you find a bug in 
1. any of the *.rc files under tests/ 
2. run-tests.sh


For everything else, the bug should be filed on
1. component which is being tested by .t file if the .t file requires fix.
2. component which is causing a valid .t file to fail in regression.

I have used my best judgement here to move the bug to right component.
In case of ambiguity, I have placed the blame on the .t file component.

Please consider test bugs under the same backlog list that tracks other bugs in your component.

Comment 8 Niels de Vos 2016-06-16 12:57:38 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

