Bug 1806499 - afr-lock-heal-basic.t and afr-lock-heal-advanced.t fail when brick mux is enabled
Summary: afr-lock-heal-basic.t and afr-lock-heal-advanced.t fail when brick mux is enabled
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: tests
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-02-24 11:32 UTC by Ravishankar N
Modified: 2020-03-12 14:24 UTC
CC: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-12 14:24:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:




Links
System ID: Gluster.org Gerrit 24160
Private: 0
Priority: None
Status: Open
Summary: tests: fix afr-lock-heal-* failure
Last Updated: 2020-02-24 11:36:50 UTC

Description Ravishankar N 2020-02-24 11:32:08 UTC
Description of problem:
There seems to be a problem with the statedump code when brick mux is enabled: it dumps the ACTIVE locks held by each inode multiple times in the statedump file. Because of that, the .t fails, since the actual and expected lock counts do not match. While that needs to be fixed, I am currently filtering out the duplicates in the .t.
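
The workaround amounts to de-duplicating the dumped lock lines before counting them. A minimal sketch of that idea as a bash helper for the .t; the function name, the ACTIVE grep pattern and the argument handling are illustrative assumptions here, not the actual patch:

# Hypothetical helper for the .t: count ACTIVE locks in a brick statedump,
# collapsing the duplicate entries that brick mux produces for the same lock.
count_active_locks () {
        local statedump="$1"
        # sort -u drops verbatim duplicate lock lines before counting
        grep "ACTIVE" "$statedump" | sort -u | wc -l
}

In the test framework this would be checked in the usual EXPECT style, e.g. EXPECT "3" count_active_locks "$statedump_file", assuming the statedump file path has already been resolved.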

Comment 1 Worker Ant 2020-02-24 11:36:52 UTC
REVIEW: https://review.gluster.org/24160 (tests: fix afr-lock-heal-* failure) posted (#2) for review on master by Ravishankar N

Comment 2 Worker Ant 2020-03-12 14:24:38 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/1042 and will be tracked there from now on. Visit the GitHub issue URL for further details.

