Bug 1285295
| Summary: | [geo-rep]: Recommended Shared volume use on geo-replication is broken in latest build | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rahul Hinduja <rhinduja> |
| Component: | geo-replication | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | asrivast, avishwan, byarlaga, chrisw, csaba, khiremat, nlevinki, rcyriac, sankarshan |
| Target Milestone: | --- | Keywords: | Regression, ZStream |
| Target Release: | RHGS 3.1.2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.5-9 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1285488 (view as bug list) | Environment: | |
| Last Closed: | 2016-03-01 05:58:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1224928, 1260783, 1285488, 1287456 | | |
Description by Rahul Hinduja, 2015-11-25 11:25:22 UTC
Patch sent upstream: http://review.gluster.org/#/c/12752/

The workaround to proceed with testing is to comment out the following lines in /usr/libexec/master.py (see the fcntl note after the errata link below for background on why these close calls matter):

```python
471         # Close the previously acquired lock so that
472         # fd will not leak. Reset fd to None
473         if gconf.mgmt_lock_fd:
474             os.close(gconf.mgmt_lock_fd)
475             gconf.mgmt_lock_fd = None
476
477         # Save latest FD for future use
478         gconf.mgmt_lock_fd = fd

484         # When previously Active becomes Passive, Close the
485         # fd of previously acquired lock
486         if gconf.mgmt_lock_fd:
487             os.close(gconf.mgmt_lock_fd)
488             gconf.mgmt_lock_fd = None
```

Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/62769/

Verified with build: glusterfs-geo-replication-3.7.5-9.el7rhgs.x86_64. One brick from each subvolume becomes ACTIVE. Moving bug to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
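As background for the workaround above (commenting out the os.close() calls around the management lock fd), the sketch below illustrates a general POSIX fcntl/lockf pitfall: closing any file descriptor a process holds on a file releases all of that process's record locks on that file, even locks taken through a different descriptor. This is a minimal, self-contained illustration of that semantics, not the actual gsyncd/master.py code and not a confirmed root-cause analysis of this bug; the lock path used is hypothetical.

```python
import fcntl
import os

# Hypothetical lock file, standing in for a per-replica-set lock on a
# shared volume; this is NOT the gsyncd meta-volume lock path.
LOCK_PATH = "/tmp/demo-mgmt.lck"

# Acquire an exclusive, non-blocking record lock through one descriptor.
fd_old = os.open(LOCK_PATH, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.lockf(fd_old, fcntl.LOCK_EX | fcntl.LOCK_NB)

# Re-open and re-lock the same file through a new descriptor.  Within a
# single process fcntl record locks do not conflict, so this succeeds.
fd_new = os.open(LOCK_PATH, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.lockf(fd_new, fcntl.LOCK_EX | fcntl.LOCK_NB)

# Closing the old descriptor to "avoid an fd leak" also drops the lock
# nominally held via fd_new: POSIX releases all of the process's locks on
# the file once any descriptor for it is closed, so another process could
# now acquire the lock.
os.close(fd_old)
```

Whether this exact interaction is what broke the shared-volume based Active/Passive switching here is left to the linked upstream patch; the sketch only shows why descriptor lifetime around fcntl locks needs care.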