Bug 987980
| Summary: | Dist-geo-rep: after remove-brick commit from a machine having multiple bricks, the change_detector becomes xsync | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Vijaykumar Koppad <vkoppad> |
| Component: | geo-replication | Assignee: | Aravinda VK <avishwan> |
| Status: | CLOSED ERRATA | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.1 | CC: | aavati, annair, avishwan, csaba, david.macdonald, rhinduja, rhs-bugs, vagarwal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.1.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.0-2.el6rhs | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-07-29 04:28:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1026831 | | |
| Bug Blocks: | 1202842, 1223636 | | |
Description
Vijaykumar Koppad
2013-07-24 13:47:27 UTC
This still happens in build glusterfs-3.4.0.44rhs-1, and in addition all the passive gsyncd workers crashed with the following traceback:

```
[2013-11-20 12:35:35.899013] I [master(/bricks/brick6):426:crawlwrap] _GMaster: crawl interval: 60 seconds
[2013-11-20 12:35:35.905201] E [syncdutils(/bricks/brick6):207:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 540, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1156, in service_loop
    g1.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 457, in crawlwrap
    self.slave.server.set_stime(self.FLAT_DIR_HIERARCHY, self.uuid, cluster_stime)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1143, in <lambda>
    slave.server.set_stime = types.MethodType(lambda _self, path, uuid, mark: brickserver.set_stime(path, uuid + '.' + gconf.slave_id, mark), slave.server)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 299, in ff
    return f(*a)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 484, in set_stime
    Xattr.lsetxattr(path, '.'.join([cls.GX_NSPACE, uuid, 'stime']), struct.pack('!II', *mark))
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 66, in lsetxattr
    cls.raise_oserr()
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 25, in raise_oserr
    raise OSError(errn, os.strerror(errn))
OSError: [Errno 2] No such file or directory
```

This still happens in build glusterfs-3.4.0.59rhs-1, although the gsyncd crash no longer occurred.

Verified with build glusterfs-3.7.1-7.el6rhs.x86_64. With the new steps mentioned at comment 8, the geo-rep session needs to be stopped before the remove-brick commit (a sketch of this workflow appears at the end of this report). After the commit, restarting geo-rep correctly moves to the History crawl and then to the Changelog crawl. Moving this bug to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
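The fallback named in the summary can be observed from the session configuration. A minimal sketch, assuming a master volume `mastervol` and a slave `slavehost::slavevol` (both names are placeholders, not taken from this report):

```
# Query the change detection mechanism of the geo-rep session.
# "changelog" is the normal mode; this bug left workers stuck on
# "xsync" after a remove-brick commit on a node with multiple bricks.
gluster volume geo-replication mastervol slavehost::slavevol config change_detector
```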
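The verified procedure described above would look roughly as follows. This is a sketch only: volume, host, and brick names are placeholders, and the authoritative steps are the ones referenced at comment 8.

```
# Stop the geo-rep session before committing the brick removal,
# so no worker is crawling a brick that is about to disappear.
gluster volume geo-replication mastervol slavehost::slavevol stop

# Commit the pending remove-brick on the master volume.
gluster volume remove-brick mastervol server1:/bricks/brick6 commit

# Restart the session; the workers should resume via the History
# crawl and then switch to the Changelog crawl, not fall back to xsync.
gluster volume geo-replication mastervol slavehost::slavevol start
```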