Bug 1369384
| Field | Value |
|---|---|
| Summary | [geo-replication]: geo-rep Status is not showing bricks from one of the nodes |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | geo-replication |
| Version | rhgs-3.1 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | unspecified |
| Reporter | Rahul Hinduja <rhinduja> |
| Assignee | Aravinda VK <avishwan> |
| QA Contact | Rahul Hinduja <rhinduja> |
| CC | amukherj, asrivast, csaba, pousley, rhs-bugs, storage-qa-internal |
| Target Milestone | --- |
| Target Release | RHGS 3.2.0 |
| Fixed In Version | glusterfs-3.8.4-1 |
| Doc Type | If docs needed, set a value |
| Clones | 1373741 |
| Bug Blocks | 1351528, 1373741, 1374630, 1374631, 1374632 |
| Type | Bug |
| Last Closed | 2017-03-23 05:45:36 UTC |
Description (Rahul Hinduja, 2016-08-23 09:20:22 UTC)
Upstream patch sent to fix the gsyncdstatus.py traceback: http://review.gluster.org/15416

Upstream mainline : http://review.gluster.org/15416
Upstream 3.8      : http://review.gluster.org/15448
Downstream patch  : https://code.engineering.redhat.com/gerrit/#/c/85005

Verified with the build: glusterfs-geo-replication-3.8.4-14.el6rhs.x86_64

Since the issue is not reproducible, the scenario was simulated by moving monitor.pid out of the session directory. The status is now correctly reported as Stopped, instead of the bricks not being shown at all as in the original report.

    [root@rhel6-1 ~]# ls /var/lib/glusterd/geo-replication/master_10.70.37.56_slave/monitor.pid
    /var/lib/glusterd/geo-replication/master_10.70.37.56_slave/monitor.pid
    [root@rhel6-1 ~]#
    [root@rhel6-1 ~]# mv /var/lib/glusterd/geo-replication/master_10.70.37.56_slave/monitor.pid /var/lib/glusterd/geo-replication/
    [root@rhel6-1 ~]# gluster volume geo-replication master 10.70.37.56::slave status

    MASTER NODE     MASTER VOL    MASTER BRICK                     SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
    --------------------------------------------------------------------------------------------------------------------------------------------------------------------
    10.70.37.94     master        /bricks/brick0/master_brick0     root          10.70.37.56::slave    N/A             Stopped    N/A                N/A
    10.70.37.94     master        /bricks/brick1/master_brick4     root          10.70.37.56::slave    N/A             Stopped    N/A                N/A
    10.70.37.94     master        /bricks/brick2/master_brick8     root          10.70.37.56::slave    N/A             Stopped    N/A                N/A
    10.70.37.157    master        /bricks/brick0/master_brick1     root          10.70.37.56::slave    10.70.37.205    Active     Changelog Crawl    2017-02-19 14:59:11
    10.70.37.157    master        /bricks/brick1/master_brick5     root          10.70.37.56::slave    10.70.37.205    Active     Changelog Crawl    2017-02-19 14:59:22
    10.70.37.157    master        /bricks/brick2/master_brick9     root          10.70.37.56::slave    10.70.37.205    Active     Changelog Crawl    2017-02-19 14:59:22
    10.70.37.41     master        /bricks/brick0/master_brick3     root          10.70.37.56::slave    10.70.37.200    Active     Changelog Crawl    2017-02-19 14:59:08
    10.70.37.41     master        /bricks/brick1/master_brick7     root          10.70.37.56::slave    10.70.37.200    Active     Changelog Crawl    2017-02-19 14:59:18
    10.70.37.41     master        /bricks/brick2/master_brick11    root          10.70.37.56::slave    10.70.37.200    Active     Changelog Crawl    2017-02-19 14:59:14
    10.70.37.199    master        /bricks/brick0/master_brick2     root          10.70.37.56::slave    10.70.37.63     Passive    N/A                N/A
    10.70.37.199    master        /bricks/brick1/master_brick6     root          10.70.37.56::slave    10.70.37.63     Passive    N/A                N/A
    10.70.37.199    master        /bricks/brick2/master_brick10    root          10.70.37.56::slave    10.70.37.63     Passive    N/A                N/A
    [root@rhel6-1 ~]#

Querying the brick whose monitor.pid was moved confirms that its worker_status is reported as Stopped:

    [root@rhel6-1 ~]# python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py -c /var/lib/glusterd/geo-replication/master_10.70.37.56_slave/gsyncd.conf --status-get :master 10.70.37.56::slave --path /bricks/brick0/master_brick0
    checkpoint_time: N/A
    last_synced_utc: N/A
    checkpoint_completion_time_utc: N/A
    checkpoint_completed: N/A
    meta: N/A
    entry: N/A
    slave_node: N/A
    data: N/A
    worker_status: Stopped
    checkpoint_completion_time: N/A
    checkpoint_completed_time: N/A
    last_synced: N/A
    checkpoint_time_utc: N/A
    failures: N/A
    crawl_status: N/A
    [root@rhel6-1 ~]#

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
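The actual fix lives in gsyncdstatus.py (patches linked above) and is not reproduced here. The following is only a minimal illustrative sketch, assuming a per-session directory under /var/lib/glusterd/geo-replication/ that contains monitor.pid, of the behaviour verified above: when the monitor pid file is missing or stale, the brick should still be reported, with the worker status falling back to "Stopped", rather than the brick disappearing from the status output. The helper names (monitor_running, brick_status) and the default field values are assumptions for illustration, not the real gsyncd API.

```python
# Illustrative sketch only; NOT the actual gsyncdstatus.py code from the fix.
# Assumption: a geo-rep session directory such as
# /var/lib/glusterd/geo-replication/master_10.70.37.56_slave/ holding monitor.pid.
import errno
import os

# Assumed defaults reported for a brick whose monitor is not running
DEFAULT_STATUS = {
    "worker_status": "Stopped",
    "slave_node": "N/A",
    "crawl_status": "N/A",
    "last_synced": "N/A",
}


def monitor_running(session_dir):
    """Best-effort check whether the geo-rep monitor is alive, via monitor.pid."""
    pid_file = os.path.join(session_dir, "monitor.pid")
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (IOError, OSError, ValueError):
        # monitor.pid missing, unreadable or empty: treat the monitor as stopped
        return False
    if pid <= 0:
        return False
    try:
        os.kill(pid, 0)                  # signal 0: existence check only
        return True
    except OSError as e:
        return e.errno == errno.EPERM    # process exists but owned by another user


def brick_status(session_dir):
    """Return a status dict for one brick; never drop the brick entirely."""
    status = dict(DEFAULT_STATUS)
    if not monitor_running(session_dir):
        # Before the fix this case effectively produced no row in the CLI
        # output; after it, the brick is listed with STATUS "Stopped".
        return status
    # With a live monitor, the real code would merge in the per-worker status
    # file contents here (Active/Passive, crawl status, last synced time, ...).
    return status


if __name__ == "__main__":
    print(brick_status(
        "/var/lib/glusterd/geo-replication/master_10.70.37.56_slave"))
```

This mirrors the verified behaviour above: after monitor.pid is moved aside, the three bricks on 10.70.37.94 are still listed, with STATUS shown as Stopped.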
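The --status-get output shown above is plain "key: value" text. A small hypothetical helper (not part of gsyncd) such as the following could be used by a test or monitoring script to turn that output into a dict and assert on fields like worker_status:

```python
# Hypothetical helper, not part of gsyncd: parse the "key: value" lines printed
# by `gsyncd.py ... --status-get` (as shown above) into a dict.


def parse_status_get(output):
    """Turn --status-get output into a {key: value} dict."""
    status = {}
    for line in output.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue                    # skip anything that is not "key: value"
        status[key.strip()] = value.strip()
    return status


if __name__ == "__main__":
    # A few lines taken from the --status-get output shown in this report
    sample = "\n".join([
        "worker_status: Stopped",
        "crawl_status: N/A",
        "last_synced: N/A",
    ])
    parsed = parse_status_get(sample)
    assert parsed["worker_status"] == "Stopped"
    print(parsed)
```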