Bug 1416024
| Summary: | Unable to take snapshot on a geo-replicated volume, even after stopping the session | |||
|---|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sweta Anandpara <sanandpa> | |
| Component: | geo-replication | Assignee: | Kotresh HR <khiremat> | |
| Status: | CLOSED ERRATA | QA Contact: | Rochelle <rallan> | |
| Severity: | medium | Docs Contact: | ||
| Priority: | unspecified | |||
| Version: | rhgs-3.2 | CC: | amukherj, asrivast, csaba, jgalvez, khiremat, rcyriac, rhs-bugs, storage-qa-internal | |
| Target Milestone: | --- | |||
| Target Release: | RHGS 3.3.0 | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | glusterfs-3.8.4-25 | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1443977 1445591 | Environment: | |
| Last Closed: | 2017-09-21 04:30:55 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1417147, 1443977, 1445209, 1445213 | |||
|
Description
Sweta Anandpara
2017-01-24 11:50:20 UTC
Sosreport at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1416024/
[qe@rhsqe-repo 1416024]$ hostname
rhsqe-repo.lab.eng.blr.redhat.com
[qe@rhsqe-repo 1416024]$ pwd
/home/repo/sosreports/1416024
[qe@rhsqe-repo 1416024]$ ll
total 205720
-rwxr-xr-x. 1 qe qe 65690780 Jan 24 17:57 sosreport-dhcp47-26.lab.eng.blr.redhat.com-20170124171546.tar.xz
-rwxr-xr-x. 1 qe qe 46635984 Jan 24 17:57 sosreport-dhcp47-27.lab.eng.blr.redhat.com-20170124171556.tar.xz
-rwxr-xr-x. 1 qe qe 51462404 Jan 24 17:57 sosreport-dhcp47-60.lab.eng.blr.redhat.com-20170124171526.tar.xz
-rwxr-xr-x. 1 qe qe 46860840 Jan 24 17:57 sosreport-dhcp47-61.lab.eng.blr.redhat.com-20170124171538.tar.xz
WORKAROUND:
1. Stop geo-replication:
gluster vol geo-rep <mastervol> <user@slavehost>::<slavevol> stop
2. Restart glusterd service
3. Start geo-replication:
gluster vol geo-rep <mastervol> <user@slavehost>::<slavevol> start
Now take a snapshot:
1. Pause/Stop geo-replication
2. Take snapshot
3. Resume/start geo-replication
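The workaround and snapshot steps above can be sketched as a small script. This is a minimal, hedged sketch, not part of the bug report: the session details (volume `master`, slave `root@10.70.43.52::slave`) and the snapshot name `snap1` are illustrative, and `GLUSTER` defaults to a dry-run `echo` so the script only prints the commands on a machine without gluster; set `GLUSTER=gluster` to execute them for real.

```shell
#!/bin/sh
# Sketch of the workaround from this bug.
# GLUSTER defaults to "echo gluster" (dry-run: prints each command).
GLUSTER="${GLUSTER:-echo gluster}"
MASTERVOL="${MASTERVOL:-master}"
SLAVE="${SLAVE:-root@10.70.43.52::slave}"

# 1. Stop the geo-replication session.
$GLUSTER volume geo-replication "$MASTERVOL" "$SLAVE" stop
# 2. Restart glusterd (real run only).
if [ "$GLUSTER" = "gluster" ]; then
    systemctl restart glusterd
fi
# 3. Start the session again.
$GLUSTER volume geo-replication "$MASTERVOL" "$SLAVE" start
# Snapshot window: pause the session, take the snapshot, then resume.
$GLUSTER volume geo-replication "$MASTERVOL" "$SLAVE" pause
$GLUSTER snapshot create snap1 "$MASTERVOL"
$GLUSTER volume geo-replication "$MASTERVOL" "$SLAVE" resume
```

Stopping (instead of pausing) the session before the snapshot works equally well, as the verification transcripts below show; pause/resume just keeps the session configured.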
Upstream Patch: https://review.gluster.org/#/c/17093/
Downstream Patch: https://code.engineering.redhat.com/gerrit/#/c/104414/
Validated with build: glusterfs-geo-replication-3.8.4-25.el7rhgs.x86_64
With stop and restart of glusterd
==================================
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.160 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:53:14
10.70.41.160 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:53:15
10.70.41.155 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.233 Passive N/A N/A
10.70.41.155 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.233 Active Changelog Crawl 2017-05-24 11:53:03
10.70.41.156 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.150 Passive N/A N/A
10.70.41.156 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.150 Passive N/A N/A
[root@dhcp41-160 ~]# service glusterd restart
Redirecting to /bin/systemctl restart glusterd.service
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave stop
Stopping geo-replication session between master & 10.70.43.52::slave has been successful
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------
10.70.41.160 master /rhs/brick1/b1 root 10.70.43.52::slave N/A Stopped N/A N/A
10.70.41.160 master /rhs/brick2/b1 root 10.70.43.52::slave N/A Stopped N/A N/A
10.70.41.155 master /rhs/brick1/b1 root 10.70.43.52::slave N/A Stopped N/A N/A
10.70.41.155 master /rhs/brick2/b1 root 10.70.43.52::slave N/A Stopped N/A N/A
10.70.41.156 master /rhs/brick1/b1 root 10.70.43.52::slave N/A Stopped N/A N/A
10.70.41.156 master /rhs/brick2/b1 root 10.70.43.52::slave N/A Stopped N/A N/A
[root@dhcp41-160 ~]# gluster snapshot create SNAP1 master
snapshot create: success: Snap SNAP1_GMT-2017.05.24-11.56.44 created successfully
[root@dhcp41-160 ~]# gluster snapshot list
SNAP1_GMT-2017.05.24-11.56.44
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave start
Starting geo-replication session between master & 10.70.43.52::slave has been successful
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.160 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:54:59
10.70.41.160 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:54:45
10.70.41.156 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.150 Passive N/A N/A
10.70.41.156 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.150 Active Changelog Crawl 2017-05-24 11:54:48
10.70.41.155 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.233 Passive N/A N/A
10.70.41.155 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.233 Passive N/A N/A
With pause and restart of glusterd
===================================
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.160 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:54:59
10.70.41.160 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:54:45
10.70.41.156 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.150 Passive N/A N/A
10.70.41.156 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.150 Active Changelog Crawl 2017-05-24 11:54:48
10.70.41.155 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.233 Passive N/A N/A
10.70.41.155 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.233 Passive N/A N/A
[root@dhcp41-160 ~]# service glusterd restart
Redirecting to /bin/systemctl restart glusterd.service
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave pause
Pausing geo-replication session between master & 10.70.43.52::slave has been successful
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------
10.70.41.160 master /rhs/brick1/b1 root 10.70.43.52::slave N/A Paused N/A N/A
10.70.41.160 master /rhs/brick2/b1 root 10.70.43.52::slave N/A Paused N/A N/A
10.70.41.156 master /rhs/brick1/b1 root 10.70.43.52::slave N/A Paused N/A N/A
10.70.41.156 master /rhs/brick2/b1 root 10.70.43.52::slave N/A Paused N/A N/A
10.70.41.155 master /rhs/brick1/b1 root 10.70.43.52::slave N/A Paused N/A N/A
10.70.41.155 master /rhs/brick2/b1 root 10.70.43.52::slave N/A Paused N/A N/A
[root@dhcp41-160 ~]# gluster snapshot create SNAP2 master
snapshot create: success: Snap SNAP2_GMT-2017.05.24-11.59.29 created successfully
[root@dhcp41-160 ~]# gluster snapshot list
SNAP1_GMT-2017.05.24-11.56.44
SNAP2_GMT-2017.05.24-11.59.29
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave resume
Resuming geo-replication session between master & 10.70.43.52::slave has been successful
[root@dhcp41-160 ~]# gluster volume geo-replication master 10.70.43.52::slave status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.160 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:54:59
10.70.41.160 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.52 Active Changelog Crawl 2017-05-24 11:58:30
10.70.41.156 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.150 Passive N/A N/A
10.70.41.156 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.150 Active Changelog Crawl 2017-05-24 11:58:33
10.70.41.155 master /rhs/brick1/b1 root 10.70.43.52::slave 10.70.43.233 Passive N/A N/A
10.70.41.155 master /rhs/brick2/b1 root 10.70.43.52::slave 10.70.43.233 Passive N/A N/A
Basic validation is done. Moving this bug to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2774