Bug 1475724
Summary: | [RFE] Add geo-replication support in gdeploy | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sachidananda Urs <surs> |
Component: | gdeploy | Assignee: | Sachidananda Urs <surs> |
Status: | CLOSED ERRATA | QA Contact: | Mugdha Soni <musoni> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | rhgs-3.3 | CC: | amukherj, asriram, avishwan, khiremat, rcyriac, rhinduja, rhs-bugs, sanandpa, sheggodu, storage-qa-internal, vdas |
Target Milestone: | --- | Keywords: | Rebase |
Target Release: | RHGS 3.5.0 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | gdeploy-2.0.2-32 | Doc Type: | Enhancement |
Doc Text: |
Configuring geo-replication is a lengthy and error-prone process. gdeploy now supports geo-replication, automating the process and reducing the scope for error.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2019-10-30 12:19:10 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1707704, 1720992 | ||
Bug Blocks: | 1696803 |
Description
Sachidananda Urs
2017-07-27 08:39:41 UTC
*** Bug 1477726 has been marked as a duplicate of this bug. ***

Patches that fix this bug:
https://github.com/gluster/gdeploy/commit/31c856e1a9
https://github.com/gluster/gdeploy/commit/a1447c4304
https://github.com/gluster/gdeploy/commit/937ea4b540
https://github.com/gluster/gdeploy/commit/65a1f9b56e
https://github.com/gluster/gdeploy/commit/68c8dfbaf1
https://github.com/gluster/gdeploy/commit/66196718e4
https://github.com/gluster/gdeploy/commit/bb06bbb016
https://github.com/gluster/gdeploy/commit/c0eee35f75
https://github.com/gluster/gdeploy/commit/feb0755c35
https://github.com/gluster/gdeploy/commit/94c238ed08
https://github.com/gluster/gdeploy/commit/1a8dc42edd

*** Bug 1632245 has been marked as a duplicate of this bug. ***
*** Bug 1511491 has been marked as a duplicate of this bug. ***

Tested the following with gdeploy-2.0.2-34.el7rhgs.noarch and ansible-2.8.1-1.el7ae.noarch for a root session, and it works fine:
i. Create geo-replication sessions
ii. Delete geo-replication sessions
iii. Start/stop geo-replication sessions
iv. Pause/resume geo-replication sessions

Once the non-root case is fixed, the bug will be moved to verified.
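For reference, the sessions exercised above can be described declaratively in a gdeploy configuration file. The sketch below is illustrative only: the section and key names follow the upstream gdeploy geo-replication examples, the exact keys may differ between gdeploy versions, and all host names, volume names, and the geo-rep user are placeholders.

```ini
# Hypothetical gdeploy configuration sketch for creating and starting a
# geo-replication session. Section/key names follow the upstream gdeploy
# examples; host names, volume names, and the user are placeholders.
[hosts]
master-node.example.com

[geo-replication]
action=create
georepuser=geoaccount
mastervol=master-node.example.com:mastervol
slavevol=slave-node.example.com:slavevol
force=yes
```

Subsequent lifecycle operations (start, stop, pause, resume, delete) would be driven the same way by changing the `action` key.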
Tested with the following:
1. glusterfs-server-6.0-9.el7rhgs
2. gdeploy-2.0.2-34.el7rhgs.noarch
3. ansible-2.8.2-1.el7ae.noarch

The following scenarios were tested for a non-root session:

1. Created a non-root session and started it:

MASTER NODE                        MASTER VOL  MASTER BRICK        SLAVE USER  SLAVE                    SLAVE NODE                         STATUS   CRAWL STATUS     LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp35-188.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  dhcp35-201.lab.eng.blr.redhat.com  Passive  N/A              N/A
dhcp35-117.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  dhcp35-26.lab.eng.blr.redhat.com   Active   Changelog Crawl  2019-07-24 14:35:44
dhcp35-93.lab.eng.blr.redhat.com   master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  dhcp35-18.lab.eng.blr.redhat.com   Passive  N/A              N/A

2. Paused the geo-rep session:

[root@dhcp35-188 ~]# gluster v geo-replication master testgeorep.35.26::slave status
MASTER NODE                        MASTER VOL  MASTER BRICK        SLAVE USER  SLAVE                    SLAVE NODE  STATUS  CRAWL STATUS  LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
dhcp35-188.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  N/A         Paused  N/A           N/A
dhcp35-93.lab.eng.blr.redhat.com   master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  N/A         Paused  N/A           N/A
dhcp35-117.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  N/A         Paused  N/A           N/A

3. Resumed the session (same columns as above):

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp35-188.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  dhcp35-201.lab.eng.blr.redhat.com  Active   Changelog Crawl  2019-07-24 14:35:55
dhcp35-93.lab.eng.blr.redhat.com   master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  dhcp35-18.lab.eng.blr.redhat.com   Passive  N/A              N/A
dhcp35-117.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  dhcp35-26.lab.eng.blr.redhat.com   Passive  N/A              N/A

4. Stopped the session:

[root@dhcp35-188 ~]# gluster v geo-replication master testgeorep.35.26::slave status
MASTER NODE                        MASTER VOL  MASTER BRICK        SLAVE USER  SLAVE                    SLAVE NODE  STATUS   CRAWL STATUS  LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp35-188.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  N/A         Stopped  N/A           N/A
dhcp35-117.lab.eng.blr.redhat.com  master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  N/A         Stopped  N/A           N/A
dhcp35-93.lab.eng.blr.redhat.com   master      /mnt/brick2/master  testgeorep  testgeorep.35.26::slave  N/A         Stopped  N/A           N/A

5. Deleted the session:

[root@dhcp35-188 ~]# gluster v geo-replication master testgeorep.35.26::slave status
No active geo-replication sessions between master and testgeorep.35.26::slave
geo-replication command failed

On the basis of the above output, moving the bug to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3250
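The lifecycle verified above maps onto a sequence of standard gluster geo-replication CLI commands. The sketch below mirrors the session names from the log (`master` volume, `testgeorep.35.26::slave` session); it is not runnable standalone, since it assumes a live trusted storage pool and an already prepared non-root slave user.

```
# Sketch of the geo-replication lifecycle verified above, run on the
# master node as root. Volume and slave names are taken from the log.

# Check session status
gluster volume geo-replication master testgeorep.35.26::slave status

# Pause and resume the session
gluster volume geo-replication master testgeorep.35.26::slave pause
gluster volume geo-replication master testgeorep.35.26::slave resume

# Stop and delete the session
gluster volume geo-replication master testgeorep.35.26::slave stop
gluster volume geo-replication master testgeorep.35.26::slave delete
```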