Bug 1128730

Summary: [RFE] Provide account id used for georeplication
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: geo-replication
Version: 2.1
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Keywords: FutureFeature
Reporter: Pablo Iranzo Gómez <pablo.iranzo>
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
QA Contact: storage-qa-internal <storage-qa-internal>
CC: aavati, avishwan, csaba, nlevinki, sauchter
Type: Bug
Doc Type: Enhancement
Last Closed: 2014-09-05 05:45:41 UTC

Description Pablo Iranzo Gómez 2014-08-11 12:34:17 UTC
Hi

The gluster volume geo-replication status command should show the full slave volume URL, which includes the secure account ID; otherwise you can only guess what the account is. For example:

I try to get the status of a geo-rep volume but forget the geoaccount part:

[root@host ~]# gluster v ge test-replica-vol host2::test-replica-vol-dr status detail
No active geo-replication sessions between test-replica-vol and host2::test-replica-vol-dr
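
(For context: the lookup above fails because the session is registered under the secure account, not the bare host URL. The session would have been created with the unprivileged account along these lines; this is the usual non-root setup command form, with the names from this report, so treat it as a sketch rather than the exact command used:)

[root@host ~]# gluster volume geo-replication test-replica-vol geoaccount@host2::test-replica-vol-dr create push-pem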

So I list all geo-replication volumes:

[root@host ~]# gluster v ge stat

MASTER NODE      MASTER VOL          MASTER BRICK                      SLAVE                                 STATUS              CHECKPOINT STATUS    CRAWL STATUS
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
host    test-replica-vol    /rhss/brick01/test-replica-vol    host2::test-replica-vol-dr    Config Corrupted    N/A                  N/A
host3    test-replica-vol    /rhss/brick01/test-replica-vol    host2::test-replica-vol-dr    Config Corrupted    N/A                  N/A

Now I put in the geoaccount and everything works:

[root@host ~]# gluster v ge test-replica-vol geoaccount@host2::test-replica-vol-dr status detail

MASTER NODE      MASTER VOL          MASTER BRICK                      SLAVE                                 STATUS     CHECKPOINT STATUS    CRAWL STATUS       FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
host    test-replica-vol    /rhss/brick01/test-replica-vol    host2::test-replica-vol-dr    Active     N/A                  Changelog Crawl    15             0                0                0                  0
host3    test-replica-vol    /rhss/brick01/test-replica-vol    host2::test-replica-vol-dr    Passive    N/A                  N/A                0              0                0                0                  0
[root@host ~]#

That is fine, but it only worked because I knew about the geoaccount. Not so good for anyone who is not aware of how the host was set up.
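
In the meantime, the only workaround I can see is to dig the account out of glusterd's on-disk state on a master node. A rough sketch; the assumption (not documented behaviour) is that the session's slave URL is persisted somewhere under the usual state directory:

[root@host ~]# grep -r "test-replica-vol-dr" /var/lib/glusterd/ 2>/dev/null

Matching entries should carry the full slave URL (something like geoaccount@host2::test-replica-vol-dr), which reveals the account, but this requires shell access to a master node and knowledge of the layout.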


Expected results:

[root@host ~]# gluster v ge stat

MASTER NODE    MASTER VOL          MASTER BRICK                      SLAVE                                    STATUS              CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
host           test-replica-vol    /rhss/brick01/test-replica-vol    geoaccount@host2::test-replica-vol-dr    Config Corrupted    N/A                  N/A
host3          test-replica-vol    /rhss/brick01/test-replica-vol    geoaccount@host2::test-replica-vol-dr    Config Corrupted    N/A                  N/A
[root@host ~]#

Comment 3 Aravinda VK 2014-09-05 05:45:41 UTC

*** This bug has been marked as a duplicate of bug 1104112 ***