Bug 1286587
Summary: | [geo-rep]: Attaching tier breaks the existing geo-rep session | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rahul Hinduja <rhinduja> |
Component: | geo-replication | Assignee: | Saravanakumar <sarumuga> |
Status: | CLOSED NOTABUG | QA Contact: | Rahul Hinduja <rhinduja> |
Severity: | urgent | Docs Contact: | |
Priority: | high | ||
Version: | rhgs-3.1 | CC: | asriram, asrivast, avishwan, chrisw, csaba, lbailey, nchilaka, nlevinki, rcyriac, sankarshan, sarumuga |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | --- | Flags: | sarumuga: needinfo+
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
When geo-replication is in use alongside tiering, bricks attached as part of a tier are incorrectly set to passive. If geo-replication is subsequently restarted, these bricks can become faulty.
Workaround:
Stop the geo-replication session before attaching or detaching bricks that are part of a tier.
To attach a tier:
1. Stop geo-replication:
# gluster volume geo-replication master_vol slave_host::slave_vol stop
2. Attach the tier:
# gluster volume attach-tier master_vol replica 2 <server1>:/path/to/brick1 <server2>:/path/to/brick2 [force]
3. Restart geo-replication:
# gluster volume geo-replication master_vol slave_host::slave_vol start
4. Verify that the tier's bricks are available in the geo-replication session:
# gluster volume geo-replication master_vol slave_host::slave_vol status
To detach a tier:
1. Detach the tier:
# gluster volume detach-tier master_vol start
2. Set a checkpoint to ensure that all data in the tier is synced to the slave:
# gluster volume geo-replication master_vol slave_host::slave_vol config checkpoint now
3. Monitor the checkpoint until the displayed status is 'checkpoint as of <time of checkpoint creation> is completed at <completion time>':
# gluster volume geo-replication master_vol slave_host::slave_vol status detail
4. Verify that detachment is complete:
# gluster volume detach-tier master_vol status
5. Stop geo-replication:
# gluster volume geo-replication master_vol slave_host::slave_vol stop
6. Commit tier detachment:
# gluster volume detach-tier master_vol commit
7. Verify tier is detached:
# gluster volume info master_vol
8. Restart geo-replication:
# gluster volume geo-replication master_vol slave_host::slave_vol start
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2016-05-10 03:55:29 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1268895, 1299184 |
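The attach procedure documented in the Doc Text above can be scripted end to end. Below is a minimal shell sketch; the names used (master_vol, slave.example.com::slave_vol, and the server1/server2 brick paths) are placeholders for illustration, not values taken from this bug.

#!/bin/bash
# Sketch of the documented attach-tier workaround: stop geo-replication,
# attach the hot tier, then restart geo-replication so that the workers
# recognize the new hot-tier bricks. All names are placeholders.
set -e

MASTER_VOL=master_vol
SLAVE=slave.example.com::slave_vol

# 1. Stop the geo-replication session.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" stop

# 2. Attach the tier (a replica 2 hot tier in this example).
gluster volume attach-tier "$MASTER_VOL" replica 2 \
    server1:/rhgs/hot/brick1 server2:/rhgs/hot/brick2

# 3. Restart geo-replication.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" start

# 4. Verify that the hot-tier bricks appear in the session.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" status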
Description Rahul Hinduja 2015-11-30 09:46:31 UTC
I think documentation changes are required for Attach Tier, since a Geo-rep worker behaves differently depending on whether it is the worker for a cold brick or for a hot brick. If Geo-rep is not restarted after attaching a tier, the already-started workers will not know which bricks are hot or cold. Stop Geo-replication before attaching a tier.

Changes look fine. As per comment 10, we need to stop Geo-replication before attaching a tier; the documentation is available at https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Attach_Volumes.html#idp8297696. Moving this bug to ON_QA.

When we attach a tier, Gluster rearranges the brick details in the volume info to show the hot tier bricks first. Because of this, Geo-replication will not work as expected when a tier is attached while Geo-rep is running, so we need to stop Geo-replication before attach-tier. (The same is available in the documentation: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Data_Tiering-Attach_Volumes.html#idp8297696.) Please open a new RFE to support attach-tier while Geo-rep is running.

Closing this bug as NOTABUG as discussed; please reopen if this requires a fix. Thanks.
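For reference, the detach procedure from the Doc Text can be scripted in the same way. This is a sketch under the same placeholder assumptions; in particular, the grep for the checkpoint completion message is an assumption about the status output and may need adjusting to match the exact wording printed by your release.

#!/bin/bash
# Sketch of the documented detach-tier sequence. Names are placeholders,
# and the checkpoint-completion check is a best-effort text match rather
# than an exact parse of the CLI output.
set -e

MASTER_VOL=master_vol
SLAVE=slave.example.com::slave_vol

# 1. Start detaching the tier.
gluster volume detach-tier "$MASTER_VOL" start

# 2. Set a checkpoint so completion of syncing to the slave can be detected.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" config checkpoint now

# 3. Poll until the checkpoint is reported as completed (assumed message text).
until gluster volume geo-replication "$MASTER_VOL" "$SLAVE" status detail |
        grep -q "is completed"; do
    sleep 60
done

# 4. Verify that the detach operation is complete before committing.
gluster volume detach-tier "$MASTER_VOL" status

# 5. Stop geo-replication before committing the detach.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" stop

# 6. Commit the detach and verify that the tier is gone.
gluster volume detach-tier "$MASTER_VOL" commit
gluster volume info "$MASTER_VOL"

# 7. Restart geo-replication.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" start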