Bug 1732126 - docs required for migrating existing single to multisite config [NEEDINFO]
Summary: docs required for migrating existing single to multisite config
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 4.1
Assignee: ceph-docs@redhat.com
QA Contact: Tejas
URL:
Whiteboard:
Depends On:
Blocks: 1727980
 
Reported: 2019-07-22 18:24 UTC by Tim Wilkinson
Modified: 2020-02-19 02:24 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
vumrao: needinfo? (asriram)



Description Tim Wilkinson 2019-07-22 18:24:52 UTC
Description of problem:
----------------------
The existing docs for configuring multisite ceph assume a new ceph cluster on both ends but there are added steps to take if the master site contains active data intended for syncing to a secondary site.

Of the existing BZs on configuring multisite, the majority reference section 3.6.1 [1], but only bz 1647295 also points to the ceph-ansible method [2], which was the only procedure that worked well for us. Even that, however, assumed it would be creating new pools on the master site to then sync to a secondary site. We had to perform specific commands to let ceph-ansible know that the existing default pools were to be used for syncing. Specifically, we had to delete the 'site1' zone (including the site1.* pools) along with its zonegroup and realm. We then followed the instructions in the 'migrate from a single site' doc [3] to turn our default zone and zonegroup into the master zone/zonegroup for the multisite configuration. This preserved all our data in the existing default.* pools, and the sync of that data started once the secondary site was configured.
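The cleanup described above would look roughly like the following. The report only names the 'site1' zone; the zonegroup and realm names below are placeholders, so verify the actual names with `radosgw-admin zonegroup list` and `radosgw-admin realm list` before deleting anything:

```shell
# Sketch of the cleanup: remove the zone/zonegroup/realm that the playbook
# created, so the existing default zone can be promoted instead.
# 'site1-zg' and 'site1-realm' are assumed names -- check the actual ones first.
radosgw-admin zone delete --rgw-zone=site1
radosgw-admin zonegroup delete --rgw-zonegroup=site1-zg
radosgw-admin realm delete --rgw-realm=site1-realm

# Remove the (empty) pools the playbook created for the unwanted zone.
for pool in $(ceph osd pool ls | grep '^site1\.'); do
    ceph osd pool delete "$pool" "$pool" --yes-i-really-really-mean-it
done
```

Note that pool deletion may also require `mon_allow_pool_delete` to be enabled on the monitors.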


[1]  https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/installation_guide_for_red_hat_enterprise_linux/#configuring-a-multisite-ceph-object-gateway-install

[2]  https://github.com/ceph/ceph-ansible/blob/master/README-MULTISITE.md

[3]  http://docs.ceph.com/docs/master/radosgw/multisite/#migrating-a-single-site-system-to-multi-site



Component Version-Release:
-------------------------
7.6 (Maipo)   3.10.0-957.el7.x86_64
ceph-base.x86_64   2:12.2.8-128.el7cp



How reproducible:
----------------
consistent



Steps to Reproduce:
------------------
Apply ceph-ansible multisite playbook to master site. See that new pools are created as opposed to using existing pools.



Actual results:
--------------
Newly created pools are empty, so the secondary site configuration starts a sync that completes almost immediately, since there is nothing to replicate.



Expected results:
----------------
ceph-ansible should provide a way to specify whether to use existing cluster pools or create new ones.
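For reference, ceph-ansible's multisite support is driven entirely by group_vars; a sketch of the master-site settings, per the ceph-ansible 3.x README-MULTISITE [2], is below. Variable names differ between ceph-ansible versions and the values here are illustrative:

```yaml
# group_vars/all.yml on the master (site1) cluster -- illustrative values only.
rgw_multisite: true
rgw_zone: site1
rgw_zonemaster: true
rgw_zonesecondary: false
rgw_multisite_proto: http
rgw_zonegroup: solarsystem        # the point of this bz: there is no switch
rgw_realm: milkyway               # here to tell ceph-ansible to reuse the
rgw_zone_user: zone.user          # existing default zone and pools instead
system_access_key: <access-key>   # of creating new ones
system_secret_key: <secret-key>
```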

Comment 1 Vikhyat Umrao 2019-07-29 19:26:16 UTC
Tim - I think this doc https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/object_gateway_guide_for_red_hat_enterprise_linux/index#migrating-a-single-site-system-to-multi-site-rgw covers converting already existing clusters to multisite clusters?

Comment 2 Giridhar Ramaraju 2019-08-05 13:09:51 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate. 

Regards,
Giri

Comment 3 Giridhar Ramaraju 2019-08-05 13:11:02 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate. 

Regards,
Giri

Comment 4 Tim Wilkinson 2019-08-13 18:29:57 UTC
(In reply to Vikhyat Umrao from comment #1)
> Tim - I think this doc
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-
> single/object_gateway_guide_for_red_hat_enterprise_linux/index#migrating-a-
> single-site-system-to-multi-site-rgw covers converting already existing
> clusters to multisite clusters?

<soapbox>
I think the purpose of this bz was for customers who find themselves in a position similar to ours: an existing cluster (with data to preserve and replicate) to which we wanted to add a secondary site via the multisite playbook. Executing the playbook on the existing site and then on a secondary site did not accomplish that goal. It leaves the user needing to know how to undo what the playbook attempted, and THEN apply the instructions in the aforementioned docs to convert their existing cluster into what multisite will accept as site1, PRIOR to running the ms playbook on both. Somewhere up front, the multisite docs should distinguish between configuring multisite on two new clusters (starting fresh) and applying multisite to an existing cluster that is to be replicated to a second. In the latter case, it should be clear that the steps outlined in the migrating-a-single-site-system-to-multi-site-rgw doc must be applied before attempting the ms playbook.
</soapbox>
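For context, the conversion that the migration doc [3] describes is, in outline, the following; the realm, zonegroup, and zone names and the endpoint URL are placeholders to be replaced with site-specific values:

```shell
# Outline of migrating a single-site system to multi-site [3]: create a realm,
# rename the default zonegroup/zone, and mark them master. Names and the
# endpoint below are placeholders.
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
radosgw-admin zone rename --rgw-zone default --zone-new-name us-east \
    --rgw-zonegroup=us
radosgw-admin zonegroup modify --rgw-realm=myrealm --rgw-zonegroup=us \
    --endpoints http://rgw1:80 --master --default
radosgw-admin zone modify --rgw-realm=myrealm --rgw-zonegroup=us \
    --rgw-zone=us-east --endpoints http://rgw1:80 --master --default
radosgw-admin user create --uid=zone.user --display-name="Zone User" --system
radosgw-admin zone modify --rgw-zone=us-east --access-key=<key> --secret=<secret>
radosgw-admin period update --commit
# ...then restart the rgw daemon(s) on the site.
```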

Comment 5 Vikhyat Umrao 2019-08-13 18:53:32 UTC
(In reply to Tim Wilkinson from comment #4)
> (In reply to Vikhyat Umrao from comment #1)
> > Tim - I think this doc
> > https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-
> > single/object_gateway_guide_for_red_hat_enterprise_linux/index#migrating-a-
> > single-site-system-to-multi-site-rgw covers converting already existing
> > clusters to multisite clusters?
> 
> <soapbox>
> I think the purpose of this bz was for customers who find themselves in a
> similar position where we had an existing cluster (with data to preserve and
> replicate) to which we wanted to apply a secondary site via the multisite
> playbook. Executing the playbook on the existing site and then on a
> secondary site did not accomplish that goal. It leaves the user in a
> situation where they must know how to undo what the playbook attempted and
> THEN apply the instructions in the aforementioned docs to convert their
> existing cluster into what multisite will accept as site1 PRIOR to running
> the ms playbook on both. Somewhere up front in the multisite docs it should
> distinguish whether the customer wishes to configure multisite on two new
> clusters and start fresh, or apply multisite to an existing cluster and have
> it replicated to a second so in that case it's clear they have to apply the
> steps outlined in the migrating-a-single-site-system-to-multi-site-rgw doc
> before they attempt to apply the ms playbook.
> </soapbox>

Thanks, Tim, for the feedback. I think this doc covers the manual steps to convert an already existing single site to a multi-site configuration. If I understand correctly, you are looking for documentation on converting a single site to multisite with the help of the ceph-ansible playbook. That feature is not available, and hence it is not documented. Can you please open a Ceph Ansible feature request for converting a single site to multisite?

Comment 6 Tim Wilkinson 2019-08-14 14:57:47 UTC
That RFE already exists in bz 1739199. The only takeaway for me is that the multisite doc should distinguish between configuring multisite on two new clusters and applying multisite to an existing cluster. Feel free to close.

Comment 7 Vikhyat Umrao 2019-08-14 15:22:23 UTC
(In reply to Tim Wilkinson from comment #6)
> That RFE already exists in bz 1739199. The only takeaway for me is that the
> multisite doc should determine whether the customer wishes to configure
> multisite on two new clusters or apply multisite to an existing cluster.
> Feel free to close.

Thanks, Tim. I agree with this part. I will have the doc team mention in the multisite section that both clusters have to be new, and also consolidate the multisite doc [1] into one place, namely the Object Gateway guide [2].

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/installation_guide_for_red_hat_enterprise_linux/#configuring-a-multisite-ceph-object-gateway-install

[2] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/object_gateway_guide_for_red_hat_enterprise_linux/index#rgw-multisite-rgw

