Bug 1332260 - [Doc RFE] Add upgrade steps 1.3.2 to 2.0 for Ubuntu in Install Guide
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 2.0
Assignee: Aron Gunn
QA Contact: Tejas
URL:
Whiteboard:
Duplicates: 1332835
Depends On:
Blocks:
 
Reported: 2016-05-02 16:29 UTC by Anjana Suparna Sriram
Modified: 2016-09-30 17:22 UTC
CC: 6 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-30 17:22:02 UTC
Target Upstream Version:



Description Anjana Suparna Sriram 2016-05-02 16:29:55 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 4 Anjana Suparna Sriram 2016-05-04 08:16:15 UTC
*** Bug 1332835 has been marked as a duplicate of this bug. ***

Comment 5 Tejas 2016-05-04 10:25:16 UTC
The required version of Ubuntu is stated incorrectly in:

https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/version-2/installation-guide-for-ubuntu/#operating_system

"Red Hat Ceph Storage 2.0 and beyond requires Ubuntu 14.04"

The version we are qualifying is Ubuntu 16.04.

Comment 6 Ken Dreyer (Red Hat) 2016-05-04 14:35:00 UTC
(In reply to Tejas from comment #5)
> "Red Hat Ceph Storage 2.0 and beyond requires Ubuntu 14.04"
> 
> The version we are qualifying is Ubuntu 16.04

This needs to be cleared with Product Management, because RHCS 2.0 is going to ship for both Trusty (14.04) and Xenial (16.04).

Comment 7 Tejas 2016-06-01 11:16:43 UTC
Anjana,

I had a couple of questions about the install guide for Ceph 2.0 on Ubuntu:
https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/version-2/installation-guide-for-ubuntu/#enabling_ceph_repositories

" configure the Installer repository on your administration node in order to install ceph-deploy, then use ceph-deploy to configure all other RHCS repositories. "
We do not use ceph-deploy in Ceph 2.0. Please check this.

The guide also documents the ceph-deploy steps for installing MONs and OSDs. Please check this as well; this method is not supported in Ceph 2.0.

Comment 8 Harish NV Rao 2016-06-09 09:58:21 UTC
The preview links mentioned in comment 3 take us to the 1.3.2 docs. We need links to the 2.0 docs.

Please fix this asap.

Comment 10 Tejas 2016-06-10 06:04:16 UTC
Aron,

In the ubuntu install guide:
https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/2/installation-guide-for-ubuntu/

Section 2.2, Enabling GA Ceph Repositories, has references to "ceph-deploy".

Expected: Remove the references to ceph-deploy

Thanks,
Tejas

Comment 12 Tejas 2016-06-16 14:31:00 UTC
Hi,

I have done a successful upgrade from Ceph 1.3.2 to 2.0. I need a couple of clarifications:

1. The upgrade doc does not have any information on installing calamari-lite, which is supported in Ceph 2.0:
https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/2/installation-guide-for-red-hat-enterprise-linux/#upgrading_ceph_storage_cluster

Please add the relevant information on calamari-lite to the install doc.

2. In a 1.3.2 cluster where customers have installed the older version of Calamari, it does not upgrade to 2.0.
We need clarification on what is done with this existing Calamari.

Thanks,
Tejas

Comment 14 Tejas 2016-06-21 10:08:25 UTC
Hi Aron,

    We need an OS upgrade section covering 14.04 Trusty to 16.04 Xenial in the 2.0 install doc; currently we do not have one.
The Ceph 1.3.2 install doc has a similar OS upgrade section:
https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/installation-guide-for-ubuntu/chapter-4-upgrade-ceph-cluster-on-ubuntu-precise-to-ubuntu-trusty

Thanks,
Tejas
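For reference, the requested Trusty-to-Xenial section would presumably follow the standard Ubuntu release-upgrade path. The outline below is my assumption, not content from the guide; it is a dry run that only prints the steps rather than executing them:

```shell
#!/bin/sh
# Assumed outline of a 14.04 (Trusty) -> 16.04 (Xenial) upgrade on a node.
# Echoed as a dry run; the actual doc section did not exist yet.
for cmd in \
  "sudo apt-get update" \
  "sudo apt-get dist-upgrade -y" \
  "sudo apt-get install -y update-manager-core" \
  "sudo do-release-upgrade"
do
  echo "$cmd"
done
```

`do-release-upgrade` honors `/etc/update-manager/release-upgrades`; whether Ceph daemons should be stopped first is exactly the kind of detail the requested section would need to specify.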

Comment 16 Tejas 2016-06-21 14:30:13 UTC
Hi Aron,

    Yes, I haven't been able to take another look at the Calamari part. Currently we are focused on upgrades. Tamil and Warren have done the upgrades so far, so they should be able to provide the required info.

Thanks,
Tejas

Comment 20 Harish NV Rao 2016-06-21 17:40:54 UTC
By mistake, I had moved the BZ to the VERIFIED state.

Comment 23 Tejas 2016-06-23 09:07:18 UTC
Hi Aron,

Thanks for the upgrade links preview.
I have one more request: we need an ISO-based upgrade from Ceph 1.3.2 to 2.0 on Ubuntu, similar to what we have in section 3.1.3 here:
https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/installation-guide-for-ubuntu/chapter-3-upgrading-your-storage-cluster

Since ISO-based installs are handled by ceph-ansible in 2.0, we have not done a manual ISO-based upgrade.

Thanks,
Tejas

Comment 24 Tejas 2016-06-24 14:34:50 UTC
Hi Aron,

We need the following changes in the Ceph 2.0 Ubuntu install doc:
https://access.qa.redhat.com/documentation/en/red-hat-ceph-storage/2/installation-guide-for-ubuntu/#upgrading_ceph_storage_cluster

1. Before doing the upgrade, the ceph-mon, ceph-osd, and radosgw processes need to be stopped:
sudo stop ceph-osd id={id}
sudo stop ceph-mon id={hostname}
sudo stop radosgw

2. This step needs to be changed:
$ sudo restart ceph-mon id=node1 cluster=ceph

The "restart" needs to be changed to "start"
$ sudo start ceph-mon id=node1 cluster=ceph

3. Also in 5.3, this command does not work:
sudo /etc/init.d/radosgw restart

This should be changed to:
sudo start radosgw id=rgw.{hostname}

Thanks,
Tejas

Comment 25 Ken Dreyer (Red Hat) 2016-06-24 14:51:31 UTC
(In reply to Tejas from comment #24)
> The "restart" needs to be changed to "start"
> $ sudo start ceph-mon id=node1 cluster=ceph

"start" is an Upstart command. I think you mean systemd instead? "systemctl start ceph-mon@node1"

> 3. Also in 5.3, this command does not work:
> sudo /etc/init.d/radosgw restart
> 
> This should be changed to :
> sudo start radosgw id=rgw.{hostname}

Same thing here - the "start" is an Upstart thing, whereas we only support Xenial/systemd for RHCS 2.0. The systemd command is "systemctl start ceph-radosgw@{hostname}"
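Ken's systemd corrections, together with the stop steps from comment 24, can be collected into one sketch. This is a dry run that only prints the commands; the daemon IDs (`node1`, `0`, `gateway1`) are hypothetical placeholders, and the `ceph-osd@` unit name is my assumption, following the same systemd instance-unit convention as the two units Ken names explicitly:

```shell
#!/bin/sh
# Dry-run sketch: RHCS 2.0 on Xenial uses systemd, not Upstart, so the
# Upstart "stop"/"start" commands map to systemctl instance units.
# IDs below are hypothetical placeholders; commands are echoed, not run.
MON_ID=node1       # assumed MON hostname
OSD_ID=0           # assumed OSD id
RGW_ID=gateway1    # assumed RGW hostname

# Stop the daemons before upgrading:
echo "sudo systemctl stop ceph-mon@${MON_ID}"
echo "sudo systemctl stop ceph-osd@${OSD_ID}"
echo "sudo systemctl stop ceph-radosgw@${RGW_ID}"

# ...upgrade the packages...

# Start (not "restart") the daemons after the upgrade:
echo "sudo systemctl start ceph-mon@${MON_ID}"
echo "sudo systemctl start ceph-osd@${OSD_ID}"
echo "sudo systemctl start ceph-radosgw@${RGW_ID}"
```

Of these, only `ceph-mon@node1` and `ceph-radosgw@{hostname}` come directly from Ken's comment; the OSD line is inferred.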

Comment 31 Tejas 2016-08-08 07:24:20 UTC
All the changes that we asked for have been implemented.
Moving this to Verified for now.
We will either reopen this bug or open a new bug if more changes are needed to this doc.

