Bug 1272393 - [Doc Tracker bug] Install/upgrade procedures for RHCS 1.3.1 for Ubuntu Trusty
Summary: [Doc Tracker bug] Install/upgrade procedures for RHCS 1.3.1 for Ubuntu Trusty
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 1.3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 1.3.1
Assignee: ceph-docs@redhat.com
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-16 10:20 UTC by Anjana Suparna Sriram
Modified: 2015-12-18 09:59 UTC (History)
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-18 09:59:05 UTC
Embargoed:



Description Anjana Suparna Sriram 2015-10-16 10:20:53 UTC
Additional info: This tracker bug aims to capture all the changes made to the install/upgrade procedures for RHCS 1.3.1 for Ubuntu Trusty.

Comment 2 Nilamdyuti 2015-10-16 12:30:25 UTC
Doc link that was provided to QE: https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/tree/devel

QE is currently testing the procedure.

Comment 3 Hemanth Kumar 2015-10-30 13:03:30 UTC
(In reply to Nilamdyuti from comment #2)
> Doc link that was provided to QE:
> https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> guide-ubuntu/tree/devel
> 
> QE is currently testing the procedure.


Nilam,

Here are the changes I observed.

In https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/red-hat-ceph-storage-upgrade.adoc:

----
Upgrading v1.2.3 to v1.3.1 for online repo based installations

1. Monitor Node
---------------------
Step 3. Update the monitor node:
      sudo apt-get update

Running "sudo apt-get update" will automatically update all the Ceph packages, leaving no need to run "ceph-deploy install ...".
So, I would like to see "ceph-deploy install --no-adjust-repos ..." before the apt-get update step.

(The same applies to the OSD and RGW steps as well.)
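For illustration, the suggested order on a monitor node would be something like this (the node name is a placeholder; the exact ceph-deploy arguments come from the upgrade guide):

    # From the admin node: reinstall the Ceph packages on the monitor node
    # without touching its repo files (--no-adjust-repos keeps the existing .list files)
    ceph-deploy install --no-adjust-repos <monitor-node>

    # Then, on the monitor node: refresh the package index for any remaining
    # dependency updates
    sudo apt-get update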

2. Gateway Node
------------------------

a.) In the Installation Guide we have used "sudo service radosgw restart id=rgw.<short-hostname>" to restart the gateway daemon, whereas in the Upgrade Guide we have used "sudo /etc/init.d/radosgw stop".
There is no consistency across the documents.
Let's use the "sudo service radosgw restart id=rgw.<short-hostname>" command wherever a gateway service restart is required.
(Let's make sure the same consistency is maintained for OSD and MON as well.)
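For example, with Upstart on Ubuntu 14.04 the consistent id-based commands would look roughly like this (hostnames and the OSD id are placeholders; only the radosgw form is taken verbatim from the guides):

    # Gateway node
    sudo service radosgw restart id=rgw.<short-hostname>

    # Monitor and OSD nodes follow the same id-based pattern
    sudo restart ceph-mon id=<short-hostname>
    sudo restart ceph-osd id=<osd-id>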

b.) For federated deployments, from the Ceph Object Gateway node, execute:
     sudo yum install radosgw-agent
This must be changed to "sudo apt-get install radosgw-agent", as yum install is used for RHEL.

3. After upgrading MON and OSD to 1.3.1 using online repos, the "Connect Monitor/OSD Hosts to Calamari" step is missing. This step updates the Calamari client packages on the hosts.


Upgrading v1.3.0 to v1.3.1 for online repo based installations
-------------------------------------------------------------------------------------

1. While upgrading to 1.3.1 from 1.3.0, removing the repositories is not required.
   The latest dot-release packages can be obtained by just running "apt-get update" or "ceph-deploy install ..." once the packages are hosted on the site.
Removing repos is required only while upgrading from 1.2.3.
Also, there won't be any calamari-minion.list, ceph.list, calamari-server.list, ceph-deploy.list in /etc/apt/sources.list.d/.
Hence, this step can be removed.
Let's confirm this with Alfredo or ktdreyer, as online repos are available only from 1.3.1 and 1.3.0 has only ISO-based installation.

Comment 4 Ken Dreyer (Red Hat) 2015-10-30 15:00:27 UTC
(In reply to Hemanth Kumar from comment #3)
> Removing repos is required only while upgrading from 1.2.3.
> Also, there won't be any calamari-minion.list, ceph.list,
> calamari-server.list, ceph-deploy.list in /etc/apt/sources.list.d/.
> Hence, this step can be removed.
> Let's confirm this with Alfredo or ktdreyer, as online repos are available
> only from 1.3.1 and 1.3.0 has only ISO-based installation.

Yes, sounds right to me.

Comment 5 Ken Dreyer (Red Hat) 2015-10-30 15:27:58 UTC
Nilam pointed out to me in IRC that we'll actually want to clear out the ".list" files in both cases:

A. upgrading from 1.2.z to 1.3.1's online repos
B. upgrading from 1.3.0 to 1.3.1's online repos

In both cases we need to write new .list files for the online repos.
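Roughly, that means doing something like the following on each node before configuring the online repos (file names per this thread; the exact set depends on the node's role):

    # Remove stale repo definitions left over from the previous install
    cd /etc/apt/sources.list.d/
    sudo rm -f calamari-server.list ceph-deploy.list ceph.list calamari-minion.list

    # ...then write new .list files pointing at the 1.3.1 online repos
    # (e.g. via "ceph-deploy repo" or the repo steps in the guide)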

Comment 6 Nilamdyuti 2015-10-30 17:55:42 UTC
(In reply to Hemanth Kumar from comment #3)
> (In reply to Nilamdyuti from comment #2)
> > Doc link that was provided to QE:
> > https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> > guide-ubuntu/tree/devel
> > 
> > QE is currently testing the procedure.
> 
> 
> Nilam,
> 
> Here are the changes I observed.
> 
> In https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> guide-ubuntu/blob/devel/red-hat-ceph-storage-upgrade.adoc:
> 
> ----
> Upgrading v1.2.3 to v1.3.1 for online repo based installations
> 
> 1. Monitor Node
> ---------------------
> Step 3. Update the monitor node:
>       sudo apt-get update
> 
> Running "sudo apt-get update" will automatically update all the Ceph
> packages, leaving no need to run "ceph-deploy install ...".
> So, I would like to see "ceph-deploy install --no-adjust-repos ..." before
> the apt-get update step.
> 
> (The same applies to the OSD and RGW steps as well.)

Done.

> 
> 2. Gateway Node
> ------------------------
> 
> a.) In the Installation Guide we have used "sudo service radosgw restart
> id=rgw.<short-hostname>" to restart the gateway daemon, whereas in the
> Upgrade Guide we have used "sudo /etc/init.d/radosgw stop".
> There is no consistency across the documents.
> Let's use the "sudo service radosgw restart id=rgw.<short-hostname>" command
> wherever a gateway service restart is required.
> (Let's make sure the same consistency is maintained for OSD and MON as well.)

To restart the gateway daemon, I had used "sudo service radosgw restart". As per your suggestion, and to make it consistent across docs, I changed it to "sudo service radosgw restart id=rgw.<short-hostname>".

Yes, I used "sudo /etc/init.d/radosgw stop" for stopping radosgw in 1.2.3 in step 2, before upgrading to the civetweb-based radosgw, as I wasn't sure whether radosgw in 1.2.3 could be stopped by id as well, like we do now for 1.3 with "id=rgw.<short-hostname>".

Let me know if this is true for 1.2.3 as well, and I will use "id=rgw.<short-hostname>" for stopping radosgw before the upgrade.

For now, it is "sudo /etc/init.d/radosgw stop" in step 2 before the reinstall of rgw for 1.3.

> 
> b.) For federated deployments, from the Ceph Object Gateway node, execute:
>      sudo yum install radosgw-agent
> This must be changed to "sudo apt-get install radosgw-agent", as yum install
> is used for RHEL.

Done.

> 
> 3. After upgrading MON and OSD to 1.3.1 using online repos, the "Connect
> Monitor/OSD Hosts to Calamari" step is missing. This step updates the
> Calamari client packages on the hosts.
> 

Done. However, I have a feeling that after adding the MON or OSD online repos, "sudo apt-get update" and "sudo apt-get upgrade" would upgrade the existing salt-minion package on the MON or OSD node using the newly added mon.list or osd.list repos. I don't know whether the mon and osd repos contain the salt-minion packages or not. Do you have any info on this?

> 
> Upgrading v1.3.0 to v1.3.1 for online repo based installations
> --------------------------------------------------------------
> 
> 1. While upgrading to 1.3.1 from 1.3.0, removing the repositories is not
> required.
>    The latest dot-release packages can be obtained by just running
> "apt-get update" or "ceph-deploy install ..." once the packages are hosted
> on the site.
> Removing repos is required only while upgrading from 1.2.3.
> Also, there won't be any calamari-minion.list, ceph.list,
> calamari-server.list, ceph-deploy.list in /etc/apt/sources.list.d/.
> Hence, this step can be removed.
> Let's confirm this with Alfredo or ktdreyer, as online repos are available
> only from 1.3.1 and 1.3.0 has only ISO-based installation.

I think it is required. 1.3.0 was an ISO-based install. Whether a fresh install or an upgrade from 1.2.3 to 1.3.0, the ice_setup program that gets executed creates one cephdeploy.conf file in the Ceph working directory, one hidden .cephdeploy.conf file in the user's home directory, plus the calamari-server.list, ceph-deploy.list and ceph.list files, and installs upgraded versions of ceph-deploy, calamari-server and calamari-clients.

If these two cephdeploy.conf files and the .list files are not removed, then even if the new online Installer, Calamari and Tools repos are set, ceph-deploy will still use the old cephdeploy.conf files for fetching the packages. So, we have to remove these files.
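For illustration, the admin node cleanup being described would be along these lines (the working directory name is a placeholder; ~/ceph-config is the one used elsewhere in this bug):

    # Remove the ceph-deploy configs written by ice_setup
    rm -f ~/.cephdeploy.conf              # hidden copy in the user's home directory
    rm -f ~/ceph-config/cephdeploy.conf   # copy in the Ceph working directory

    # Remove the repo files ice_setup created
    cd /etc/apt/sources.list.d/
    sudo rm -f calamari-server.list ceph-deploy.list ceph.list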

A "old packages getting installed" situation was faced when you guys had tested the online repo based upgrade procedure for 1.2.3 to 1.3.0.

I discussed this with Ken as well which he has commented on Comment 5.

Also, I have added step to upgrade the gateway node before starting the gateway daemon. I don't know if that will be required or not. Let me know if that needs to be removed.

I have made the changes in the following commit:

https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/commit/58b94519ac9db2b5ffdbaf99909729091a32145f

See: https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/red-hat-ceph-storage-upgrade.adoc

Please let me know if any change is required.

Comment 7 Hemanth Kumar 2015-11-02 13:52:13 UTC
> > 2. Gateway Node
> > ------------------------
> > 
> > a.) In the Installation Guide we have used "sudo service radosgw restart
> > id=rgw.<short-hostname>" to restart the gateway daemon, whereas in the
> > Upgrade Guide we have used "sudo /etc/init.d/radosgw stop".
> > There is no consistency across the documents.
> > Let's use the "sudo service radosgw restart id=rgw.<short-hostname>"
> > command wherever a gateway service restart is required.
> > (Let's make sure the same consistency is maintained for OSD and MON as well.)
> 
> To restart the gateway daemon, I had used "sudo service radosgw restart". As
> per your suggestion, and to make it consistent across docs, I changed it to
> "sudo service radosgw restart id=rgw.<short-hostname>".
> 
> Yes, I used "sudo /etc/init.d/radosgw stop" for stopping radosgw in 1.2.3 in
> step 2, before upgrading to the civetweb-based radosgw, as I wasn't sure
> whether radosgw in 1.2.3 could be stopped by id as well, like we do now for
> 1.3 with "id=rgw.<short-hostname>".
> 
> Let me know if this is true for 1.2.3 as well, and I will use
> "id=rgw.<short-hostname>" for stopping radosgw before the upgrade.
> 
> For now, it is "sudo /etc/init.d/radosgw stop" in step 2 before the
> reinstall of rgw for 1.3.


--- Correct, Nilam. No need to change the command here.
To stop the gateway service in 1.2.3, "sudo /etc/init.d/radosgw stop" is required, and in 1.3 we need to use "sudo service radosgw restart id=rgw.<short-hostname>".

> > 3. After upgrading MON and OSD to 1.3.1 using online repos, the "Connect
> > Monitor/OSD Hosts to Calamari" step is missing. This step updates the
> > Calamari client packages on the hosts.
> > 
> 
> Done. However, I have a feeling that after adding the MON or OSD online
> repos, "sudo apt-get update" and "sudo apt-get upgrade" would upgrade the
> existing salt-minion package on the MON or OSD node using the newly added
> mon.list or osd.list repos. I don't know whether the mon and osd repos
> contain the salt-minion packages or not. Do you have any info on this?

-- Yes, it does get upgraded with the "apt-get update"/"apt-get upgrade" commands; even the Ceph packages get upgraded the same way. If we do that, ceph-deploy or calamari connect will be of no use. So, let's use the commands we have to update the required Ceph packages and their dependencies, and then execute "apt-get update" just in case any dependencies need an update.


> > Upgrading v1.3.0 to v1.3.1 for online repo based installations
> > --------------------------------------------------------------
> > 
> > 1. While upgrading to 1.3.1 from 1.3.0, removing the repositories is not
> > required.
> >    The latest dot-release packages can be obtained by just running
> > "apt-get update" or "ceph-deploy install ..." once the packages are hosted
> > on the site.
> > Removing repos is required only while upgrading from 1.2.3.
> > Also, there won't be any calamari-minion.list, ceph.list,
> > calamari-server.list, ceph-deploy.list in /etc/apt/sources.list.d/.
> > Hence, this step can be removed.
> > Let's confirm this with Alfredo or ktdreyer, as online repos are available
> > only from 1.3.1 and 1.3.0 has only ISO-based installation.
> 
> I think it is required. 1.3.0 was an ISO-based install. Whether a fresh
> install or an upgrade from 1.2.3 to 1.3.0, the ice_setup program that gets
> executed creates one cephdeploy.conf file in the Ceph working directory, one
> hidden .cephdeploy.conf file in the user's home directory, plus the
> calamari-server.list, ceph-deploy.list and ceph.list files, and installs
> upgraded versions of ceph-deploy, calamari-server and calamari-clients.

I'm not talking about cephdeploy.conf and .cephdeploy.conf; removing those files is required.
But the step for removing the repo files when upgrading from 1.3.0 to 1.3.1 needs a few modifications. I have noted the changes below.


> If these two cephdeploy.conf files and the .list files are not removed,
> then even if the new online Installer, Calamari and Tools repos are set,
> ceph-deploy will still use the old cephdeploy.conf files for fetching the
> packages. So, we have to remove these files.
> 
> An "old packages getting installed" situation came up when you tested the
> online repo based upgrade procedure for 1.2.3 to 1.3.0.
> 
> I discussed this with Ken as well, and he has commented on it in comment 5.
> 
> Also, I have added a step to upgrade the gateway node before starting the
> gateway daemon. I don't know whether that will be required or not. Let me
> know if it needs to be removed.
> 
> I have made the changes in the following commit:
> 
> https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> guide-ubuntu/commit/58b94519ac9db2b5ffdbaf99909729091a32145f
> 
> See:
> https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> guide-ubuntu/blob/devel/red-hat-ceph-storage-upgrade.adoc
> 
> Please let me know if any change is required.



1. Upgrading v1.3.0 to v1.3.1 for ISO based installations
---------------------------------------------------------
a. Admin Node
Remove existing Ceph repositories:
cd /etc/apt/sources.list.d/
sudo rm -rf calamari-server.list ceph-deploy.list ceph.list
-- The above step can be removed; when 1.3.0 is installed fresh or upgraded from 1.2.3, these files will not be present.
I have tested this and it works.

b. Monitor Node
"sudo rm -rf calamari-minion.list ceph.list"
The above repo files don't exist in that location; instead change it to
"sudo rm -rf ceph-mon.repo"
Similarly for OSD, change it to "sudo rm -rf ceph-osd.repo"

c. Gateway Node (applies to both ISO and online upgrades)
Since we ask the customer to use either the ceph-mon or the ceph-osd repo to install rgw in the 1.3.0 Install Guide, in the 1.3.1 Upgrade Guide we need to ask them to remove the existing repo depending on which one they used.

2. Upgrading v1.3.0 to v1.3.1 for online repo based installations
-----------------------------------------------------------------

a. Admin Node
Remove existing Ceph repositories:
cd /etc/apt/sources.list.d/
sudo rm -rf calamari-server.list ceph-deploy.list ceph.list
-- We need to remove Calamari.list, Installer.list and Tools.list, as these repo files will be pointing to the admin node, and we will re-create the repos pointing to the online repo site.

Similarly, remove the existing MON (ceph-mon.list) and OSD (ceph-osd.list) repos.
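Putting section 2 together, the per-node cleanup for the online upgrade would look roughly like this (using the .list file names above; remove only the repos the node actually has):

    # Admin node
    cd /etc/apt/sources.list.d/
    sudo rm -f Calamari.list Installer.list Tools.list

    # Monitor node
    sudo rm -f /etc/apt/sources.list.d/ceph-mon.list

    # OSD node
    sudo rm -f /etc/apt/sources.list.d/ceph-osd.list

    # Gateway node: remove whichever repo was used to install rgw
    sudo rm -f /etc/apt/sources.list.d/ceph-mon.list    # or ceph-osd.list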

Comment 8 Nilamdyuti 2015-11-03 12:39:03 UTC
(In reply to Hemanth Kumar from comment #7)
> > > 2. Gateway Node
> > > ------------------------
> > > 
> > > a.) In the Installation Guide we have used "sudo service radosgw restart
> > > id=rgw.<short-hostname>" to restart the gateway daemon, whereas in the
> > > Upgrade Guide we have used "sudo /etc/init.d/radosgw stop".
> > > There is no consistency across the documents.
> > > Let's use the "sudo service radosgw restart id=rgw.<short-hostname>"
> > > command wherever a gateway service restart is required.
> > > (Let's make sure the same consistency is maintained for OSD and MON as
> > > well.)
> > 
> > To restart the gateway daemon, I had used "sudo service radosgw restart".
> > As per your suggestion, and to make it consistent across docs, I changed it
> > to "sudo service radosgw restart id=rgw.<short-hostname>".
> > 
> > Yes, I used "sudo /etc/init.d/radosgw stop" for stopping radosgw in 1.2.3
> > in step 2, before upgrading to the civetweb-based radosgw, as I wasn't sure
> > whether radosgw in 1.2.3 could be stopped by id as well, like we do now for
> > 1.3 with "id=rgw.<short-hostname>".
> > 
> > Let me know if this is true for 1.2.3 as well, and I will use
> > "id=rgw.<short-hostname>" for stopping radosgw before the upgrade.
> > 
> > For now, it is "sudo /etc/init.d/radosgw stop" in step 2 before the
> > reinstall of rgw for 1.3.
> 
> 
> --- Correct, Nilam. No need to change the command here.
> To stop the gateway service in 1.2.3, "sudo /etc/init.d/radosgw stop" is
> required, and in 1.3 we need to use "sudo service radosgw restart
> id=rgw.<short-hostname>".

Okay.

> 
> > > 3. After upgrading MON and OSD to 1.3.1 using online repos, the "Connect
> > > Monitor/OSD Hosts to Calamari" step is missing. This step updates the
> > > Calamari client packages on the hosts.
> > > 
> > 
> > Done. However, I have a feeling that after adding the MON or OSD online
> > repos, "sudo apt-get update" and "sudo apt-get upgrade" would upgrade the
> > existing salt-minion package on the MON or OSD node using the newly added
> > mon.list or osd.list repos. I don't know whether the mon and osd repos
> > contain the salt-minion packages or not. Do you have any info on this?
> 
> -- Yes, it does get upgraded with the "apt-get update"/"apt-get upgrade"
> commands; even the Ceph packages get upgraded the same way. If we do that,
> ceph-deploy or calamari connect will be of no use. So, let's use the
> commands we have to update the required Ceph packages and their
> dependencies, and then execute "apt-get update" just in case any
> dependencies need an update.
> 

Okay.

> 
> > > Upgrading v1.3.0 to v1.3.1 for online repo based installations
> > > --------------------------------------------------------------
> > > 
> > > 1. While upgrading to 1.3.1 from 1.3.0, removing the repositories is not
> > > required.
> > >    The latest dot-release packages can be obtained by just running
> > > "apt-get update" or "ceph-deploy install ..." once the packages are
> > > hosted on the site.
> > > Removing repos is required only while upgrading from 1.2.3.
> > > Also, there won't be any calamari-minion.list, ceph.list,
> > > calamari-server.list, ceph-deploy.list in /etc/apt/sources.list.d/.
> > > Hence, this step can be removed.
> > > Let's confirm this with Alfredo or ktdreyer, as online repos are
> > > available only from 1.3.1 and 1.3.0 has only ISO-based installation.
> > 
> > I think it is required. 1.3.0 was an ISO-based install. Whether a fresh
> > install or an upgrade from 1.2.3 to 1.3.0, the ice_setup program that gets
> > executed creates one cephdeploy.conf file in the Ceph working directory,
> > one hidden .cephdeploy.conf file in the user's home directory, plus the
> > calamari-server.list, ceph-deploy.list and ceph.list files, and installs
> > upgraded versions of ceph-deploy, calamari-server and calamari-clients.
> 
> I'm not talking about cephdeploy.conf and .cephdeploy.conf; removing those
> files is required.
> But the step for removing the repo files when upgrading from 1.3.0 to 1.3.1
> needs a few modifications. I have noted the changes below.
> 
> 
> > If these two cephdeploy.conf files and the .list files are not removed,
> > then even if the new online Installer, Calamari and Tools repos are set,
> > ceph-deploy will still use the old cephdeploy.conf files for fetching the
> > packages. So, we have to remove these files.
> > 
> > An "old packages getting installed" situation came up when you tested the
> > online repo based upgrade procedure for 1.2.3 to 1.3.0.
> > 
> > I discussed this with Ken as well, and he has commented on it in comment 5.
> > 
> > Also, I have added a step to upgrade the gateway node before starting the
> > gateway daemon. I don't know whether that will be required or not. Let me
> > know if it needs to be removed.
> > 
> > I have made the changes in the following commit:
> > 
> > https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> > guide-ubuntu/commit/58b94519ac9db2b5ffdbaf99909729091a32145f
> > 
> > See:
> > https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> > guide-ubuntu/blob/devel/red-hat-ceph-storage-upgrade.adoc
> > 
> > Please let me know if any change is required.
> 
> 
> 
> 1. Upgrading v1.3.0 to v1.3.1 for ISO based installations
> ---------------------------------------------------------
> a. Admin Node
> Remove existing Ceph repositories:
> cd /etc/apt/sources.list.d/
> sudo rm -rf calamari-server.list ceph-deploy.list ceph.list
> -- The above step can be removed; when 1.3.0 is installed fresh or upgraded
> from 1.2.3, these files will not be present.
> I have tested this and it works.

Replaced the step with "sudo rm -rf Calamari.list Installer.list Tools.list". These files are created by ice_setup in 1.3.0, which you confirmed in our chat on IRC.

> 
> b. Monitor Node
> "sudo rm -rf calamari-minion.list ceph.list"
> The above repo files don't exist in that location; instead change it to
> "sudo rm -rf ceph-mon.repo"
> Similarly for OSD, change it to "sudo rm -rf ceph-osd.repo"

Done. Changed to "sudo rm -rf ceph-mon.list" for MON and "sudo rm -rf ceph-osd.list" for OSD.

> 
> c. Gateway Node (applies to both ISO and online upgrades)
> Since we ask the customer to use either the ceph-mon or the ceph-osd repo to
> install rgw in the 1.3.0 Install Guide, in the 1.3.1 Upgrade Guide we need
> to ask them to remove the existing repo depending on which one they used.

Done.

> 
> 2. Upgrading v1.3.0 to v1.3.1 for online repo based installations
> -----------------------------------------------------------------
> 
> a. Admin Node
> Remove existing Ceph repositories:
> cd /etc/apt/sources.list.d/
> sudo rm -rf calamari-server.list ceph-deploy.list ceph.list
> -- We need to remove Calamari.list, Installer.list and Tools.list, as these
> repo files will be pointing to the admin node, and we will re-create the
> repos pointing to the online repo site.
> 
> Similarly, remove the existing MON (ceph-mon.list) and OSD (ceph-osd.list)
> repos.

Done.

Suggested changes have been made in the following commit:

https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/commit/70b0bbafd764d1ca55c09ffd425c787b30be86d1

See: https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/red-hat-ceph-storage-upgrade.adoc

Let me know if the changes look good to you.

Comment 9 Nilamdyuti 2015-11-05 19:30:31 UTC
Added some more changes for 1.3.1 Ubuntu that, being similar to RHEL, were additionally requested in the RHEL tracker bug along with the changes for RHEL.

The changes are in the following commit:

https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/commit/9b73f4e840a35df2505aabe54dcb8a370a8aa989

See: https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/red-hat-ceph-storage-upgrade.adoc

Comment 10 shylesh 2015-11-07 18:28:04 UTC
Hi Nilam,

In the document https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/calamari.adoc, section "Set Online Ceph Repositories": before running the ceph-deploy command, the user has to be created; otherwise ceph-deploy gets a permission denied error. So could you please change the order, i.e., before running the ceph-deploy commands the user creation has to be done and login made passwordless; then we can run the ceph-deploy command.

Let me know your thoughts.
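For reference, the user creation and passwordless setup being asked for is the standard pre-installation procedure, roughly like this (username and node name are placeholders):

    # On each Ceph node: create the deployment user with passwordless sudo
    sudo useradd -d /home/ceph -m ceph
    sudo passwd ceph
    echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    sudo chmod 0440 /etc/sudoers.d/ceph

    # On the admin node: set up passwordless SSH for that user
    ssh-keygen
    ssh-copy-id ceph@<ceph-node>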

Comment 11 shylesh 2015-11-07 19:43:20 UTC
Hi Nilam,

Since we are following the 1.3 document for the object gateway on Ubuntu: running the command "ceph-deploy repo ceph-mon <gateway-node>" fails with

ceph@magna104:~/ceph-config$ ceph-deploy repo ceph-mon  magna110
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.27.3): /usr/bin/ceph-deploy repo ceph-mon magna110
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function repo at 0x7f2766fafc08>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  remove                        : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f27666d5b90>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['magna110']
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.cli][INFO  ]  repo_name                     : ceph-mon
[ceph_deploy.repo][DEBUG ] Detecting platform for host magna110 ...
[magna110][DEBUG ] connection detected need for sudo
[magna110][DEBUG ] connected to host: magna110
[magna110][DEBUG ] detect platform information from remote host
[magna110][DEBUG ] detect machine type
[ceph_deploy.repo][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 69, in newfunc
[ceph_deploy][ERROR ]     return f(*a, **kw)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 169, in _main
[ceph_deploy][ERROR ]     return args.func(args)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/repo.py", line 73, in repo
[ceph_deploy][ERROR ]     install_repo(distro, args, cd_conf, rlogger)
[ceph_deploy][ERROR ]   File "/usr/lib/python2.7/dist-packages/ceph_deploy/repo.py", line 28, in install_repo
[ceph_deploy][ERROR ]     repo_url = repo_url.strip('/')  # Remove trailing slashes
[ceph_deploy][ERROR ] AttributeError: 'NoneType' object has no attribute 'strip'
[ceph_deploy][ERROR ]



So I used the command "ceph-deploy install --repo --release=ceph-osd <ceph-node> [<ceph-node> ...]" instead, and that works. Could you please confirm this with someone while writing the 1.3.1 object gateway doc for Ubuntu?

Comment 12 Nilamdyuti 2015-11-09 12:02:23 UTC
(In reply to shylesh from comment #10)
> Hi Nilam,
> 
> In the document
> https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> guide-ubuntu/blob/devel/calamari.adoc, section "Set Online Ceph
> Repositories": before running the ceph-deploy command, the user has to be
> created; otherwise ceph-deploy gets a permission denied error. So could you
> please change the order, i.e., before running the ceph-deploy commands the
> user creation has to be done and login made passwordless; then we can run
> the ceph-deploy command.
> 
> Let me know your thoughts.

Hi Shylesh,

Yes, you are right. I have moved the "Set Online repos" step to the right place, as the last step of "Pre-installation".

Fixed in the following commit:

https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/commit/ef1657146499bcb415253e8a970c9dd0361d7af0

See: https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/calamari.adoc

Let me know if any more changes are required.

Comment 13 Nilamdyuti 2015-11-09 14:09:34 UTC
(In reply to shylesh from comment #11)
> Hi Nilam,
> 
> Since we are following the 1.3 document for the object gateway on Ubuntu:
> running the command "ceph-deploy repo ceph-mon <gateway-node>" fails with
> 
> [ceph-deploy output and traceback snipped; see comment 11]
> 
> So I used the command "ceph-deploy install --repo --release=ceph-osd
> <ceph-node> [<ceph-node> ...]" instead, and that works. Could you please
> confirm this with someone while writing the 1.3.1 object gateway doc for
> Ubuntu?

Hi Shylesh,

Alfredo is looking into it. I feel there is some issue in the build. "--repo" and "--release" are old arguments for ceph-deploy that were used until ceph-deploy v1.5.23. Since ceph-deploy v1.5.25, the "repo" subcommand was introduced and it worked well. So, ideally, with the latest version v1.5.27.3 in the 1.3.1 build, it should have worked with "repo"; instead it is working with the "--repo" and "--release" arguments from the older builds. This is weird.

For the 1.3.1 Object Gateway for Ubuntu, the only change would be a step to add the Tools repo on the gateway node (for online repo based installs). The current ISO-based method for installing the repository on the gateway node, i.e. "ceph-deploy repo ceph-mon|ceph-osd <gateway-node>", would remain the same.

I know the repo install on the gateway node is working for you with the command "ceph-deploy install --repo --release=ceph-osd <ceph-node> [<ceph-node> ...]", but I feel this should not be the documented method. If the command "ceph-deploy repo ceph-mon <gateway-node>" works in 1.3.0 with ceph-deploy v1.5.25, it is very much expected that it should also work in 1.3.1 with ceph-deploy v1.5.27.3.

Let's see if the issue gets fixed.

Comment 14 Alfredo Deza 2015-11-10 13:41:27 UTC
That is breaking because the GPG URL is not being passed in.

Are we not using GPG URLs for these repos? If so, I would need to change the logic to accommodate that.

Comment 15 Ken Dreyer (Red Hat) 2015-11-10 16:38:59 UTC
If the customer is installing Ceph via an ISO on the Calamari admin node, they shouldn't need to explicitly specify `--gpg-url` to ceph-deploy on the command-line, because ice_setup should be handling that.

If the customer is installing Ceph via the online repositories, --gpg-url should always be used.
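For the online-repository case, that would be something along these lines (the URLs are placeholders; --repo-url and --gpg-url correspond to the repo_url/gpg_url options shown in the log in comment 11):

    # Set up the repo and its signing key on the target node in one step
    ceph-deploy repo ceph-mon <node> --repo-url https://<online-repo-host>/ceph-mon --gpg-url https://<online-repo-host>/release.asc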

Comment 16 Federico Lucifredi 2015-11-10 22:30:41 UTC
Could we document importing the Red Hat key first while setting up on Ubuntu, so that specifying the --gpg-url parameter every time is not required?

Comment 17 Ken Dreyer (Red Hat) 2015-11-10 23:23:39 UTC
(In reply to Federico Lucifredi from comment #16)
> Could we document importing the Red Hat key first while setting up on
> Ubuntu, so that specifying the --gpg-url parameter every time is not
> required?

The idea behind `ceph-deploy repo` is that this single command sets up both the repository and the GPG key at the same time. So it simplifies the user experience; the user doesn't have to SSH to each node and run `apt-key add <key-url>`, because `ceph-deploy repo` should do it for them.

The problem Shylesh found in Comment #11 is that `ceph-deploy repo` crashes in the following scenario:

1) The user does not use a cephdeploy.conf file with repository information,
2) The user does not pass the `--gpg-url` and `--repo-url` flags to ceph-deploy, and
3) The user does not use the (undocumented) CEPH_DEPLOY_REPO_URL and CEPH_DEPLOY_GPG_URL settings.

ceph-deploy will need to be patched so that it prints a message to the user in this scenario, instead of crashing with a stack trace.
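For reference, the per-node manual steps that `ceph-deploy repo` is meant to automate would be roughly (URLs and repo name are placeholders):

    # On each node: fetch and trust the signing key...
    wget -O - https://<repo-host>/release.asc | sudo apt-key add -

    # ...and write the repo definition by hand
    echo "deb https://<repo-host>/ceph-mon trusty main" | sudo tee /etc/apt/sources.list.d/ceph-mon.list
    sudo apt-get update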

Comment 24 Nilamdyuti 2015-11-12 17:39:23 UTC
The step to add the online "Tools" repo on the gateway node was already there in the 1.3.1 Install/Upgrade docs for Ubuntu.

I had also added this step to the RHCS v1.3 Object Gateway Guide for Ubuntu 3 days ago in the following commit:

https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-object-gateway-ubuntu/commit/baf1585eb3f8d9e735f5640981916e13333696da

See: https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-object-gateway-ubuntu/blob/v1.3/installation.adoc#install-repository

Comment 25 Hemanth Kumar 2015-11-17 19:25:48 UTC
One last change, Nilam.

Installing the Ceph CLI is mentioned twice in "https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/quick-ceph-deploy.adoc":

Once in "Install Ceph (Online Repositories)" and again in the section "Make your Calamari Admin Node a Ceph Admin Node".

Remove it from "Install Ceph (Online Repositories)".

Comment 26 Nilamdyuti 2015-11-18 11:36:08 UTC
(In reply to Hemanth Kumar from comment #25)
> One last change Nilam
> 
> Installing the Ceph CLI is mentioned twice in
> "https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-
> guide-ubuntu/blob/devel/quick-ceph-deploy.adoc":
> 
> Once in "Install Ceph (Online Repositories)" and again in the section "Make
> your Calamari Admin Node a Ceph Admin Node".
> 
> Remove it from "Install Ceph (Online Repositories)".

Done. :)

Fixed in the following commit:

https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/commit/73149de2d66db6b376898e84dd9248566ea547ed

See: https://gitlab.cee.redhat.com/ngoswami/red-hat-ceph-storage-installation-guide-ubuntu/blob/devel/quick-ceph-deploy.adoc

Comment 27 Hemanth Kumar 2015-11-18 19:04:06 UTC
The doc looks perfect now!
Thanks, Nilam. :-)

Moving to Verified state.

Comment 28 Anjana Suparna Sriram 2015-12-18 09:59:05 UTC
Fixed for 1.3.1 Release.

