Bug 1222153 - ceph-deploy rgw create command is broken
Summary: ceph-deploy rgw create command is broken
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Installer
Version: 1.3.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 1.3.0
Assignee: Ken Dreyer (Red Hat)
QA Contact: Tamil
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-15 22:08 UTC by Tamil
Modified: 2017-12-13 00:24 UTC
CC List: 9 users

Fixed In Version: ceph-deploy-1.5.24-1.el7cp ceph-0.94.1-11.el7cp
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-24 15:53:06 UTC
Embargoed:


Links
Ceph Project Bug Tracker 11686: None (last updated: never)
Red Hat Product Errata RHBA-2015:1183: SHIPPED_LIVE, "Ceph bug fix and enhancement update" (last updated: 2015-06-24 19:49:46 UTC)

Description Tamil 2015-05-15 22:08:49 UTC
Description of problem:
The ceph-deploy rgw create command is broken in 1.3.0, which means RGW cannot be installed or configured on stockwell nodes until this is fixed.

Version-Release number of selected component (if applicable):
ceph-deploy 1.5.23

How reproducible:
always

Steps to Reproduce:
1. Set up a 1.3.0 (Hammer-based) cluster with ceph-deploy 1.5.23.
2. Run ceph-deploy --overwrite-conf rgw create <host> (with or without an explicit instance name).
3. The command fails with the OSError shown below.

Actual results:
[ubuntu@magna101 ceph-deploy]$ ceph-deploy --overwrite-conf rgw create magna101
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy --overwrite-conf rgw create magna101
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts magna101:rgw.magna101
[magna101][DEBUG ] connection detected need for sudo
[magna101][DEBUG ] connected to host: magna101
[magna101][DEBUG ] detect platform information from remote host
[magna101][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.1 Maipo
[ceph_deploy.rgw][DEBUG ] remote host will use sysvinit
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to magna101
[magna101][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[magna101][DEBUG ] create path if it doesn't exist
[ceph_deploy.rgw][ERROR ] OSError: [Errno 2] No such file or directory: '/var/lib/ceph/radosgw/ceph-rgw.magna101'
[ceph_deploy][ERROR ] GenericError: Failed to create 1 RGWs


[ubuntu@magna101 ceph-deploy]$ ceph-deploy --overwrite-conf rgw create magna101:radosgw
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy --overwrite-conf rgw create magna101:radosgw
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts magna101:radosgw
[magna101][DEBUG ] connection detected need for sudo
[magna101][DEBUG ] connected to host: magna101
[magna101][DEBUG ] detect platform information from remote host
[magna101][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.1 Maipo
[ceph_deploy.rgw][DEBUG ] remote host will use sysvinit
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to magna101
[magna101][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[magna101][DEBUG ] create path if it doesn't exist
[ceph_deploy.rgw][ERROR ] OSError: [Errno 2] No such file or directory: '/var/lib/ceph/radosgw/ceph-radosgw'

Expected results:

Installation and configuration of RGW should complete successfully, with a radosgw daemon running.
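
For reference, a quick sanity check once it succeeds might look like the following (a sketch; it assumes the default civetweb frontend on port 7480 that ceph-deploy sets up):

$ ps aux | grep [r]adosgw        # a radosgw process should be running
$ curl http://localhost:7480/    # civetweb should answer with an anonymous S3 ListAllMyBuckets XML response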

Additional info:

Also, it would be nice to fix "ceph-deploy rgw --help" too. It seems to be vague about the subcommand right now.
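
As a stopgap until a fixed build is available, pre-creating the data directory that the OSError above complains about may let the command proceed (an untested sketch; it assumes the default daemon name rgw.<hostname> shown in the log):

$ ssh magna101 sudo mkdir -p /var/lib/ceph/radosgw/ceph-rgw.magna101   # path taken from the OSError above
$ ceph-deploy --overwrite-conf rgw create magna101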

Comment 3 Travis Rhoden 2015-05-18 21:17:19 UTC
It's worth noting that if you want 'ceph-deploy rgw create' to work on 1.3.0 (based on Hammer), you will need this change backported: https://github.com/ceph/ceph/pull/4606

Without it, the "ceph-radosgw" service doesn't look for civetweb daemons created by ceph-deploy in /var/lib/ceph/radosgw.
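
For context, the effect of that backport is roughly that the sysvinit script also walks the per-daemon directories ceph-deploy creates under /var/lib/ceph/radosgw and starts an instance for each one. An illustrative sketch of that kind of loop (not the actual patch):

for dir in /var/lib/ceph/radosgw/ceph-*; do
    [ -d "$dir" ] || continue
    name="client.${dir##*/ceph-}"           # e.g. client.rgw.magna101
    /usr/bin/radosgw -n "$name" --cluster ceph
done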

Comment 4 Ken Dreyer (Red Hat) 2015-05-18 21:21:12 UTC
(In reply to Travis Rhoden from comment #3)
> It's worth noting that if you want 'ceph-deploy rgw create' to work on 1.3.0
> (based on Hammer), you will need this change backported:
> https://github.com/ceph/ceph/pull/4606

Oh, right, thanks for the reminder! I'll be sure to get that backported to dist-git this week.

Comment 5 Ken Dreyer (Red Hat) 2015-05-19 19:21:00 UTC
ceph-deploy-1.5.24-1.el7cp is built in Brew and will be attached to the 1.3.0 errata today. This contains the fix on ceph-deploy's side.

We still need to pull in that patch set that Travis mentioned above in Comment 3 in order to fix this, though.

Comment 6 Ken Dreyer (Red Hat) 2015-05-20 01:49:58 UTC
(In reply to Travis Rhoden from comment #3)
> It's worth noting that if you want 'ceph-deploy rgw create' to work on 1.3.0
> (based on Hammer), you will need this change backported:
> https://github.com/ceph/ceph/pull/4606

Filed http://tracker.ceph.com/issues/11686 so we ensure that change gets backported to hammer upstream.

Comment 9 shilpa 2015-06-02 07:30:22 UTC
Tested on ceph-deploy-1.5.25-1.el7cp.noarch. I don't see any errors, but systemctl status shows the service as active (exited). Not sure why.

# ceph-deploy --overwrite-conf rgw create hp-ms-01-c33
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /usr/bin/ceph-deploy --overwrite-conf rgw create hp-ms-01-c33
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts hp-ms-01-c33:rgw.hp-ms-01-c33
[hp-ms-01-c33][DEBUG ] connected to host: hp-ms-01-c33 
[hp-ms-01-c33][DEBUG ] detect platform information from remote host
[hp-ms-01-c33][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: Red Hat Enterprise Linux Server 7.1 Maipo
[ceph_deploy.rgw][DEBUG ] remote host will use sysvinit
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to hp-ms-01-c33
[hp-ms-01-c33][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[hp-ms-01-c33][DEBUG ] create path recursively if it doesn't exist
[hp-ms-01-c33][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.hp-ms-01-c33 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.hp-ms-01-c33/keyring
[hp-ms-01-c33][INFO  ] Running command: service ceph-radosgw start
[hp-ms-01-c33][DEBUG ] Starting ceph-radosgw (via systemctl):  [  OK  ]
[hp-ms-01-c33][INFO  ] Running command: systemctl enable ceph-radosgw
[hp-ms-01-c33][WARNIN] ceph-radosgw.service is not a native service, redirecting to /sbin/chkconfig.
[hp-ms-01-c33][WARNIN] Executing /sbin/chkconfig ceph-radosgw on
[hp-ms-01-c33][WARNIN] The unit files have no [Install] section. They are not meant to be enabled
[hp-ms-01-c33][WARNIN] using systemctl.
[hp-ms-01-c33][WARNIN] Possible reasons for having this kind of units are:
[hp-ms-01-c33][WARNIN] 1) A unit may be statically enabled by being symlinked from another unit's
[hp-ms-01-c33][WARNIN]    .wants/ or .requires/ directory.
[hp-ms-01-c33][WARNIN] 2) A unit's purpose may be to act as a helper for some other unit which has
[hp-ms-01-c33][WARNIN]    a requirement dependency on it.
[hp-ms-01-c33][WARNIN] 3) A unit may be started when needed via activation (socket, path, timer,
[hp-ms-01-c33][WARNIN]    D-Bus, udev, scripted systemctl call, ...).

[root@hp-ms-01-c33 ceph-config]# service ceph-radosgw status
/bin/radosgw is not running.

#  systemctl status ceph-radosgw
ceph-radosgw.service - LSB: radosgw RESTful rados gateway
   Loaded: loaded (/etc/rc.d/init.d/ceph-radosgw)
   Active: active (exited) since Tue 2015-06-02 03:20:05 EDT; 1min 21s ago

Jun 02 03:20:05 hp-ms-01-c33.moonshot1.lab.eng.rdu.redhat.com ceph-radosgw[11927]: Starting client.rgw.hp-ms-01-c33...
Jun 02 03:20:05 hp-ms-01-c33.moonshot1.lab.eng.rdu.redhat.com ceph-radosgw[11927]: Running as unit run-11951.service.
Jun 02 03:20:05 hp-ms-01-c33.moonshot1.lab.eng.rdu.redhat.com systemd[1]: Started LSB: radosgw RESTful rados gateway.
Jun 02 03:21:13 hp-ms-01-c33.moonshot1.lab.eng.rdu.redhat.com systemd[1]: Started LSB: radosgw RESTful rados gateway.
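
One thing worth checking: the log above shows radosgw being launched as the transient unit run-11951.service by the LSB wrapper, so the wrapper showing active (exited) does not necessarily mean the daemon is down. A couple of quick checks (a sketch, reusing the unit name from the log above):

pgrep -a radosgw                        # is a radosgw process actually running?
systemctl status run-11951.service -l   # the transient unit the init script reported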

Comment 10 shilpa 2015-06-02 15:43:39 UTC
Verified in ceph-deploy-1.5.25-1.el7cp. The rgw create command goes through without any errors.

Comment 11 Ken Dreyer (Red Hat) 2015-06-03 19:11:51 UTC
(In reply to shilpa from comment #9)
> Tested on ceph-deploy-1.5.25-1.el7cp.noarch. I don't see any errors. But
> systemctl status shows active(exited). Not sure why.

This is not a good sign, and I want to be sure that you are able to successfully bring up an RGW node.

The RGW init script uses sudo to run the radosgw service as root.

You may receive an error while trying to start radosgw if requiretty is set by default on your RGW node. To disable it, run "sudo visudo", locate the Defaults requiretty setting, and override it for the relevant users. For example, if cephdeploy is the user name created in the "Create a Ceph Deploy User" step, set the following:

Defaults:root !requiretty
Defaults:cephdeploy !requiretty
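
Once that change is saved, restarting the gateway service and re-checking its status should show whether requiretty was the culprit, e.g.:

sudo service ceph-radosgw restart
systemctl status ceph-radosgw -l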

Can you please test this on your RGW node?

If the service still fails, please provide the output of "systemctl status ceph-radosgw -l"

Comment 13 errata-xmlrpc 2015-06-24 15:53:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2015:1183

