Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1457612

Summary: Hyperconverged: Ceph OSD services are DISABLED by default after overcloud deploy
Product: Red Hat OpenStack Reporter: John Call <jcall>
Component: openstack-tripleo-heat-templates Assignee: John Fulton <johfulto>
Status: CLOSED DUPLICATE QA Contact: Yogev Rabl <yrabl>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: unspecified CC: jomurphy, kejones, mburns, rhel-osp-director-maint
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-07-02 19:05:55 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
templates and deployment command none

Description John Call 2017-06-01 03:24:28 UTC
Description of problem:
Deploying a hyperconverged (nova+ceph_osd) overcloud results in Ceph OSD services that are running but not enabled, so they do not start again after a reboot.  The OSD services should be enabled so they come back up automatically on reboot.


Version-Release number of selected component (if applicable):
RHOSP10


How reproducible:
Steps to Reproduce:
Templates and deploy command-line attached


Actual results:
Manually start OSD services and check their PID
[root@overcloud-compute-0 ~]# ps aux | grep osd
ceph       10949  3.2  1.2 1313180 396004 ?      Ssl  22:19   0:48 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph

Map the PID back to its systemd unit (note "disabled" in the Loaded line)
[root@overcloud-compute-0 ~]# systemctl status 10949
● ceph-osd - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-05-31 22:19:05 EDT; 24min ago
  Process: 10899 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 10949 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd
           └─10949 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
<snip>


Searching for enabled ceph-osd units finds no per-OSD ceph-osd@<id>.service symlink (only the ceph targets are enabled)
[root@overcloud-compute-0 ~]# find /etc -type l | grep ceph
/etc/systemd/system/multi-user.target.wants/ceph-osd.target
/etc/systemd/system/multi-user.target.wants/ceph-mon.target
/etc/systemd/system/multi-user.target.wants/ceph.target
/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target
/etc/systemd/system/ceph.target.wants/ceph-osd.target
/etc/systemd/system/ceph.target.wants/ceph-mon.target
/etc/systemd/system/ceph.target.wants/ceph-radosgw.target
/etc/puppet/modules/ceph



Expected results:
The output of systemctl status ceph-osd@X shows the unit as enabled
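What the expected result amounts to is systemd creating a per-instance symlink for each OSD under the ceph-osd target's .wants directory, which is exactly what the find command above fails to locate. A minimal sketch of that mechanism (simulated in a throwaway directory rather than the real /etc/systemd tree; OSD id 1 is an example id, and the real fix is delivered by the erratum referenced in Comment 2, not by hand-enabling units):

```shell
# Simulate the symlink that `systemctl enable ceph-osd@1` would create:
# an instance name linked to the ceph-osd@.service template unit inside
# a ceph-osd.target.wants directory.
etcdir=$(mktemp -d)
wantsdir="$etcdir/ceph-osd.target.wants"
mkdir -p "$wantsdir"

# This is the per-OSD link that is missing on the affected compute nodes:
ln -s /usr/lib/systemd/system/ceph-osd@.service "$wantsdir/ceph-osd@1.service"

# The reporter's `find /etc -type l | grep ceph` style check now matches:
find "$etcdir" -type l -name 'ceph-osd@*.service'
```

On a healthy node the equivalent link lives under /etc/systemd/system/ceph-osd.target.wants/, and its presence is what makes `systemctl is-enabled ceph-osd@1` report "enabled".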

Comment 1 John Call 2017-06-01 14:22:41 UTC
Created attachment 1284159 [details]
templates and deployment command

Comment 2 John Fulton 2017-07-02 19:05:55 UTC
Hi John,

An erratum [1] released shortly after this bug was filed fixes this issue under the conditions you are deploying per your attachment (with ceph::profile::params::osds unset [2]), so I have closed this bug as a duplicate of BZ 1442265 [3]. If you have already applied that erratum [1] to a system that still shows the reported bug, feel free to re-open this bug and attach the output of the following commands:

rpm -qa | grep ceph | sort | uniq
grep runtime /usr/lib/python2.7/site-packages/ceph_disk/main.py 
md5sum /usr/lib/python2.7/site-packages/ceph_disk/main.py

  John

[1] https://access.redhat.com/errata/RHBA-2017:1497
[2] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/11/html-single/red_hat_ceph_storage_for_the_overcloud/#Mapping_the_Ceph_Storage_Node_Disk_Layout
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1442265

*** This bug has been marked as a duplicate of bug 1442265 ***