Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking.

Bug 1485887

Summary: [UPDATES][OpenStack] Systemd ceph* units conflict with ceph* containers
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Yurii Prokulevych <yprokule>
Component: Ceph-Ansible
Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA
QA Contact: Yogev Rabl <yrabl>
Severity: medium
Priority: low
Version: 3.0
CC: adeza, anharris, aschoen, ceph-eng-bugs, ceph-qe-bugs, gfidente, gmeno, hnallurv, icolle, kdreyer, mcornea, nthomas, sankarshan, seb, yprokule, yrabl
Target Milestone: rc
Target Release: 3.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-ansible-3.0.0-0.1.rc7.el7cp Ubuntu: ceph-ansible_3.0.0~rc7-2redhat1
Last Closed: 2017-12-05 23:41:09 UTC
Type: Bug

Description Yurii Prokulevych 2017-08-28 11:14:59 UTC
Description of problem:
-----------------------
After a minor update of the CephStorage nodes, the ceph containers fail to start; the non-containerized ceph daemons are running instead:
[root@ceph-0 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@ceph-0 ~]# systemctl status ceph*
● ceph-osd - Ceph OSD
   Loaded: loaded (/etc/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-08-25 08:20:28 UTC; 3 days ago
 Main PID: 22630 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd
           └─22630 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

● ceph-osd - Ceph OSD
   Loaded: loaded (/etc/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Mon 2017-08-28 11:07:05 UTC; 7s ago
  Process: 779147 ExecStop=/usr/bin/docker stop ceph-osd-ceph-0-dev%i (code=exited, status=1/FAILURE)
  Process: 778918 ExecStart=/usr/share/ceph-osd-run.sh %i (code=exited, status=1/FAILURE)
  Process: 778910 ExecStartPre=/usr/bin/docker rm -f ceph-osd-ceph-0-dev%i (code=exited, status=1/FAILURE)
  Process: 778904 ExecStartPre=/usr/bin/docker stop ceph-osd-ceph-0-dev%i (code=exited, status=1/FAILURE)
 Main PID: 778918 (code=exited, status=1/FAILURE)

Aug 28 11:07:05 ceph-0 systemd[1]: ceph-osd failed.

[root@ceph-0 ~]# docker ps -a
CONTAINER ID        IMAGE                                                   COMMAND             CREATED             STATUS                  PORTS               NAMES
6ad1e9f2279a        docker.io/ceph/daemon:tag-build-master-jewel-centos-7   "/entrypoint.sh"    3 days ago          Exited (0) 3 days ago                       ceph-osd-prepare-ceph-0-devdevvdb

[root@ceph-0 ~]# ceph status 
    cluster 755c263a-88d9-11e7-af00-5254004ae3d0
     health HEALTH_OK
     monmap e2: 3 mons at {controller-0=172.17.3.17:6789/0,controller-1=172.17.3.13:6789/0,controller-2=172.17.3.18:6789/0}
            election epoch 8, quorum 0,1,2 controller-1,controller-0,controller-2
     osdmap e24: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v6042: 224 pgs, 6 pools, 29001 kB data, 404 objects
            200 MB used, 104 GB / 104 GB avail
                 224 active+clean
  client io 17 B/s rd, 0 op/s rd, 0 op/s wr

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
ceph-selinux-10.2.7-28.el7cp.x86_64
python-cephfs-10.2.7-28.el7cp.x86_64
libcephfs1-10.2.7-28.el7cp.x86_64
puppet-ceph-2.3.1-0.20170805094345.868e6d6.el7ost.noarch
ceph-radosgw-10.2.7-28.el7cp.x86_64
ceph-common-10.2.7-28.el7cp.x86_64
ceph-mon-10.2.7-28.el7cp.x86_64
ceph-mds-10.2.7-28.el7cp.x86_64
ceph-osd-10.2.7-28.el7cp.x86_64
ceph-base-10.2.7-28.el7cp.x86_64

puppet-ceph-2.3.1-0.20170805094345.868e6d6.el7ost.noarch
ceph-ansible-3.0.0-0.1.rc3.el7cp.noarch

openstack-tripleo-heat-templates-7.0.0-0.20170805163048.el7ost.noarch

Steps to Reproduce:
-------------------
1. Follow https://etherpad.openstack.org/p/pike-update to perform minor update
2. Check ceph nodes

Comment 3 seb 2017-08-30 08:13:48 UTC
This is due to the ceph-osd package being installed on the system.
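
For context on this diagnosis: both the ceph-osd RPM and ceph-ansible's containerized deployment provide a ceph-osd@.service, and systemd resolves the conflict by unit-file location precedence (/etc/systemd/system overrides /usr/lib/systemd/system). A minimal sketch of that precedence check, assuming the conventional RHEL unit-file paths; the helper name is hypothetical and this is not the shipped fix:

```shell
#!/bin/sh
# unit_origin: classify a ceph unit file by where it lives.
# Assumption: ceph-ansible writes its container units under
# /etc/systemd/system, while the ceph-osd RPM installs its unit
# under /usr/lib/systemd/system. systemd prefers /etc, so the
# path of the loaded unit tells you which definition wins.
unit_origin() {
    case "$1" in
        /etc/systemd/system/*)     echo "container" ;;
        /usr/lib/systemd/system/*) echo "package"   ;;
        *)                         echo "unknown"   ;;
    esac
}

unit_origin /etc/systemd/system/ceph-osd@.service
unit_origin /usr/lib/systemd/system/ceph-osd@.service
```

On an affected node, `systemctl cat ceph-osd@0` shows which unit file systemd actually loaded; if the ceph-osd package is installed, its native daemon can start and hold the OSD devices, which is consistent with the container exiting with status 1 in the log above.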

Comment 8 Harish NV Rao 2017-09-13 16:47:18 UTC
Yurii, will you be testing the fix for this BZ? If yes, please provide qa_ack.

Comment 15 Giulio Fidente 2017-11-20 16:56:26 UTC
Yogev, I think this can be moved to VERIFIED; we should check that the systemd services for ceph-osd and ceph-mon are disabled when deploying Ceph in containers.
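
The verification step suggested here can be sketched as a small check on `systemctl is-enabled` output, assuming the unit names shown in the log above; the helper name is hypothetical:

```shell
#!/bin/sh
# check_disabled: interpret `systemctl is-enabled` output for a native
# ceph unit on a containerized node. Both "disabled" and "masked" mean
# the package-provided unit cannot start at boot and so cannot conflict
# with the container unit; anything else is a potential conflict.
check_disabled() {
    case "$1" in
        disabled|masked) echo "ok" ;;
        *)               echo "conflict" ;;
    esac
}

# On a real node one would feed in live state, e.g. (assumption):
#   check_disabled "$(systemctl is-enabled ceph-osd@0 2>/dev/null)"
check_disabled disabled
check_disabled enabled
```

Running this for each ceph-osd@<id> and ceph-mon@<host> unit on a containerized node should print "ok" everywhere once the fix is in place.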

Comment 18 errata-xmlrpc 2017-12-05 23:41:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387

Comment 19 Red Hat Bugzilla 2023-09-15 00:03:38 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.