.Update to the `ceph-disk` Unit Files
Previously, the transition to containerized Ceph left some `ceph-disk` unit files in place. The files were harmless, but appeared as failed, which could be distressing to operators. With this update, executing the "switch-from-non-containerized-to-containerized-ceph-daemons.yml" playbook disables the `ceph-disk` unit files too.
Created attachment 1436089 (journalctl logs and ceph-osd status)
Description of problem:
The customer asked us to investigate an issue with his RHOSP12 + Ceph environment. He reported the following: all ceph-disk units are in a failed state.
I tried to troubleshoot the issue for one specific Ceph disk, sdv (the picture is the same for the other disks). Please find the extract from the journalctl logs and the ``systemctl status --all`` output in the attachments (to keep the description shorter).
It looks like the following ceph-ansible v3.0.27 play masked all ceph-osd services and broke the ceph-disk systemd units:
- name: stop non-containerized ceph osd(s)
  systemd:
    name: "{{ item }}"
    state: stopped
    enabled: no
    masked: yes
  with_items: "{{ running_osds.stdout_lines | default([]) }}"
  when: running_osds != []
I may be wrong about the cause above, but the customer is still struggling, so please find additional information about the customer's environment in comment #1.
The customer said that his Ceph environment is running fine, but he is worried about the failed systemd units. Please feel free to adjust the severity if this problem is only cosmetic.
I don't think this is a real issue; we simply don't change the ceph-disk unit file. We should probably disable it too. However, this does not affect the ceph-osd units, and the cluster should be fine.
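A minimal sketch of the kind of follow-up task that could disable the leftover units, modeled on the quoted play above. This is an illustration only, not the actual v3.0.35 patch; the task name and the `running_ceph_disk_units` variable are assumptions:

```yaml
# Hypothetical follow-up task (assumption, not the shipped fix):
# disable and mask the leftover ceph-disk systemd units so they
# no longer show up as failed after the switch to containers.
- name: disable non-containerized ceph-disk unit(s)
  systemd:
    name: "{{ item }}"
    enabled: no
    masked: yes
  with_items: "{{ running_ceph_disk_units.stdout_lines | default([]) }}"
  when: running_ceph_disk_units.stdout_lines | default([]) | length > 0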
Comment 8, Guillaume Abrioux, 2018-05-22 11:40:04 UTC
I'm not sure how I can help here; the only thing I can tell you is that the patch is present in v3.0.35 and above.
As for getting the fix out faster, or the release date, please ask Ken.
Thanks
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2018:2177