Description of problem:

The following pre-task in the rolling_update.yml playbook fails with NVMe devices, where the name of the service is like ceph-osd@nvmeXn1.service:

349 - name: get osd unit names - container
350   shell: systemctl list-units | grep -E "loaded * active" | grep -oE "ceph-osd@([0-9]{1,}|[a-z]+).service"

https://github.com/ceph/ceph-ansible/blob/v3.2.8/infrastructure-playbooks/rolling_update.yml#L349

-----------
$ systemctl list-units | grep -E "loaded * active" | grep -oE "ceph-osd@([0-9]{1,}|[a-z]+).service"
$ echo $?
1
-----------

The unit name of an NVMe-backed OSD mixes letters and digits, so it matches neither the [0-9]{1,} nor the [a-z]+ alternative, and grep exits with 1.

There are two ways to fix it:

A) Modify the line to match NVMe devices directly:

350   shell: systemctl list-units | grep -E "loaded * active" | grep -oE "ceph-osd@([0-9]{1,}|[a-z]+|nvme.*).service"

-----------
$ systemctl list-units | grep -E "loaded * active" | grep -oE "ceph-osd@([0-9]{1,}|[a-z]+|nvme.*).service"
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
-----------

B) Edit the regular expression to cover any container named after a device:

350   shell: systemctl list-units | grep -E "loaded * active" | grep -oE "ceph-osd@([0-9]{1,}|[a-z0-9]+).service"

"[a-z0-9]+" matches any combination of the characters "a-z" and "0-9" of length one or more, which includes nvmeXn1 too.

-----------
$ systemctl list-units | grep -E "loaded * active" | grep -oE "ceph-osd@([0-9]{1,}|[a-z0-9]+).service"
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
ceph-osd
-----------

Version-Release number of selected component (if applicable):
ceph-ansible-3.2.8

How reproducible:
always

Steps to Reproduce:
1. deploy containerized Ceph with NVMe devices
2. run the rolling_update.yml playbook
3. it fails for the NVMe-named services

Actual results:
the playbook fails

Expected results:
the playbook does not fail

Additional info:
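Both fixes can be double-checked without touching a cluster by running the regexes against sample unit names. This is a minimal sketch; the OSD id and device names (3, sdb, nvme0n1) are hypothetical and only stand in for real systemctl list-units output:

-----------
# original pattern: misses the NVMe unit, since "nvme0n1" mixes letters
# and digits and fits neither [0-9]{1,} nor [a-z]+
$ printf 'ceph-osd@3.service\nceph-osd@sdb.service\nceph-osd@nvme0n1.service\n' | grep -oE "ceph-osd@([0-9]{1,}|[a-z]+).service"
ceph-osd@3.service
ceph-osd@sdb.service

# fix A: the extra nvme.* alternative also matches the NVMe unit
$ printf 'ceph-osd@3.service\nceph-osd@sdb.service\nceph-osd@nvme0n1.service\n' | grep -oE "ceph-osd@([0-9]{1,}|[a-z]+|nvme.*).service"
ceph-osd@3.service
ceph-osd@sdb.service
ceph-osd@nvme0n1.service

# fix B: [a-z0-9]+ covers any alphanumeric device name, including nvme0n1
$ printf 'ceph-osd@3.service\nceph-osd@sdb.service\nceph-osd@nvme0n1.service\n' | grep -oE "ceph-osd@([0-9]{1,}|[a-z0-9]+).service"
ceph-osd@3.service
ceph-osd@sdb.service
ceph-osd@nvme0n1.service
-----------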
The correct ceph-ansible version is 3.2.8-1.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911