Description of problem:
https://github.com/ceph/ceph-ansible/issues/3128

Change required to the "collect running osds and ceph-disk unit(s)" task:

systemctl list-units | grep "loaded active" | grep -Eo 'ceph-osd@[0-9]{1,3}.service|ceph-disk@dev-[a-z]{3,4}[0-9]{1}.service'
{1,3} is just what I used to work around the issue with 200 OSDs, but {1,} may be better to avoid capping it artificially again.
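For reference, a minimal sketch of the pipeline with an unbounded quantifier (in ERE, [0-9]+ is equivalent to [0-9]{1,}); the ceph-disk part of the pattern is assumed here to stay as in the original task:

$ systemctl list-units | grep "loaded active" | grep -Eo 'ceph-osd@[0-9]+.service|ceph-disk@dev-[a-z]{3,4}[0-9]{1}.service'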
Present in https://github.com/ceph/ceph-ansible/releases/tag/v3.0.46
Hi,

Looking at the fix [1], we can see that the only change is to a regular expression. I think we can verify this by checking whether the updated RE can parse OSD IDs above 99, which the old RE could not. There was a similar bug, BZ 1612854, which was verified in the same manner.

Regards,
Vasishta Shastry
QE, Ceph
As per the plan mentioned in comment 16, I added OSD service names with IDs 0-10 and 100-110 to a test file and tried to parse all the names: with the old RE only IDs 0-10 were matched, while with the new RE all service names were matched.

$ for i in {0..10}; do echo ceph-osd@$i.service >> test_file; done
$ for i in {100..110}; do echo ceph-osd@$i.service >> test_file; done

$ grep -Eo 'ceph-osd@[0-9]{1,2}.service' test_file
ceph-osd@0.service
.
.
ceph-osd@10.service

$ grep -Eo 'ceph-osd@[0-9]+.service' test_file
ceph-osd@0.service
.
.
ceph-osd@10.service
ceph-osd@100.service
.
.
ceph-osd@110.service

We are planning to move this BZ to VERIFIED state on the morning of 30 Oct (IST). Please let us know if there are any concerns/suggestions.

Regards,
Vasishta Shastry
QE, Ceph
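(As an additional, hypothetical sanity check, assuming a live node hosting OSD IDs above 99 is available, the two patterns could also be compared directly against the running units by counting matches:

$ systemctl list-units | grep -Eo 'ceph-osd@[0-9]{1,2}.service' | wc -l
$ systemctl list-units | grep -Eo 'ceph-osd@[0-9]+.service' | wc -l

The old pattern would only count units with two-digit IDs, while the new one would count all ceph-osd units.)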
lgtm Bara thanks!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3530