[cee/sd][ceph-volume] `limit` filter does not work when multiple OSD service specs are deployed; cephadm logs the warning "cephadm [INF] Refuse to add /dev/nvme0n1 due to limit policy of <x>"
Description of problem:
- When trying to apply the following service specs, the second one never gets applied, because the `limit` filter makes it try to pick devices that are already used by the first service spec:
~~~
service_type: osd
service_id: osd_fast_big
service_name: osd_noncollocated_with_nvme_big
placement:
  label: osd
spec:
  data_devices:
    size: '18GB:21GB'
    limit: 2
  db_devices:
    size: '14GB:16GB'
---
service_type: osd
service_id: osd_fast_small
service_name: osd_noncollocated_with_nvme_small
placement:
  label: osd
spec:
  data_devices:
    size: '18GB:21GB'
  db_devices:
    size: '8GB:12GB'
    limit: 2
~~~
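For context, the intended selection semantics can be illustrated with a minimal Python sketch (hypothetical helper names and data, not cephadm's actual implementation): the size filter picks candidate devices, devices already claimed by an earlier spec are excluded, and only then is the `limit` cap applied.
~~~
from dataclasses import dataclass
from typing import List, Optional, Set


@dataclass
class Device:
    path: str
    size_gb: float


def matches_size(dev: Device, size_range: str) -> bool:
    # Parse a 'LOW:HIGH' size filter such as '18GB:21GB' (illustration only).
    low, high = (float(part.rstrip("GB")) for part in size_range.split(":"))
    return low <= dev.size_gb <= high


def select_devices(devices: List[Device], size_range: str,
                   limit: Optional[int], claimed: Set[str]) -> List[Device]:
    # Devices already claimed by an earlier spec must be excluded *before*
    # the limit cap is applied; the reported bug is that they were still
    # counted against the limit, so the second spec was refused even though
    # enough free matching devices remained.
    candidates = [d for d in devices
                  if matches_size(d, size_range) and d.path not in claimed]
    return candidates if limit is None else candidates[:limit]


# Four 20 GB devices; the first spec has already claimed sda and sdb,
# so a second spec with limit: 2 should still get sdc and sdd.
inventory = [Device(f"/dev/sd{c}", 20.0) for c in "abcd"]
print([d.path for d in select_devices(inventory, "18GB:21GB", 2,
                                      {"/dev/sda", "/dev/sdb"})])
# -> ['/dev/sdc', '/dev/sdd']
~~~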
Version-Release number of selected component (if applicable):
RHCS 5.x
Steps to Reproduce:
1. Install RHCS 5.
2. Create a specification file like the one above.
3. Apply the specification (see the example commands below).
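A minimal reproduction, assuming the specs above are saved as `osd_specs.yaml` (the file name is illustrative; `ceph orch apply`, `ceph orch device ls`, and `ceph log last` are standard cephadm/Ceph CLI commands):
~~~
# Apply both OSD service specs from one file.
ceph orch apply -i osd_specs.yaml

# Check which devices are available and which have been claimed.
ceph orch device ls

# The refusal is logged in the cephadm channel of the cluster log.
ceph log last cephadm
~~~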
Actual results:
`limit` (the number of disks a filter may match) does not work as expected: the second spec is refused with the warning "Refuse to add /dev/nvme0n1 due to limit policy of <x>".
Expected results:
Using "limit" with valid filters, it should limit the number of disks they match.
Comment 1: RHEL Program Management, 2023-02-02 17:40:46 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2023:0980