Bug 2166713 - [cee/sd][ceph-volume] limit filter is not working when multiple OSD service specs are deployed, producing the warning "cephadm [INF] Refuse to add /dev/nvme0n1 due to limit policy of <x>"
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 5.3z1
Assignee: Guillaume Abrioux
QA Contact: Mohit Bisht
 
Reported: 2023-02-02 17:40 UTC by Geo Jose
Modified: 2023-08-21 07:21 UTC
CC: 10 users

Fixed In Version: ceph-16.2.10-135.el8cp
Last Closed: 2023-02-28 10:06:27 UTC


Links:
GitHub ceph PR 49969 (open): drive_group: fix limit filter in drive_selection.selector (last updated 2023-02-17 02:46:38 UTC)
Red Hat Issue Tracker RHCEPH-6074 (last updated 2023-02-02 17:40:56 UTC)
Red Hat Product Errata RHSA-2023:0980 (last updated 2023-02-28 10:07:21 UTC)

Description Geo Jose 2023-02-02 17:40:36 UTC
Description of problem:
- When trying to apply the following service specs, the second one never gets applied because the `limit` filter makes it try to pick devices that are already used by the first service spec (a sketch of the failure mode follows the specs):
~~~
service_type: osd
service_id: osd_fast_big
service_name: osd_noncollocated_with_nvme_big
placement:
  label: osd
spec:
  data_devices:
    size: '18GB:21GB'
    limit: 2
  db_devices:
    size: '14GB:16GB'
---
service_type: osd
service_id: osd_fast_small
service_name: osd_noncollocated_with_nvme_small
placement:
  label: osd
spec:
  data_devices:
    size: '18GB:21GB'
  db_devices:
    size: '8GB:12GB'
    limit: 2
~~~
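
The failure mode can be seen in a minimal, self-contained sketch. This is a hypothetical illustration, not the actual cephadm drive_selection code: `Device`, `used_by_other_spec`, `select_buggy`, and `select_fixed` are invented names that model the reported behavior, where devices already consumed by an earlier spec still count toward the next spec's limit.
~~~
# Hypothetical illustration of the failure mode; NOT the actual
# cephadm drive_selection code. All names here are invented.
from dataclasses import dataclass

@dataclass
class Device:
    path: str
    size_gb: float
    used_by_other_spec: bool = False  # already consumed by an earlier OSD spec

def matches(dev: Device, lo: float, hi: float) -> bool:
    """Size filter, e.g. '18GB:21GB'."""
    return lo <= dev.size_gb <= hi

def select_buggy(devices, lo, hi, limit):
    """Buggy selection: devices consumed by an earlier spec still count
    toward the limit, so the limit is 'reached' before any free device
    is picked and the remaining candidates are refused."""
    selected, seen = [], 0
    for dev in devices:
        if not matches(dev, lo, hi):
            continue
        seen += 1                 # counts already-used devices too (the bug)
        if seen > limit:
            break                 # "Refuse to add ... due to limit policy"
        if not dev.used_by_other_spec:
            selected.append(dev)
    return selected

def select_fixed(devices, lo, hi, limit):
    """Fixed selection: only devices actually picked by this spec
    count toward the limit."""
    selected = []
    for dev in devices:
        if not matches(dev, lo, hi) or dev.used_by_other_spec:
            continue
        selected.append(dev)
        if len(selected) >= limit:
            break
    return selected

disks = [
    Device("/dev/sdb", 20, used_by_other_spec=True),
    Device("/dev/sdc", 20, used_by_other_spec=True),
    Device("/dev/nvme0n1", 19),
    Device("/dev/nvme1n1", 19),
]
print([d.path for d in select_buggy(disks, 18, 21, limit=2)])  # [] -> spec never applied
print([d.path for d in select_fixed(disks, 18, 21, limit=2)])  # ['/dev/nvme0n1', '/dev/nvme1n1']
~~~
With the buggy counting, the two already-used disks exhaust the limit of 2 before any free device is reached, which matches the "Refuse to add /dev/nvme0n1" warning in the title.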


Version-Release number of selected component (if applicable):
 RHCS 5.x

Steps to Reproduce:
1. Install RHCS 5.
2. Create a specification file like the one above.
3. Apply the specification (see the commands below).
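
For reference, a multi-spec file like the one above is applied with `ceph orch apply`; the filename osd_specs.yaml is a hypothetical placeholder:
~~~
# Save both specs (separated by '---') to a file, then apply them:
ceph orch apply -i osd_specs.yaml
~~~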

Actual results:
The `limit` filter (the maximum number of matching disks a spec may consume) is not honored: the second spec is never applied, and cephadm logs "Refuse to add /dev/nvme0n1 due to limit policy of <x>".

Expected results:
Using "limit" with valid filters, it should limit the number of disks they match.

Comment 1 RHEL Program Management 2023-02-02 17:40:46 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Geo Jose 2023-02-02 17:45:01 UTC
The limit filter is not working when multiple OSD service specs are deployed.
Upstream tracker: https://tracker.ceph.com/issues/58626
Pull request: https://github.com/ceph/ceph/pull/49969
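
Once a build containing the fix is installed (Fixed In Version: ceph-16.2.10-135.el8cp, per the header above), the running version can be confirmed with:
~~~
# Report the ceph version of every daemon in the cluster:
ceph versions
~~~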

Comment 16 errata-xmlrpc 2023-02-28 10:06:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980

