Bug 1783908 - Ceph-Ansible picks up RBD volumes when OSD auto-discovery is set to True.
Summary: Ceph-Ansible picks up RBD volumes when OSD auto-discovery is set to True.
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 4.0
Hardware: x86_64
OS: Linux
Target Milestone: rc
Target Release: 4.0
Assignee: Dimitri Savineau
QA Contact: Vasishta
Depends On:
Reported: 2019-12-16 09:09 UTC by Preethi
Modified: 2020-01-31 12:48 UTC
CC: 9 users

Fixed In Version: ceph-ansible-4.0.7-1.el8cp, ceph-ansible-4.0.7-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2020-01-31 12:48:25 UTC
Target Upstream Version:

Attachments (Terms of Use)
all.yml file (29.56 KB, text/plain)
2019-12-16 09:09 UTC, Preethi
ansible (1.89 MB, text/plain)
2019-12-16 09:11 UTC, Preethi
ansible_inventory (1.23 KB, text/plain)
2019-12-16 09:13 UTC, Preethi
osd.yml file (9.01 KB, text/plain)
2019-12-16 09:14 UTC, Preethi
updatelog (7.26 MB, text/plain)
2019-12-16 09:16 UTC, Preethi
yaml_changes (13.49 KB, application/vnd.oasis.opendocument.text)
2019-12-16 09:53 UTC, Preethi

System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 4863 0 None closed ceph-defaults: exclude rbd devices from discovery 2021-02-15 14:17:40 UTC
Github ceph ceph-ansible pull 4868 0 None closed ceph-defaults: exclude rbd devices from discovery (bp #4863) 2021-02-15 14:17:40 UTC
Red Hat Product Errata RHBA-2020:0312 0 None None None 2020-01-31 12:48:36 UTC

Description Preethi 2019-12-16 09:09:13 UTC
Created attachment 1645521 [details]
all.yml file

Description of problem: Ceph-Ansible picks up RBD volumes when OSD auto-discovery is set to True; as a result, site.yml fails.

Version-Release number of selected component (if applicable): 
Ceph version 14.2.4-24.el7cp (ca8a8d14ec42737621306723f03dc0fb958a4747) nautilus (stable)

How reproducible: Have a cluster with 3 nodes that has 9 OSDs, 3 MONs, 2 MGRs, 1 client, and RGW, RBD, and MDS installed.

Attached the inventory log, all.yml, and osd.yml files.
Steps to Reproduce:
1. Set up ceph-metrics and install the dashboard on 3.3.
2. Trigger I/Os on RBD using RBD bench.
3. Upon successful installation, perform an upgrade from 3.3 to 4.0.
4. Run rolling_update.yml followed by site.yml and observe the behaviour.

Actual results: site.yml fails with the following output:

   Traceback (most recent call last):
      File "/sbin/ceph-volume", line 9, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 38, in __init__
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/main.py", line 149, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python2.7/site-packages/ceph_volume/terminal.py", line 194, in dispatch
      File "/usr/lib/python2.7/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 320, in main
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py", line 303, in _get_strategy
        self.strategy = strategy.with_auto_devices(self.args, unused_devices)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 25, in with_auto_devices
        return cls(args, devices)
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 20, in __init__
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/strategies.py", line 30, in validate_compute
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 61, in validate
        self.data_devs, osds_per_device=self.osds_per_device
      File "/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/validators.py", line 15, in minimum_device_size
        raise RuntimeError(msg % (device_size, device.path))
    RuntimeError: Unable to use device 0.00 B /dev/rbd0, LVs would be smaller than 5GB
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
2019-12-13 10:47:48,170 p=497315 u=ubuntu |  NO MORE HOSTS LEFT ****************************************************************************************
2019-12-13 10:47:48,171 p=497315 u=ubuntu |  PLAY RECAP ************************************************************************************************
2019-12-13 10:47:48,171 p=497315 u=ubuntu |  magna079                   : ok=101  changed=2    unreachable=0    failed=0    skipped=173  rescued=0    ignored=0
2019-12-13 10:47:48,171 p=497315 u=ubuntu |  magna120                   : ok=105  changed=2    unreachable=0    failed=0    skipped=169  rescued=0    ignored=0
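The traceback above comes from ceph-volume's minimum-device-size validation: auto-discovery handed it /dev/rbd0, which reports 0 bytes of usable space, below the 5 GB per-OSD floor. A minimal sketch of that kind of check (hypothetical code, not ceph-volume's actual implementation; `minimum_device_size` mirrors only the validator named in the traceback):

```python
MIN_DEVICE_BYTES = 5 * 1024 ** 3  # 5 GB per-OSD floor, as in the error message

def minimum_device_size(devices, osds_per_device=1):
    """Reject any device whose per-OSD share falls below the 5 GB floor."""
    for path, size_bytes in devices:
        per_osd = size_bytes / osds_per_device
        if per_osd < MIN_DEVICE_BYTES:
            raise RuntimeError(
                "Unable to use device %.2f B %s, LVs would be smaller than 5GB"
                % (size_bytes, path))

# The kernel RBD block device shows up with 0 bytes usable, aborting the batch run:
try:
    minimum_device_size([("/dev/rbd0", 0)])
except RuntimeError as e:
    print(e)  # Unable to use device 0.00 B /dev/rbd0, LVs would be smaller than 5GB
```

This is why the failure only appears once I/O has been run against RBD: the mapped /dev/rbd0 device exists on the host and gets swept up by auto-discovery alongside the real disks.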

Additional info: attached relevant logs for issue analysis.
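The linked pull requests ("ceph-defaults: exclude rbd devices from discovery") resolve this by filtering kernel RBD block devices out of the auto-discovered device list. A minimal sketch of that idea (hypothetical helper, not the actual ceph-ansible Jinja2 filter):

```python
import re

# Kernel RBD block devices appear as rbd0, rbd1, ... in the device list.
RBD_PATTERN = re.compile(r"^rbd\d+$")

def usable_devices(discovered):
    """Drop mapped RBD devices from an auto-discovered device name list."""
    return [d for d in discovered if not RBD_PATTERN.match(d)]

print(usable_devices(["sda", "sdb", "rbd0", "sdc", "rbd1"]))
# ['sda', 'sdb', 'sdc']
```

With the mapped RBD devices excluded, ceph-volume batch only receives real disks and the site.yml run no longer trips the 5 GB minimum-size check on /dev/rbd0.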

Comment 1 Preethi 2019-12-16 09:11:19 UTC
Created attachment 1645522 [details]
ansible

Comment 2 Preethi 2019-12-16 09:13:58 UTC
Created attachment 1645524 [details]
ansible_inventory

Comment 3 Preethi 2019-12-16 09:14:49 UTC
Created attachment 1645525 [details]
osd.yml file

Comment 4 Preethi 2019-12-16 09:16:53 UTC
Created attachment 1645526 [details]
updatelog

Comment 5 Preethi 2019-12-16 09:53:42 UTC
Created attachment 1645530 [details]
yaml_changes

Comment 11 errata-xmlrpc 2020-01-31 12:48:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

