Bug 1966934 - [Ceph-ansible]: Add-osd.yml and site.yml fail to add new osds on existing nodes
Summary: [Ceph-ansible]: Add-osd.yml and site.yml fail to add new osds on existing nodes
Keywords:
Status: CLOSED DUPLICATE of bug 1896803
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 4.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.2z2
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks: 1896803
 
Reported: 2021-06-02 07:56 UTC by Ameena Suhani S H
Modified: 2021-06-09 14:07 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-09 14:07:41 UTC
Embargoed:



Description Ameena Suhani S H 2021-06-02 07:56:32 UTC
Description of problem:
Add-osd.yml and site.yml fail to add new osds on existing osd nodes

Existing scenario:
ceph-ameena-1622559942991-node4-pool dedicated_devices="['/dev/vdd']" devices="['/dev/vdc']" osd_scenario="non-collocated"

New scenario:
ceph-ameena-1622559942991-node4-pool dedicated_devices="['/dev/vdd','/dev/vde']" devices="['/dev/vdc','/dev/vdb']" osd_scenario="non-collocated"

The playbook fails with the following error:

 TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] ***********
2021-06-01 12:27:25,932 p=90302 u=cephuser n=ansible | fatal: [ceph-ameena-1622559942991-node4-pool]: FAILED! => changed=true 
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - /dev/vdc
  - /dev/vdb
  - --db-devices
  - /dev/vdd
  - /dev/vde
  delta: '0:00:01.923813'
  end: '2021-06-01 12:27:25.895363'
  invocation:
    module_args:
      action: batch
      batch_devices:
      - /dev/vdc
      - /dev/vdb
      block_db_devices:
      - /dev/vdd
      - /dev/vde
      block_db_size: '-1'
      cluster: ceph
      crush_device_class: ''
      data: null
      data_vg: null
      db: null
      db_vg: null
      destroy: true
      dmcrypt: false
      journal: null
      journal_size: '1024'
      journal_vg: null
      objectstore: bluestore
      osd_fsid: null
      osds_per_device: 1
      report: false
      wal: null
      wal_devices: []
      wal_vg: null
  msg: non-zero return code
  rc: 1
  start: '2021-06-01 12:27:23.971550'
  stderr: |-
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 39, in __init__
        self.main(self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 151, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 42, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 307, in main
        self._get_explicit_strategy()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 317, in _get_explicit_strategy
        self._filter_devices()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 370, in _filter_devices
        raise RuntimeError(err.format(len(devs) - len(usable)))
    RuntimeError: 1 devices were filtered in non-interactive mode, bailing out
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
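For context, the traceback shows ceph-volume's `lvm batch` subcommand bailing out in `_filter_devices`: the device that already hosts an OSD from the first run (`/dev/vdc`) is filtered as unavailable, and in non-interactive mode (`--yes`) a non-empty filtered count aborts the run. A minimal sketch of that behaviour (a hypothetical simplification for illustration, not the actual `ceph_volume.devices.lvm.batch` code; the `Device` class and `available` flag are assumptions):

```python
# Hypothetical simplification of ceph-volume's batch device filtering,
# illustrating the "devices were filtered" bail-out seen in the log.

class Device:
    def __init__(self, path, available):
        self.path = path
        self.available = available  # False once the device already hosts an OSD

def filter_devices(devices, non_interactive=True):
    """Keep only still-available devices; in non-interactive mode (--yes),
    bail out if anything was filtered instead of silently continuing."""
    usable = [d for d in devices if d.available]
    filtered = len(devices) - len(usable)
    if filtered and non_interactive:
        raise RuntimeError(
            '{} devices were filtered in non-interactive mode, bailing out'
            .format(filtered))
    return usable

# /dev/vdc already carries an OSD from the first deployment, so passing it
# again alongside the new /dev/vdb reproduces the failure:
devs = [Device('/dev/vdc', available=False), Device('/dev/vdb', available=True)]
try:
    filter_devices(devs)
except RuntimeError as e:
    print(e)  # 1 devices were filtered in non-interactive mode, bailing out
```

This matches the symptom: re-listing the existing devices in the inventory is expected to be idempotent, but the batch filter counts them as filtered and aborts instead of deploying only the new ones.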


Version-Release number of selected component (if applicable):
ansible-2.9.22-1.el8ae.noarch
ceph-ansible-4.0.56-1.el8cp.noarch


How reproducible:
2/2

Steps to Reproduce:
1. Deploy a 4.2z2 cluster with ceph-ameena-1622559942991-node4-pool dedicated_devices="['/dev/vdd']" devices="['/dev/vdc']" osd_scenario="non-collocated"
2. Try to add new devices:
ceph-ameena-1622559942991-node4-pool dedicated_devices="['/dev/vdd','/dev/vde']" devices="['/dev/vdc','/dev/vdb']" osd_scenario="non-collocated"

Actual results:
The playbook fails and the new OSDs are not added.

Expected results:
The new OSDs should be added successfully.

