Bug 1845668 - Purge fails when a device is clean (Unable to proceed with non-existing device)
Summary: Purge fails when a device is clean (Unable to proceed with non-existing device)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: z1
Target Release: 4.1
Assignee: Guillaume Abrioux
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-09 19:30 UTC by MG3
Modified: 2020-07-20 14:21 UTC
CC: 9 users

Fixed In Version: ceph-ansible-4.0.25-1.el8cp, ceph-ansible-4.0.25-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-20 14:21:41 UTC
Embargoed:




Links
Github ceph/ceph-ansible pull 5434 (closed): ceph_volume: make zap function idempotent (last updated 2021-01-15 18:31:09 UTC)
Red Hat Product Errata RHSA-2020:3003 (last updated 2020-07-20 14:21:59 UTC)

Description MG3 2020-06-09 19:30:15 UTC
Description of problem: When running purge-container-cluster.yml, the following error occurs even though the drives are already empty.


Version-Release number of selected component (if applicable): 4.0.14


How reproducible: every time


Steps to Reproduce:
1. Run purge-container-cluster.yml against one or more nodes whose OSD devices are already clean (see the invocation sketch below).
2. The zap task fails with "Unable to proceed with non-existing device".
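
For reference, a minimal invocation sketch; the inventory file name and the -vv flag are assumptions, not taken from this report:

  # From the ceph-ansible checkout; 'hosts' is a hypothetical inventory file
  ansible-playbook -vv -i hosts infrastructure-playbooks/purge-container-cluster.yml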

Actual results:
TASK [ceph-facts : set ntp service name for Debian family] **********************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:38 -0400 (0:00:00.209)       0:00:44.404 **********
skipping: [ceph1.example.com]
skipping: [ceph2.example.com]
skipping: [ceph3.example.com]

TASK [ceph-facts : set ntp service name for Red Hat family] *********************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:38 -0400 (0:00:00.193)       0:00:44.598 **********
ok: [ceph1.example.com]
ok: [ceph2.example.com]
ok: [ceph3.example.com]

TASK [ceph-facts : set chronyd daemon name for RedHat based OSs] ****************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:38 -0400 (0:00:00.210)       0:00:44.808 **********
ok: [ceph1.example.com]
ok: [ceph2.example.com]
ok: [ceph3.example.com]

TASK [ceph-facts : set chronyd daemon name for Ubuntu based OSs] ****************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:39 -0400 (0:00:00.198)       0:00:45.006 **********
skipping: [ceph1.example.com]
skipping: [ceph2.example.com]
skipping: [ceph3.example.com]

TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ********************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:39 -0400 (0:00:00.194)       0:00:45.201 **********
ok: [ceph1.example.com]
ok: [ceph2.example.com]
ok: [ceph3.example.com]

TASK [get all the running osds] *************************************************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:39 -0400 (0:00:00.196)       0:00:45.397 **********
fatal: [ceph1.example.com]: FAILED! => changed=true
  cmd: |-
    systemctl list-units | grep 'loaded[[:space:]]\+active' | grep -oE "ceph-osd@([0-9]{1,2}|[a-z]+).service"
  delta: '0:00:00.042512'
  end: '2020-06-09 14:36:40.726320'
  msg: non-zero return code
  rc: 1
  start: '2020-06-09 14:36:40.683808'
  stderr: ''
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
...ignoring
fatal: [ceph2.example.com]: FAILED! => changed=true
  cmd: |-
    systemctl list-units | grep 'loaded[[:space:]]\+active' | grep -oE "ceph-osd@([0-9]{1,2}|[a-z]+).service"
  delta: '0:00:00.045600'
  end: '2020-06-09 14:36:40.796835'
  msg: non-zero return code
  rc: 1
  start: '2020-06-09 14:36:40.751235'
  stderr: ''
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
...ignoring
fatal: [ceph3.example.com]: FAILED! => changed=true
  cmd: |-
    systemctl list-units | grep 'loaded[[:space:]]\+active' | grep -oE "ceph-osd@([0-9]{1,2}|[a-z]+).service"
  delta: '0:00:00.045459'
  end: '2020-06-09 14:36:40.869787'
  msg: non-zero return code
  rc: 1
  start: '2020-06-09 14:36:40.824328'
  stderr: ''
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
...ignoring

TASK [disable ceph osd service] *************************************************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:40 -0400 (0:00:00.665)       0:00:46.063 **********

TASK [remove osd mountpoint tree] ***********************************************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:40 -0400 (0:00:00.177)       0:00:46.241 **********
ok: [ceph1.example.com]
ok: [ceph2.example.com]
fatal: [ceph3.example.com]: FAILED! => changed=false
  msg: 'rmtree failed: [Errno 16] Device or resource busy: ''ceph-50'''
...ignoring

TASK [default lvm_volumes if not defined] ***************************************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:40 -0400 (0:00:00.615)       0:00:46.856 **********
skipping: [ceph1.example.com]
skipping: [ceph2.example.com]
skipping: [ceph3.example.com]

TASK [zap and destroy osds created by ceph-volume with lvm_volumes] *************************************************************************************************************************************************************************
Tuesday 09 June 2020  14:36:41 -0400 (0:00:00.180)       0:00:47.037 **********
changed: [ceph3.example.com] => (item={'data': '/dev/sdc', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdd', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sde', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdf', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdg', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdh', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdi', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdj', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdk', 'crush_device_class': 'ssd'})
changed: [ceph3.example.com] => (item={'data': '/dev/sdl', 'crush_device_class': 'ssd'})
[WARNING]: The value False (type bool) in a string field was converted to 'False' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.

failed: [ceph3.example.com] (item={'data': 'ceph-hdd-lv-sdm', 'data_vg': 'ceph-hdd-vg-sdm', 'db': 'ceph-journal-sdm', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdm/ceph-hdd-lv-sdm
  - ceph-ssd-vg-sda/ceph-journal-sdm
  delta: '0:00:01.982607'
  end: '2020-06-09 14:37:06.541016'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdm
    data_vg: ceph-hdd-vg-sdm
    db: ceph-journal-sdm
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:37:04.558409'
  stderr: |2-
     stderr: lsblk: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
changed: [ceph2.example.com] => (item={'data': '/dev/sdc', 'crush_device_class': 'ssd'})
failed: [ceph3.example.com] (item={'data': 'ceph-hdd-lv-sdn', 'data_vg': 'ceph-hdd-vg-sdn', 'db': 'ceph-journal-sdn', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdn/ceph-hdd-lv-sdn
  - ceph-ssd-vg-sda/ceph-journal-sdn
  delta: '0:00:01.655772'
  end: '2020-06-09 14:37:08.685537'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdn
    data_vg: ceph-hdd-vg-sdn
    db: ceph-journal-sdn
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:37:07.029765'
  stderr: |2-
     stderr: lsblk: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph3.example.com] (item={'data': 'ceph-hdd-lv-sdo', 'data_vg': 'ceph-hdd-vg-sdo', 'db': 'ceph-journal-sdo', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdo/ceph-hdd-lv-sdo
  - ceph-ssd-vg-sda/ceph-journal-sdo
  delta: '0:00:01.637701'
  end: '2020-06-09 14:37:10.762005'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdo
    data_vg: ceph-hdd-vg-sdo
    db: ceph-journal-sdo
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:37:09.124304'
  stderr: |2-
     stderr: lsblk: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph3.example.com] (item={'data': 'ceph-hdd-lv-sdp', 'data_vg': 'ceph-hdd-vg-sdp', 'db': 'ceph-journal-sdp', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdp/ceph-hdd-lv-sdp
  - ceph-ssd-vg-sda/ceph-journal-sdp
  delta: '0:00:01.648522'
  end: '2020-06-09 14:37:12.837092'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdp
    data_vg: ceph-hdd-vg-sdp
    db: ceph-journal-sdp
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:37:11.188570'
  stderr: |2-
     stderr: lsblk: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
changed: [ceph1.example.com] => (item={'data': '/dev/sdc', 'crush_device_class': 'ssd'})
failed: [ceph3.example.com] (item={'data': 'ceph-hdd-lv-sdq', 'data_vg': 'ceph-hdd-vg-sdq', 'db': 'ceph-journal-sdq', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdq/ceph-hdd-lv-sdq
  - ceph-ssd-vg-sdb/ceph-journal-sdq
  delta: '0:00:01.646312'
  end: '2020-06-09 14:37:14.926920'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdq
    data_vg: ceph-hdd-vg-sdq
    db: ceph-journal-sdq
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:37:13.280608'
  stderr: |2-
     stderr: lsblk: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph3.example.com] (item={'data': 'ceph-hdd-lv-sdr', 'data_vg': 'ceph-hdd-vg-sdr', 'db': 'ceph-journal-sdr', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdr/ceph-hdd-lv-sdr
  - ceph-ssd-vg-sdb/ceph-journal-sdr
  delta: '0:00:01.607752'
  end: '2020-06-09 14:37:16.971124'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdr
    data_vg: ceph-hdd-vg-sdr
    db: ceph-journal-sdr
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:37:15.363372'
  stderr: |2-
     stderr: lsblk: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph3.example.com] (item={'data': 'ceph-hdd-lv-sds', 'data_vg': 'ceph-hdd-vg-sds', 'db': 'ceph-journal-sds', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sds/ceph-hdd-lv-sds
  - ceph-ssd-vg-sdb/ceph-journal-sds
  delta: '0:00:01.672206'
  end: '2020-06-09 14:37:19.063963'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sds
    data_vg: ceph-hdd-vg-sds
    db: ceph-journal-sds
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:37:17.391757'
  stderr: |2-
     stderr: lsblk: ceph-hdd-vg-sds/ceph-hdd-lv-sds: not a block device
     stderr: blkid: error: ceph-hdd-vg-sds/ceph-hdd-lv-sds: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sds/ceph-hdd-lv-sds
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
changed: [ceph2.example.com] => (item={'data': '/dev/sdd', 'crush_device_class': 'ssd'})
changed: [ceph1.example.com] => (item={'data': '/dev/sdd', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sde', 'crush_device_class': 'ssd'})
changed: [ceph1.example.com] => (item={'data': '/dev/sde', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sdf', 'crush_device_class': 'ssd'})
changed: [ceph1.example.com] => (item={'data': '/dev/sdf', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sdg', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sdh', 'crush_device_class': 'ssd'})
changed: [ceph1.example.com] => (item={'data': '/dev/sdg', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sdi', 'crush_device_class': 'ssd'})
changed: [ceph1.example.com] => (item={'data': '/dev/sdh', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sdj', 'crush_device_class': 'ssd'})
changed: [ceph1.example.com] => (item={'data': '/dev/sdi', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sdk', 'crush_device_class': 'ssd'})
changed: [ceph1.example.com] => (item={'data': '/dev/sdj', 'crush_device_class': 'ssd'})
changed: [ceph2.example.com] => (item={'data': '/dev/sdl', 'crush_device_class': 'ssd'})
failed: [ceph2.example.com] (item={'data': 'ceph-hdd-lv-sdm', 'data_vg': 'ceph-hdd-vg-sdm', 'db': 'ceph-journal-sdm', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdm/ceph-hdd-lv-sdm
  - ceph-ssd-vg-sda/ceph-journal-sdm
  delta: '0:00:22.333781'
  end: '2020-06-09 14:41:17.814731'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdm
    data_vg: ceph-hdd-vg-sdm
    db: ceph-journal-sdm
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:40:55.480950'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
changed: [ceph1.example.com] => (item={'data': '/dev/sdk', 'crush_device_class': 'ssd'})
failed: [ceph2.example.com] (item={'data': 'ceph-hdd-lv-sdn', 'data_vg': 'ceph-hdd-vg-sdn', 'db': 'ceph-journal-sdn', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdn/ceph-hdd-lv-sdn
  - ceph-ssd-vg-sda/ceph-journal-sdn
  delta: '0:00:22.258152'
  end: '2020-06-09 14:41:40.470403'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdn
    data_vg: ceph-hdd-vg-sdn
    db: ceph-journal-sdn
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:41:18.212251'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
changed: [ceph1.example.com] => (item={'data': '/dev/sdl', 'crush_device_class': 'ssd'})
failed: [ceph2.example.com] (item={'data': 'ceph-hdd-lv-sdo', 'data_vg': 'ceph-hdd-vg-sdo', 'db': 'ceph-journal-sdo', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdo/ceph-hdd-lv-sdo
  - ceph-ssd-vg-sda/ceph-journal-sdo
  delta: '0:00:22.231693'
  end: '2020-06-09 14:42:03.145009'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdo
    data_vg: ceph-hdd-vg-sdo
    db: ceph-journal-sdo
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:41:40.913316'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph1.example.com] (item={'data': 'ceph-hdd-lv-sdm', 'data_vg': 'ceph-hdd-vg-sdm', 'db': 'ceph-journal-sdm', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdm/ceph-hdd-lv-sdm
  - ceph-ssd-vg-sda/ceph-journal-sdm
  delta: '0:00:28.032150'
  end: '2020-06-09 14:42:20.095100'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdm
    data_vg: ceph-hdd-vg-sdm
    db: ceph-journal-sdm
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:41:52.062950'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdm/ceph-hdd-lv-sdm
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph2.example.com] (item={'data': 'ceph-hdd-lv-sdp', 'data_vg': 'ceph-hdd-vg-sdp', 'db': 'ceph-journal-sdp', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdp/ceph-hdd-lv-sdp
  - ceph-ssd-vg-sda/ceph-journal-sdp
  delta: '0:00:22.218527'
  end: '2020-06-09 14:42:25.772093'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdp
    data_vg: ceph-hdd-vg-sdp
    db: ceph-journal-sdp
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:42:03.553566'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph1.example.com] (item={'data': 'ceph-hdd-lv-sdn', 'data_vg': 'ceph-hdd-vg-sdn', 'db': 'ceph-journal-sdn', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdn/ceph-hdd-lv-sdn
  - ceph-ssd-vg-sda/ceph-journal-sdn
  delta: '0:00:28.211641'
  end: '2020-06-09 14:42:48.751199'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdn
    data_vg: ceph-hdd-vg-sdn
    db: ceph-journal-sdn
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:42:20.539558'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdn/ceph-hdd-lv-sdn
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph2.example.com] (item={'data': 'ceph-hdd-lv-sdq', 'data_vg': 'ceph-hdd-vg-sdq', 'db': 'ceph-journal-sdq', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdq/ceph-hdd-lv-sdq
  - ceph-ssd-vg-sdb/ceph-journal-sdq
  delta: '0:00:23.113422'
  end: '2020-06-09 14:42:49.346546'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdq
    data_vg: ceph-hdd-vg-sdq
    db: ceph-journal-sdq
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:42:26.233124'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph2.example.com] (item={'data': 'ceph-hdd-lv-sdr', 'data_vg': 'ceph-hdd-vg-sdr', 'db': 'ceph-journal-sdr', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdr/ceph-hdd-lv-sdr
  - ceph-ssd-vg-sdb/ceph-journal-sdr
  delta: '0:00:22.312539'
  end: '2020-06-09 14:43:12.084264'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdr
    data_vg: ceph-hdd-vg-sdr
    db: ceph-journal-sdr
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:42:49.771725'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph1.example.com] (item={'data': 'ceph-hdd-lv-sdo', 'data_vg': 'ceph-hdd-vg-sdo', 'db': 'ceph-journal-sdo', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdo/ceph-hdd-lv-sdo
  - ceph-ssd-vg-sda/ceph-journal-sdo
  delta: '0:00:28.264136'
  end: '2020-06-09 14:43:17.461052'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdo
    data_vg: ceph-hdd-vg-sdo
    db: ceph-journal-sdo
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:42:49.196916'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdo/ceph-hdd-lv-sdo
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph2.example.com] (item={'data': 'ceph-hdd-lv-sds', 'data_vg': 'ceph-hdd-vg-sds', 'db': 'ceph-journal-sds', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sds/ceph-hdd-lv-sds
  - ceph-ssd-vg-sdb/ceph-journal-sds
  delta: '0:00:22.284079'
  end: '2020-06-09 14:43:34.806467'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sds
    data_vg: ceph-hdd-vg-sds
    db: ceph-journal-sds
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:43:12.522388'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sds/ceph-hdd-lv-sds: not a block device
     stderr: blkid: error: ceph-hdd-vg-sds/ceph-hdd-lv-sds: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sds/ceph-hdd-lv-sds
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph1.example.com] (item={'data': 'ceph-hdd-lv-sdp', 'data_vg': 'ceph-hdd-vg-sdp', 'db': 'ceph-journal-sdp', 'db_vg': 'ceph-ssd-vg-sda', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdp/ceph-hdd-lv-sdp
  - ceph-ssd-vg-sda/ceph-journal-sdp
  delta: '0:00:28.192873'
  end: '2020-06-09 14:43:46.098137'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdp
    data_vg: ceph-hdd-vg-sdp
    db: ceph-journal-sdp
    db_vg: ceph-ssd-vg-sda
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:43:17.905264'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdp/ceph-hdd-lv-sdp
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph1.example.com] (item={'data': 'ceph-hdd-lv-sdq', 'data_vg': 'ceph-hdd-vg-sdq', 'db': 'ceph-journal-sdq', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdq/ceph-hdd-lv-sdq
  - ceph-ssd-vg-sdb/ceph-journal-sdq
  delta: '0:00:28.328848'
  end: '2020-06-09 14:44:14.901129'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdq
    data_vg: ceph-hdd-vg-sdq
    db: ceph-journal-sdq
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:43:46.572281'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdq/ceph-hdd-lv-sdq
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph1.example.com] (item={'data': 'ceph-hdd-lv-sdr', 'data_vg': 'ceph-hdd-vg-sdr', 'db': 'ceph-journal-sdr', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sdr/ceph-hdd-lv-sdr
  - ceph-ssd-vg-sdb/ceph-journal-sdr
  delta: '0:00:28.372492'
  end: '2020-06-09 14:44:43.726697'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sdr
    data_vg: ceph-hdd-vg-sdr
    db: ceph-journal-sdr
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:44:15.354205'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr: not a block device
     stderr: blkid: error: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sdr/ceph-hdd-lv-sdr
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
failed: [ceph1.example.com] (item={'data': 'ceph-hdd-lv-sds', 'data_vg': 'ceph-hdd-vg-sds', 'db': 'ceph-journal-sds', 'db_vg': 'ceph-ssd-vg-sdb', 'crush_device_class': 'hdd'}) => changed=true
  ansible_loop_var: item
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.example.com:5000/rhceph/rhceph-4-rhel8:latest
  - lvm
  - zap
  - --destroy
  - ceph-hdd-vg-sds/ceph-hdd-lv-sds
  - ceph-ssd-vg-sdb/ceph-journal-sds
  delta: '0:00:28.498808'
  end: '2020-06-09 14:45:12.676585'
  item:
    crush_device_class: hdd
    data: ceph-hdd-lv-sds
    data_vg: ceph-hdd-vg-sds
    db: ceph-journal-sds
    db_vg: ceph-ssd-vg-sdb
  msg: non-zero return code
  rc: 2
  start: '2020-06-09 14:44:44.177777'
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
     stderr: lsblk: ceph-hdd-vg-sds/ceph-hdd-lv-sds: not a block device
     stderr: blkid: error: ceph-hdd-vg-sds/ceph-hdd-lv-sds: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm zap [-h] [--destroy] [--osd-id OSD_ID]
                               [--osd-fsid OSD_FSID]
                               [DEVICES [DEVICES ...]]
    ceph-volume lvm zap: error: Unable to proceed with non-existing device: ceph-hdd-vg-sds/ceph-hdd-lv-sds
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

PLAY RECAP **********************************************************************************************************************************************************************************************************************************
ceph1.example.com           : ok=34   changed=3    unreachable=0    failed=1    skipped=37   rescued=0    ignored=2
ceph2.example.com           : ok=31   changed=1    unreachable=0    failed=1    skipped=33   rescued=0    ignored=2
ceph3.example.com           : ok=31   changed=1    unreachable=0    failed=1    skipped=33   rescued=0    ignored=3
localhost                  : ok=0    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0


Tuesday 09 June 2020  14:45:11 -0400 (0:08:30.822)       0:09:17.859 **********
===============================================================================
zap and destroy osds created by ceph-volume with lvm_volumes ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- 510.82s
gather monitors facts --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 21.71s
ceph-facts : find a running mon container -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.16s
ceph-facts : set_fact _monitor_address to monitor_interface - ipv4 ------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.15s
disable ceph mgr service ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.09s
remove ceph mgr service -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.91s
ceph-facts : check if it is atomic host ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.67s
get all the running osds ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.67s
ceph-facts : check if podman binary is present --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.66s
ceph-facts : is ceph running already? ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.65s
ceph-facts : generate cluster fsid --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.65s
remove osd mountpoint tree ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.62s
ceph-facts : check if the ceph conf exists ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.58s
ceph-facts : set_fact devices generate device list when osd_auto_discovery ----------------------------------------------------------------------------------------------------------------------------------------------------------- 0.50s
ceph-facts : create a local fetch directory if it does not exist --------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.46s
ceph-facts : set_fact container_exec_cmd --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.42s
ceph-facts : include facts.yml ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.42s
ceph-facts : set_fact _monitor_address to monitor_address_block ipv4 ----------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.40s
ceph-facts : set_fact _monitor_address to monitor_address_block ipv6 ----------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.39s
ceph-facts : set_fact _monitor_address to monitor_address ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.39s


Expected results:
The zap task should ignore (or skip) devices and logical volumes that no longer exist, so the purge completes cleanly on an already-clean node.

Additional info: We are using lvm_volumes rather than devices; this is also a tiered storage setup (HDD data LVs with their DB/journal LVs on SSD).
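
The linked fix (Github ceph-ansible pull 5434, "ceph_volume: make zap function idempotent") makes the zap step tolerate logical volumes that are already gone. A minimal sketch of that guard in shell form, using the vg/lv names from the log above; this is illustrative only, not the actual module code:

  # Zap only if the data LV still exists; 'lvs' exits non-zero when it does not.
  if lvs --noheadings ceph-hdd-vg-sdm/ceph-hdd-lv-sdm >/dev/null 2>&1; then
      ceph-volume lvm zap --destroy ceph-hdd-vg-sdm/ceph-hdd-lv-sdm ceph-ssd-vg-sda/ceph-journal-sdm
  else
      echo "ceph-hdd-vg-sdm/ceph-hdd-lv-sdm not found; already clean, skipping zap"
  fi

With a check like this the task is idempotent: re-running the purge against an already-clean node reports the device as gone instead of failing with rc=2.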

Comment 9 errata-xmlrpc 2020-07-20 14:21:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3003

