Bug 1855142 - [Ceph-Ansible][Ceph-containers] "--limit rgws" ceph ansible installation has been failing at "ceph-volume lvm batch to create bluestore osds" RuntimeError: 1 devices were filtered in non-interactive mode, bailing out
Summary: [Ceph-Ansible][Ceph-containers] "--limit rgws" ceph ansible installation has...
Keywords:
Status: CLOSED DUPLICATE of bug 1827349
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 4.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: z2
Target Release: 4.1
Assignee: Christina Meno
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-09 04:48 UTC by ravic
Modified: 2020-07-21 16:11 UTC (History)
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-21 16:11:45 UTC
Embargoed:


Attachments
Installer Logs (242.87 KB, application/zip)
2020-07-09 04:48 UTC, ravic

Description ravic 2020-07-09 04:48:26 UTC
Created attachment 1700382 [details]
Installer Logs

Description of problem:

"--limit rgws" ceph ansible instillation has been failing at "ceph-volume lvm batch to create bluestore osds" task

Version-Release number of selected component (if applicable):

Ansible Machine

NAME="Red Hat Enterprise Linux"
VERSION="8.2 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.2"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.2 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.2:GA"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.2
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.2"


[rhel@admin ceph-ansible]$ uname -a
Linux admin.ceph.local 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Mon Jun 1 20:24:55 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux


[rhel@admin ceph-ansible]$ ansible-playbook --version
ansible-playbook 2.8.12
  config file = /usr/share/ceph-ansible/ansible.cfg
  configured module search path = ['/usr/share/ceph-ansible/library']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 3.6.8 (default, Dec  5 2019, 15:45:45) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]

[rhel@admin ceph-ansible]$ podman version
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.13.4
OS/Arch:            linux/amd64
[rhel@admin ceph-ansible]$



Cluster Machine

NAME="Red Hat Enterprise Linux"
VERSION="8.2 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.2"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.2 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.2:GA"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.2
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.2"



[rhel@osd1 ~]$ uname -a
Linux osd1.ceph.local 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Mon Jun 1 20:24:55 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux


[root@mon1 ~]# podman exec ceph-mon-mon1 ceph -v
ceph version 14.2.8-59.el8cp (53387608e81e6aa2487c952a604db06faa5b2cd0) nautilus (stable)




How reproducible:


Step 1

Installed 3 MON nodes and 4 OSD nodes; each OSD node has 23 SAS3 spinning drives (12 TB) and 1 Intel Optane NVMe for metadata.
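
For reference, that layout would correspond to a group_vars/osds.yml roughly like the one below (a sketch only; it assumes the standard ceph-ansible lvm batch variables "devices" and "bluestore_wal_devices", and the device names are taken from the failing ceph-volume command further down):

# group_vars/osds.yml (sketch, assumed variable names)
osd_objectstore: bluestore
devices:
  - /dev/sdc
  - /dev/sdd
  # ... and so on through /dev/sdy (23 data devices per node)
bluestore_wal_devices:
  - /dev/nvme0n1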

ok: [mon1 -> mon1] => 
  msg:
  - '  cluster:'
  - '    id:     8182b4ca-ecce-49e8-a98d-c430904eaf7a'
  - '    health: HEALTH_OK'
  - ' '
  - '  services:'
  - '    mon: 3 daemons, quorum mon1,mon2,mon3 (age 11m)'
  - '    mgr: mon3(active, since 24s), standbys: mon1, mon2'
  - '    osd: 92 osds: 92 up (since 3m), 92 in (since 3m)'
  - ' '
  - '  data:'
  - '    pools:   0 pools, 0 pgs'
  - '    objects: 0 objects, 0 B'
  - '    usage:   94 GiB used, 1004 TiB / 1004 TiB avail'
  - '    pgs:     '
  - ' '

PLAY RECAP **************************************************************************************************************************************************
admin                      : ok=141  changed=6    unreachable=0    failed=0    skipped=242  rescued=0    ignored=0   
mon1                       : ok=280  changed=26   unreachable=0    failed=0    skipped=413  rescued=0    ignored=0   
mon2                       : ok=231  changed=17   unreachable=0    failed=0    skipped=367  rescued=0    ignored=0   
mon3                       : ok=239  changed=19   unreachable=0    failed=0    skipped=366  rescued=0    ignored=0   
osd1                       : ok=173  changed=17   unreachable=0    failed=0    skipped=269  rescued=0    ignored=0   
osd2                       : ok=162  changed=15   unreachable=0    failed=0    skipped=261  rescued=0    ignored=0   
osd3                       : ok=162  changed=15   unreachable=0    failed=0    skipped=261  rescued=0    ignored=0   
osd4                       : ok=164  changed=15   unreachable=0    failed=0    skipped=259  rescued=0    ignored=0   


INSTALLER STATUS ********************************************************************************************************************************************
Install Ceph Monitor           : Complete (0:01:01)
Install Ceph Manager           : Complete (0:00:57)
Install Ceph OSD               : Complete (0:06:58)
Install Ceph Dashboard         : Complete (0:01:04)
Install Ceph Grafana           : Complete (0:00:31)
Install Ceph Node Exporter     : Complete (0:01:15)


Step 2 

Added 3 Ceph clients with the --limit clients option
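
The invocation for this step was presumably along these lines (a sketch; it assumes the containerized site-container.yml playbook and an inventory group named "clients"):

[rhel@admin ceph-ansible]$ ansible-playbook site-container.yml --limit clients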

PLAY RECAP **************************************************************************************************************************************************
client1                    : ok=136  changed=8    unreachable=0    failed=0    skipped=274  rescued=0    ignored=0   
client2                    : ok=105  changed=4    unreachable=0    failed=0    skipped=226  rescued=0    ignored=0   
client3                    : ok=105  changed=4    unreachable=0    failed=0    skipped=226  rescued=0    ignored=0   


INSTALLER STATUS ********************************************************************************************************************************************
Install Ceph Client            : Complete (0:00:29)
Install Ceph Node Exporter     : Complete (0:00:31)

Step 3

Added RGWs on all 4 OSD nodes with the --limit rgws option; the run failed on each OSD node with the following error at the "ceph-volume lvm batch to create bluestore osds" task.
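
The failing invocation was presumably the same playbook limited to the rgws group (a sketch, same assumptions as in Step 2):

[rhel@admin ceph-ansible]$ ansible-playbook site-container.yml --limit rgws

Since the RGWs are colocated on the OSD nodes, limiting to the rgws group still picks up hosts that are also in the osds group, which would explain why the ceph-osd role (including the "ceph-volume lvm batch" task) runs again on hosts whose OSDs were already deployed in Step 1.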



Actual results:

TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] ****************************************************************************************
Wednesday 08 July 2020  14:40:13 -0400 (0:00:00.899)       0:05:08.220 ******** 

fatal: [osd2]: FAILED! => changed=true 
  cmd:
  - podman
  - run
  - --rm
  - --privileged
  - --net=host
  - --ipc=host
  - --ulimit
  - nofile=1024:4096
  - -v
  - /run/lock/lvm:/run/lock/lvm:z
  - -v
  - /var/run/udev/:/var/run/udev/:z
  - -v
  - /dev:/dev
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /run/lvm/:/run/lvm/
  - -v
  - /var/lib/ceph/:/var/lib/ceph/:z
  - -v
  - /var/log/ceph/:/var/log/ceph/:z
  - --entrypoint=ceph-volume
  - registry.redhat.io/rhceph/rhceph-4-rhel8:latest
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --prepare
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
  - /dev/sdh
  - /dev/sdi
  - /dev/sdj
  - /dev/sdk
  - /dev/sdl
  - /dev/sdm
  - /dev/sdn
  - /dev/sdo
  - /dev/sdp
  - /dev/sdq
  - /dev/sdr
  - /dev/sds
  - /dev/sdt
  - /dev/sdu
  - /dev/sdv
  - /dev/sdw
  - /dev/sdx
  - /dev/sdy
  - --wal-devices
  - /dev/nvme0n1
  - --report
  - --format=json
  msg: non-zero return code
  rc: 1
  stderr: |-
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    WARNING: The same type, major and minor should not be used for multiple devices.
    -->  RuntimeError: 1 devices were filtered in non-interactive mode, bailing out
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
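
The RuntimeError above means that the "lvm batch ... --report" run considered one of the 24 requested devices unavailable (presumably because it already carries LVs from the Step 1 deployment, possibly the shared NVMe WAL device) and, because the run is non-interactive (--yes), it bailed out instead of continuing with a changed layout. One way to see which device is filtered, and why, is to run "ceph-volume inventory" from the same container image on an affected node, for example (a sketch that mirrors the volume mounts of the failing task):

[root@osd2 ~]# podman run --rm --privileged --net=host --ipc=host \
    -v /run/lock/lvm:/run/lock/lvm:z -v /var/run/udev/:/var/run/udev/:z \
    -v /dev:/dev -v /etc/ceph:/etc/ceph:z -v /run/lvm/:/run/lvm/ \
    -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z \
    --entrypoint=ceph-volume registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
    inventory /dev/nvme0n1

The per-device inventory report lists whether the device is available and, if not, the rejection reasons, which should show what the batch run filtered out.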


Expected results:


Should be able to run with "--limit rgws" to add RGW instances on the OSD nodes after a successful initial Ceph cluster deployment.

Additional info:

Comment 1 Guillaume Abrioux 2020-07-09 13:39:31 UTC
That doesn't seem to be a ceph-ansible bug; it looks like a duplicate of bz1854326.

I'm moving this BZ to the Ceph-Volume component.

Comment 2 Vasishta 2020-07-21 16:11:45 UTC
I think this is a duplicate of BZ 1827349.
Closing this BZ as a duplicate.

Please let us know if there are any concerns.

Regards,
Vasishta Shastry
QE, Ceph

*** This bug has been marked as a duplicate of bug 1827349 ***

