Bug 1683891 - OSD crashed when running the ceph-ansible task "use ceph-volume lvm batch to create bluestore osds"
Summary: OSD crashed when running the ceph-ansible task "use ceph-volume lvm batch to create bluestore...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RADOS
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 4.0
Assignee: Neha Ojha
QA Contact: Manohar Murthy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-28 03:44 UTC by acalhoun
Modified: 2020-01-31 12:46 UTC
CC List: 11 users

Fixed In Version: ceph-14.2.1-281.g8cd9d59.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-31 12:45:38 UTC
Target Upstream Version:


Attachments
ceph osd 0 log (34.68 KB, text/plain)
2019-02-28 03:44 UTC, acalhoun
ceph-ansible output file (2.78 MB, text/plain)
2019-02-28 03:45 UTC, acalhoun
ceph-ansible output file for ceph v 2:14.2.0-142.g2f9c072.el8cp (1.67 MB, text/plain)
2019-04-16 18:09 UTC, acalhoun
ceph osd 3 log for ceph v 2:14.2.0-142.g2f9c072.el8cp (101.52 KB, text/plain)
2019-04-16 18:10 UTC, acalhoun
ceph volume log for ceph v 2:14.2.0-142.g2f9c072.el8cp (118.87 KB, text/plain)
2019-04-16 18:10 UTC, acalhoun
jbrier ansible.log (2.42 MB, text/plain)
2019-05-31 19:58 UTC, John Brier


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 38329 0 None None None 2019-02-28 16:49:31 UTC
Ceph Project Bug Tracker 39334 0 None None None 2019-04-16 19:38:27 UTC
Github ceph ceph pull 26698 0 'None' closed common/str_map: fix trim() on empty string 2021-02-03 05:39:17 UTC
Red Hat Product Errata RHBA-2020:0312 0 None None None 2020-01-31 12:46:06 UTC

Description acalhoun 2019-02-28 03:44:51 UTC
Created attachment 1539340 [details]
ceph osd 0 log

Description of problem:
When ceph-ansible runs the task "use ceph-volume lvm batch to create bluestore osds", the OSD crashes.


Version-Release number of selected component (if applicable):
14.1.0-123.ge4b45b9.el8

How reproducible:
always

Steps to Reproduce:
1. ansible-playbook site.yml
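
For reference, a fuller form of the same invocation, run from the usual ceph-ansible checkout; the inventory path and host limit below are illustrative assumptions, not taken from this report:

    # Run the full site playbook. "hosts" and the "osds" group are
    # placeholders; this report only states that site.yml was run.
    cd /usr/share/ceph-ansible
    ansible-playbook -i hosts site.yml --limit osds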

Actual results:
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - /dev/sdb
  - /dev/sdc
  - /dev/sdd
  - /dev/sde
  - /dev/sdf
  - /dev/sdg
  - /dev/sdh

    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 38, in __init__
        self.main(self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
        self.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
        self.strategy.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute
        Create(command).main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 69, in main
        self.create(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
        prepare_step.safe_prepare(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare
        self.prepare()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare
        osd_fsid,
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
        db=db
      File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore
        raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
    RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c7538d56-417c-4ffe-9e5e-33d24310f168 --setuser ceph --setgroup ceph

Expected results:
Successful installation of all OSDs.

Additional info:
Attached the ceph-ansible output and ceph-osd.0.log.

Comment 1 acalhoun 2019-02-28 03:45:27 UTC
Created attachment 1539341 [details]
ceph-ansible output file

Comment 2 acalhoun 2019-04-16 14:50:23 UTC
Observing a similar issue in Ceph 14.2.0-142.g2f9c072.el8cp.

Repos used:

# Note: also requires RHEL 8 nightly BaseOS and Appstream repos

[ceph-4.0-rhel-8]
name = ceph-4.0-rhel-8
baseurl = http://file.rdu.redhat.com/~kdreyer/scratch/ceph-4.0-rhel-8
enabled = 1
gpgcheck = 0

[ceph-14.2.0-142.g2f9c072.el8]
name = ceph-14.2.0-142.g2f9c072.el8
baseurl = http://download.eng.bos.redhat.com/rcm-guest/ceph-drops/rhceph-4.0-scratch/ceph-14.2.0-142.g2f9c072.el8/
enabled = 1
gpgcheck = 0

[ansible-2.7-rhel-8]
name = ansible-2.7-rhel-8
baseurl = http://download.devel.redhat.com/nightly/rhel-8/ANSIBLE/latest-ANSIBLE-2.7-RHEL-8/compose/Base/$basearch/os/
enabled = 1
gpgcheck = 0

The ceph-ansible error message is below.

fatal: [e23-h05-740xd.alias.bos.scalelab.redhat.com]: FAILED! => changed=true 
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --osds-per-device
  - '2'
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
  - /dev/nvme4n1
  delta: '0:00:09.730642'
  end: '2019-04-15 20:29:48.344852'
  msg: non-zero return code
  rc: 1
  start: '2019-04-15 20:29:38.614210'
  stderr: |-
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 38, in __init__
        self.main(self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
        self.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
        self.strategy.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute
        Create(command).main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 69, in main
        self.create(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
        prepare_step.safe_prepare(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare
        self.prepare()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare
        osd_fsid,
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
        db=db
      File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore
        raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
    RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 176a7dfb-118c-4512-973f-4dfee0f3f133 --setuser ceph --setgroup ceph
  stderr_lines:
  - 'Traceback (most recent call last):'
  - '  File "/sbin/ceph-volume", line 11, in <module>'
  - '    load_entry_point(''ceph-volume==1.0.0'', ''console_scripts'', ''ceph-volume'')()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 38, in __init__'
  - '    self.main(self.argv)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc'
  - '    return f(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148, in main'
  - '    terminal.dispatch(self.mapper, subcommand_args)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch'
  - '    instance.main()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main'
  - '    terminal.dispatch(self.mapper, self.argv)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch'
  - '    instance.main()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root'
  - '    return func(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main'
  - '    self.execute()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute'
  - '    self.strategy.execute()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute'
  - '    Create(command).main()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 69, in main'
  - '    self.create(args)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root'
  - '    return func(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create'
  - '    prepare_step.safe_prepare(args)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare'
  - '    self.prepare()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root'
  - '    return func(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare'
  - '    osd_fsid,'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore'
  - '    db=db'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore'
  - '    raise RuntimeError(''Command failed with exit code %s: %s'' % (returncode, '' ''.join(command)))'
  - 'RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 176a7dfb-118c-4512-973f-4dfee0f3f133 --setuser ceph --setgroup ceph'
  stdout: |-
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-df79fafb-3423-418e-9dd4-203ad89a3c8c /dev/nvme0n1
     stdout: Physical volume "/dev/nvme0n1" successfully created.
     stdout: Volume group "ceph-df79fafb-3423-418e-9dd4-203ad89a3c8c" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-46c8b636-066c-40a2-8385-ee72f3374e35 /dev/nvme1n1
     stdout: Physical volume "/dev/nvme1n1" successfully created.
     stdout: Volume group "ceph-46c8b636-066c-40a2-8385-ee72f3374e35" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-5bae316d-204a-4999-b89b-e61b5e139003 /dev/nvme2n1
     stdout: Physical volume "/dev/nvme2n1" successfully created.
     stdout: Volume group "ceph-5bae316d-204a-4999-b89b-e61b5e139003" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-f012746d-ab81-47ef-ad24-3ecdc9b7b988 /dev/nvme3n1
     stdout: Physical volume "/dev/nvme3n1" successfully created.
     stdout: Volume group "ceph-f012746d-ab81-47ef-ad24-3ecdc9b7b988" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-97cf8ed5-3111-43fc-bc21-ed862cb865b7 /dev/nvme4n1
     stdout: Physical volume "/dev/nvme4n1" successfully created.
     stdout: Volume group "ceph-97cf8ed5-3111-43fc-bc21-ed862cb865b7" successfully created
    Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-e9c294fb-40d2-491a-ab2c-1006bb2c015e ceph-df79fafb-3423-418e-9dd4-203ad89a3c8c
     stdout: Logical volume "osd-data-e9c294fb-40d2-491a-ab2c-1006bb2c015e" created.
    Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-058f52d8-9792-4724-b47c-7634a6fe505d ceph-df79fafb-3423-418e-9dd4-203ad89a3c8c
     stdout: Logical volume "osd-data-058f52d8-9792-4724-b47c-7634a6fe505d" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 176a7dfb-118c-4512-973f-4dfee0f3f133
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
    Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-0
    Running command: /bin/chown -h ceph:ceph /dev/ceph-df79fafb-3423-418e-9dd4-203ad89a3c8c/osd-data-e9c294fb-40d2-491a-ab2c-1006bb2c015e
    Running command: /bin/chown -R ceph:ceph /dev/dm-3
    Running command: /bin/ln -s /dev/ceph-df79fafb-3423-418e-9dd4-203ad89a3c8c/osd-data-e9c294fb-40d2-491a-ab2c-1006bb2c015e /var/lib/ceph/osd/ceph-0/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQC16bRcEqMlARAAhEP2GJRmYxldeg1JGvKdhQ==
     stdout: creating /var/lib/ceph/osd/ceph-0/keyring
    added entity osd.0 auth(key=AQC16bRcEqMlARAAhEP2GJRmYxldeg1JGvKdhQ==)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 176a7dfb-118c-4512-973f-4dfee0f3f133 --setuser ceph --setgroup ceph
     stdout: /usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = long unsigned int; _Alloc = mempool::pool_allocator<(mempool::pool_index_t)1, long unsigned int>; std::vector<_Tp, _Alloc>::reference = long unsigned int&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
     stderr: 2019-04-15 20:29:47.207 7fc7a7ec7080 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
     stderr: *** Caught signal (Aborted) **
     stderr: in thread 7fc7a7ec7080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fc7a4a58d80]
     stderr: 2: (gsignal()+0x10f) [0x7fc7a373393f]
     stderr: 3: (abort()+0x127) [0x7fc7a371dc95]
     stderr: 4: (()+0x65ca48) [0x5597e21cea48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x5597e27cca87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x5597e2676ae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x5597e26985b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x5597e26cb9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x5597e26db64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x5597e21eed7e]
     stderr: 11: (main()+0x1bd1) [0x5597e20e70c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fc7a371f813]
     stderr: 13: (_start()+0x2e) [0x5597e21cd2fe]
     stderr: 2019-04-15 20:29:47.728 7fc7a7ec7080 -1 *** Caught signal (Aborted) **
     stderr: in thread 7fc7a7ec7080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fc7a4a58d80]
     stderr: 2: (gsignal()+0x10f) [0x7fc7a373393f]
     stderr: 3: (abort()+0x127) [0x7fc7a371dc95]
     stderr: 4: (()+0x65ca48) [0x5597e21cea48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x5597e27cca87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x5597e2676ae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x5597e26985b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x5597e26cb9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x5597e26db64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x5597e21eed7e]
     stderr: 11: (main()+0x1bd1) [0x5597e20e70c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fc7a371f813]
     stderr: 13: (_start()+0x2e) [0x5597e21cd2fe]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
     stderr: -387> 2019-04-15 20:29:47.207 7fc7a7ec7080 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
     stderr: 0> 2019-04-15 20:29:47.728 7fc7a7ec7080 -1 *** Caught signal (Aborted) **
     stderr: in thread 7fc7a7ec7080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fc7a4a58d80]
     stderr: 2: (gsignal()+0x10f) [0x7fc7a373393f]
     stderr: 3: (abort()+0x127) [0x7fc7a371dc95]
     stderr: 4: (()+0x65ca48) [0x5597e21cea48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x5597e27cca87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x5597e2676ae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x5597e26985b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x5597e26cb9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x5597e26db64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x5597e21eed7e]
     stderr: 11: (main()+0x1bd1) [0x5597e20e70c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fc7a371f813]
     stderr: 13: (_start()+0x2e) [0x5597e21cd2fe]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
     stderr: -387> 2019-04-15 20:29:47.207 7fc7a7ec7080 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
     stderr: 0> 2019-04-15 20:29:47.728 7fc7a7ec7080 -1 *** Caught signal (Aborted) **
     stderr: in thread 7fc7a7ec7080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fc7a4a58d80]
     stderr: 2: (gsignal()+0x10f) [0x7fc7a373393f]
     stderr: 3: (abort()+0x127) [0x7fc7a371dc95]
     stderr: 4: (()+0x65ca48) [0x5597e21cea48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x5597e27cca87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x5597e2676ae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x5597e26985b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x5597e26cb9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x5597e26db64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x5597e21eed7e]
     stderr: 11: (main()+0x1bd1) [0x5597e20e70c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fc7a371f813]
     stderr: 13: (_start()+0x2e) [0x5597e21cd2fe]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
    --> Was unable to complete a new OSD, will rollback changes
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
     stderr: purged osd.0
  stdout_lines: <omitted>
fatal: [e23-h07-740xd.alias.bos.scalelab.redhat.com]: FAILED! => changed=true 
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --osds-per-device
  - '2'
  - /dev/nvme0n1
  - /dev/nvme1n1
  - /dev/nvme2n1
  - /dev/nvme3n1
  - /dev/nvme4n1
  delta: '0:00:09.712007'
  end: '2019-04-15 20:29:48.428104'
  msg: non-zero return code
  rc: 1
  start: '2019-04-15 20:29:38.716097'
  stderr: |-
    Traceback (most recent call last):
      File "/sbin/ceph-volume", line 11, in <module>
        load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 38, in __init__
        self.main(self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch
        instance.main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main
        self.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute
        self.strategy.execute()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute
        Create(command).main()
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 69, in main
        self.create(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
        prepare_step.safe_prepare(args)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare
        self.prepare()
      File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare
        osd_fsid,
      File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
        db=db
      File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore
        raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
    RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 0709649d-fcbf-463b-ab1a-3b425f0647ce --setuser ceph --setgroup ceph
  stderr_lines:
  - 'Traceback (most recent call last):'
  - '  File "/sbin/ceph-volume", line 11, in <module>'
  - '    load_entry_point(''ceph-volume==1.0.0'', ''console_scripts'', ''ceph-volume'')()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 38, in __init__'
  - '    self.main(self.argv)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc'
  - '    return f(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 148, in main'
  - '    terminal.dispatch(self.mapper, subcommand_args)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch'
  - '    instance.main()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 40, in main'
  - '    terminal.dispatch(self.mapper, self.argv)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 182, in dispatch'
  - '    instance.main()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root'
  - '    return func(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 325, in main'
  - '    self.execute()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 288, in execute'
  - '    self.strategy.execute()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py", line 124, in execute'
  - '    Create(command).main()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 69, in main'
  - '    self.create(args)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root'
  - '    return func(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create'
  - '    prepare_step.safe_prepare(args)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 219, in safe_prepare'
  - '    self.prepare()'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root'
  - '    return func(*a, **kw)'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 320, in prepare'
  - '    osd_fsid,'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore'
  - '    db=db'
  - '  File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 430, in osd_mkfs_bluestore'
  - '    raise RuntimeError(''Command failed with exit code %s: %s'' % (returncode, '' ''.join(command)))'
  - 'RuntimeError: Command failed with exit code 250: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 0709649d-fcbf-463b-ab1a-3b425f0647ce --setuser ceph --setgroup ceph'
  stdout: |-
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-7b6a9073-b1aa-4c2d-a858-409baa41a82f /dev/nvme0n1
     stdout: Physical volume "/dev/nvme0n1" successfully created.
     stdout: Volume group "ceph-7b6a9073-b1aa-4c2d-a858-409baa41a82f" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-219812d7-b6f2-4787-aea1-5e18ced21e45 /dev/nvme1n1
     stdout: Physical volume "/dev/nvme1n1" successfully created.
     stdout: Volume group "ceph-219812d7-b6f2-4787-aea1-5e18ced21e45" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-f242a35c-2588-4ee9-8e98-7a615e6ff69b /dev/nvme2n1
     stdout: Physical volume "/dev/nvme2n1" successfully created.
     stdout: Volume group "ceph-f242a35c-2588-4ee9-8e98-7a615e6ff69b" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-7dddbd61-d3b3-43c3-bf44-207a9d7ca02b /dev/nvme3n1
     stdout: Physical volume "/dev/nvme3n1" successfully created.
     stdout: Volume group "ceph-7dddbd61-d3b3-43c3-bf44-207a9d7ca02b" successfully created
    Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-ea490de0-29e1-4f6d-8f31-0b8a74d80bfa /dev/nvme4n1
     stdout: Physical volume "/dev/nvme4n1" successfully created.
     stdout: Volume group "ceph-ea490de0-29e1-4f6d-8f31-0b8a74d80bfa" successfully created
    Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-47050f44-364b-4e98-9b1c-63cefefa5f34 ceph-7b6a9073-b1aa-4c2d-a858-409baa41a82f
     stdout: Logical volume "osd-data-47050f44-364b-4e98-9b1c-63cefefa5f34" created.
    Running command: /usr/sbin/lvcreate --yes -l 372 -n osd-data-160c96dc-77c5-4393-ac2d-7fb67d5b3474 ceph-7b6a9073-b1aa-4c2d-a858-409baa41a82f
     stdout: Logical volume "osd-data-160c96dc-77c5-4393-ac2d-7fb67d5b3474" created.
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0709649d-fcbf-463b-ab1a-3b425f0647ce
    Running command: /bin/ceph-authtool --gen-print-key
    Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
    Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-3
    Running command: /bin/chown -h ceph:ceph /dev/ceph-7b6a9073-b1aa-4c2d-a858-409baa41a82f/osd-data-47050f44-364b-4e98-9b1c-63cefefa5f34
    Running command: /bin/chown -R ceph:ceph /dev/dm-3
    Running command: /bin/ln -s /dev/ceph-7b6a9073-b1aa-4c2d-a858-409baa41a82f/osd-data-47050f44-364b-4e98-9b1c-63cefefa5f34 /var/lib/ceph/osd/ceph-3/block
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
     stderr: got monmap epoch 1
    Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQC16bRcoEBEBhAAhah0ZtZYX3JLFQ4pm2s1kQ==
     stdout: creating /var/lib/ceph/osd/ceph-3/keyring
    added entity osd.3 auth(key=AQC16bRcoEBEBhAAhah0ZtZYX3JLFQ4pm2s1kQ==)
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
    Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
    Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 0709649d-fcbf-463b-ab1a-3b425f0647ce --setuser ceph --setgroup ceph
     stdout: /usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = long unsigned int; _Alloc = mempool::pool_allocator<(mempool::pool_index_t)1, long unsigned int>; std::vector<_Tp, _Alloc>::reference = long unsigned int&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
     stderr: 2019-04-15 20:29:47.325 7fa51397b080 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
     stderr: *** Caught signal (Aborted) **
     stderr: in thread 7fa51397b080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fa51050cd80]
     stderr: 2: (gsignal()+0x10f) [0x7fa50f1e793f]
     stderr: 3: (abort()+0x127) [0x7fa50f1d1c95]
     stderr: 4: (()+0x65ca48) [0x55cf68372a48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x55cf68970a87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x55cf6881aae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x55cf6883c5b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x55cf6886f9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x55cf6887f64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x55cf68392d7e]
     stderr: 11: (main()+0x1bd1) [0x55cf6828b0c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fa50f1d3813]
     stderr: 13: (_start()+0x2e) [0x55cf683712fe]
     stderr: 2019-04-15 20:29:47.845 7fa51397b080 -1 *** Caught signal (Aborted) **
     stderr: in thread 7fa51397b080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fa51050cd80]
     stderr: 2: (gsignal()+0x10f) [0x7fa50f1e793f]
     stderr: 3: (abort()+0x127) [0x7fa50f1d1c95]
     stderr: 4: (()+0x65ca48) [0x55cf68372a48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x55cf68970a87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x55cf6881aae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x55cf6883c5b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x55cf6886f9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x55cf6887f64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x55cf68392d7e]
     stderr: 11: (main()+0x1bd1) [0x55cf6828b0c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fa50f1d3813]
     stderr: 13: (_start()+0x2e) [0x55cf683712fe]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
     stderr: -387> 2019-04-15 20:29:47.325 7fa51397b080 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
     stderr: 0> 2019-04-15 20:29:47.845 7fa51397b080 -1 *** Caught signal (Aborted) **
     stderr: in thread 7fa51397b080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fa51050cd80]
     stderr: 2: (gsignal()+0x10f) [0x7fa50f1e793f]
     stderr: 3: (abort()+0x127) [0x7fa50f1d1c95]
     stderr: 4: (()+0x65ca48) [0x55cf68372a48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x55cf68970a87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x55cf6881aae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x55cf6883c5b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x55cf6886f9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x55cf6887f64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x55cf68392d7e]
     stderr: 11: (main()+0x1bd1) [0x55cf6828b0c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fa50f1d3813]
     stderr: 13: (_start()+0x2e) [0x55cf683712fe]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
     stderr: -387> 2019-04-15 20:29:47.325 7fa51397b080 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
     stderr: 0> 2019-04-15 20:29:47.845 7fa51397b080 -1 *** Caught signal (Aborted) **
     stderr: in thread 7fa51397b080 thread_name:ceph-osd
     stderr: ceph version 14.2.0-142-g2f9c072 (2f9c0720b5aed4c9e25e8b050e71856df0a986ad) nautilus (stable)
     stderr: 1: (()+0x12d80) [0x7fa51050cd80]
     stderr: 2: (gsignal()+0x10f) [0x7fa50f1e793f]
     stderr: 3: (abort()+0x127) [0x7fa50f1d1c95]
     stderr: 4: (()+0x65ca48) [0x55cf68372a48]
     stderr: 5: (BitmapAllocator::init_add_free(unsigned long, unsigned long)+0x857) [0x55cf68970a87]
     stderr: 6: (BlueStore::_open_alloc()+0x193) [0x55cf6881aae3]
     stderr: 7: (BlueStore::_open_db_and_around(bool)+0xa6) [0x55cf6883c5b6]
     stderr: 8: (BlueStore::_fsck(bool, bool)+0x587) [0x55cf6886f9d7]
     stderr: 9: (BlueStore::mkfs()+0x141f) [0x55cf6887f64f]
     stderr: 10: (OSD::mkfs(CephContext*, ObjectStore*, uuid_d, int)+0x1ae) [0x55cf68392d7e]
     stderr: 11: (main()+0x1bd1) [0x55cf6828b0c1]
     stderr: 12: (__libc_start_main()+0xf3) [0x7fa50f1d3813]
     stderr: 13: (_start()+0x2e) [0x55cf683712fe]
     stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
    --> Was unable to complete a new OSD, will rollback changes
    Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.3 --yes-i-really-mean-it
     stderr: purged osd.3
  stdout_lines: <omitted>

Comment 3 acalhoun 2019-04-16 18:09:38 UTC
Created attachment 1555627 [details]
ceph-ansible output file for ceph v 2:14.2.0-142.g2f9c072.el8cp

Comment 4 acalhoun 2019-04-16 18:10:16 UTC
Created attachment 1555628 [details]
ceph osd 3 log for ceph v 2:14.2.0-142.g2f9c072.el8cp

Comment 5 acalhoun 2019-04-16 18:10:56 UTC
Created attachment 1555629 [details]
ceph volume log for ceph v 2:14.2.0-142.g2f9c072.el8cp

Comment 6 Neha Ojha 2019-04-16 19:38:28 UTC
I have created https://tracker.ceph.com/issues/39334 to track this issue upstream. 
Alex, let's use the tracker as a forum to get this issue figured out. Could you please provide the details asked for in https://tracker.ceph.com/issues/39334#note-1?

Comment 9 John Brier 2019-05-31 19:58:57 UTC
Created attachment 1575827 [details]
jbrier ansible.log

attaching the ansible.log so it's not just on my webspace.

Comment 11 Ben England 2019-06-03 22:33:43 UTC
Today I got farther with ceph-ansible; I will provide a doc showing how I tested it on a single node. It got through the install and ran rados bench, but failed to mount CephFS. I will file a separate BZ on that.

Comment 12 Ben England 2019-06-03 22:56:59 UTC
Actually, even CephFS worked; I just had to set the pool size and min_size to 1. So I guess this is fixed.
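
For anyone reproducing this on a small test cluster, these are the stock commands implied by the comment above; the pool names are placeholders for whatever CephFS data/metadata pools the deployment created:

    # Allow the CephFS pools to go active with a single replica.
    # "cephfs_data"/"cephfs_metadata" are assumed names; list the
    # actual pools with "ceph osd pool ls" first.
    ceph osd pool set cephfs_data size 1
    ceph osd pool set cephfs_data min_size 1
    ceph osd pool set cephfs_metadata size 1
    ceph osd pool set cephfs_metadata min_size 1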

Comment 13 Giridhar Ramaraju 2019-08-05 13:09:16 UTC
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate.

Regards,
Giri

Comment 14 Giridhar Ramaraju 2019-08-05 13:10:35 UTC
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate.

Regards,
Giri

Comment 20 errata-xmlrpc 2020-01-31 12:45:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312

