Created attachment 1552681
ceph-volume.log
As per a report from a consultant deploying OSP13/RHCS 3.2 with BlueStore, ceph-volume lvm batch, as executed in a container by ceph-ansible, hit the symptoms of bug 1676612. See the attached ceph-volume log (and the snippet at [2]): the lvs call issued at 19:18 stalls for roughly eight minutes before the "not initialized in udev database" warnings appear at 19:26.
The fixed-in package from bug 1676612, lvm2-2.02.184-1.el7, is not yet included in the latest rhceph/rhceph-3-rhel7 container image [1], currently tagged 3-23.
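For reference, one way to confirm which lvm2 build an image or a running OSD container carries (a rough sketch, assuming docker access to the registry from the host; the OSD container name below is illustrative):

# Query the lvm2 package inside the published image (entrypoint overridden so rpm runs directly)
docker run --rm --entrypoint rpm registry.access.redhat.com/rhceph/rhceph-3-rhel7:3-23 -q lvm2

# Same check against an OSD container already running on the affected node
# (substitute the real container name from `docker ps`)
docker exec <osd-container-name> rpm -q lvm2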
Could a new rhceph/rhceph-3-rhel7 container be released containing lvm2-2.02.184-1.el7 or newer?
John
[1] https://access.redhat.com/containers/?tab=overview#/registry.access.redhat.com/rhceph/rhceph-3-rhel7
[2]
[root@overcloud-computehci-0 heat-admin]# cat /var/log/ceph/ceph-volume.log
[2019-04-04 19:18:33,932][ceph_volume.main][INFO ] Running command: ceph-volume --cluster ceph lvm batch --bluestore --yes --prepare /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/nvme0n1 /dev/nvme1n1 --report --format=json
[2019-04-04 19:18:33,949][ceph_volume.process][INFO ] Running command: /usr/sbin/lvs --noheadings --readonly --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2019-04-04 19:26:46,782][ceph_volume.process][INFO ] stderr WARNING: Device /dev/nvme1n1 not initialized in udev database even after waiting 10000000 microseconds.
[2019-04-04 19:26:46,782][ceph_volume.process][INFO ] stderr WARNING: Device /dev/sda not initialized in udev database even after waiting 10000000 microseconds.
[2019-04-04 19:26:46,782][ceph_volume.process][INFO ] stderr WARNING: Device /dev/sdq not initialized in udev database even after waiting 10000000 microseconds.
[2019-04-04 19:26:46,782][ceph_volume.process][INFO ] stderr WARNING: Device /dev/nvme0n1 not initialized in udev database even after waiting 10000000 microseconds.