Bug 1654011 - ceph-disk+bluestore udev fails to inform partition causing playbook to fail
Summary: ceph-disk+bluestore udev fails to inform partition causing playbook to fail
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Disk
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 3.3
Assignee: Kefu Chai
QA Contact: Vasu Kulkarni
Docs Contact: Aron Gunn
URL:
Whiteboard:
Duplicates: 1657183 (view as bug list)
Depends On:
Blocks: 1648010
 
Reported: 2018-11-27 20:19 UTC by Vasu Kulkarni
Modified: 2019-04-23 19:11 UTC
CC List: 17 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
.A race condition causes `ceph-disk` to fail when running an Ansible playbook
In some cases, `udev` fails to activate block devices in time for OSD activation, causing the OSD to fail on startup. To work around this issue, use the `ceph-volume lvm` command instead of the deprecated `ceph-disk` command. With `ceph-volume lvm`, the OSDs start consistently on reboot.
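The workaround described above can be sketched as follows. This is a minimal sketch, not the exact procedure from this bug: `/dev/sdb` is a placeholder data device, and the commands assume a host that is already part of a deployed Ceph cluster with admin credentials available.

```shell
# Create a new BlueStore OSD with ceph-volume (the replacement for
# the deprecated ceph-disk); /dev/sdb is a hypothetical device path.
ceph-volume lvm create --bluestore --data /dev/sdb

# Inspect the logical volumes and OSD metadata ceph-volume recorded.
ceph-volume lvm list

# Activate all ceph-volume-managed OSDs; the matching systemd units
# run this on reboot, so startup does not depend on udev timing.
ceph-volume lvm activate --all
```

Because `ceph-volume lvm` persists OSD metadata in LVM tags and activates OSDs through systemd units rather than udev rules, it avoids the race condition that causes `ceph-disk` OSDs to fail on boot.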
Clone Of: 1648010
Environment:
Last Closed: 2019-04-23 15:08:31 UTC
Embargoed:




Links
System ID                             Private  Priority  Status  Summary  Last Updated
Github ceph ceph-container pull 1256  0        None      None    None     2018-11-27 20:19:29 UTC

Comment 8 Josh Durgin 2018-12-12 15:49:21 UTC
*** Bug 1657183 has been marked as a duplicate of this bug. ***

