Bug 1654011
Summary: | ceph-disk+bluestore udev fails to inform partition causing playbook to fail | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasu Kulkarni <vakulkar> |
Component: | Ceph-Disk | Assignee: | Kefu Chai <kchai> |
Status: | CLOSED WONTFIX | QA Contact: | Vasu Kulkarni <vakulkar> |
Severity: | urgent | Docs Contact: | Aron Gunn <agunn> |
Priority: | urgent | ||
Version: | 3.2 | CC: | adeza, agunn, anharris, ceph-eng-bugs, evelu, flucifre, gabrioux, hnallurv, jbrier, jdurgin, kdreyer, pasik, prsurve, seb, shan, vakulkar, vashastr |
Target Milestone: | rc | Keywords: | Automation, AutomationBlocker |
Target Release: | 3.3 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Known Issue
Doc Text: |
.A race condition causes `ceph-disk` to fail when running an Ansible playbook
In some cases, `udev` fails to activate block devices in time for OSD activation, causing the OSD to fail when starting up.
To work around this issue, use the `ceph-volume lvm` command instead of the deprecated `ceph-disk` command; a sketch of this workaround follows the table below. With `ceph-volume lvm`, the OSDs start consistently on reboot.
|
Story Points: | --- |
Clone Of: | 1648010 | Environment: | |
Last Closed: | 2019-04-23 15:08:31 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1648010 |
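
The doc text above recommends switching from `ceph-disk` to `ceph-volume lvm`. As a minimal sketch of that workaround (the device path `/dev/sdb` is an assumed example, not a value taken from this bug), an OSD can be prepared and activated in a single step, which avoids the `udev`-driven activation that races here:

```bash
# Minimal sketch of the documented workaround: create a BlueStore OSD with
# ceph-volume lvm instead of ceph-disk. /dev/sdb is an assumed example device.
ceph-volume lvm create --bluestore --data /dev/sdb

# Confirm the OSD was created and is known to ceph-volume.
ceph-volume lvm list
```

When the OSDs are deployed through ceph-ansible, as in the failing playbook, the equivalent is to select the `lvm` OSD scenario in the OSD group variables; the exact variable names depend on the ceph-ansible version in use.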
Comment 8
Josh Durgin
2018-12-12 15:49:21 UTC