Red Hat Bugzilla – Bug 1257307
[RFE] Support ceph-disk zapping drives when deploying over existing Ceph nodes
Last modified: 2016-09-23 17:35:32 EDT
Description of problem:
If a node has had Ceph on it before and you deploy a fresh Ceph node over it, the deploy fails. To remedy this, we currently have to manually boot each node and run "ceph-disk zap /dev/sdFOO" on each disk before the deploy can succeed again.
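A minimal sketch of the manual workaround described above, assuming the OSD data disks are /dev/sdb and /dev/sdc (the DISKS list is an assumption; substitute the actual devices for your hardware). The commands are printed rather than executed so the loop can be reviewed before running it for real:

```shell
# Hypothetical workaround sketch: zap every previously-used Ceph data disk
# on a node before redeploying. DISKS is an assumption for illustration.
DISKS="/dev/sdb /dev/sdc"
for disk in $DISKS; do
  # "ceph-disk zap" destroys the partition table and Ceph metadata on $disk.
  # Drop the leading 'echo' to actually run it.
  echo "ceph-disk zap $disk"
done
```

Note this has to be repeated on every Ceph node by hand, which is exactly the toil this RFE asks the deploy tooling to absorb.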
Version-Release number of selected component (if applicable):
Director GA w/ 0day
Steps to Reproduce:
1. Deploy Overcloud w/ 1 or more ceph nodes
2. Delete the stack
3. Deploy again with the same configuration, reusing the hardware from the last deploy
Actual results:
Deploy fails to create ceph nodes.

Expected results:
Deploy succeeds, overwriting the old ceph disks.
If doing this by default is too much of a change from existing behavior, consider adding a ForceDiskZap parameter to storage-environment.yml that zaps the disks and re-reads the partition table when provisioning the nodes.
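A sketch of how the proposed opt-in might look in an environment file, assuming ForceDiskZap is implemented as an ordinary Heat parameter (ForceDiskZap is the name suggested in this report, not an existing TripleO option):

```yaml
# Hypothetical environment file fragment; ForceDiskZap does not exist yet
# and is only the interface proposed in this RFE.
parameter_defaults:
  ForceDiskZap: true
```

Passing such a file with -e at deploy time would let operators opt into destructive zapping explicitly, keeping the default behavior unchanged.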
This has affected PoC deploys at several customer sites.
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
*** This bug has been marked as a duplicate of bug 1256103 ***
*** This bug has been marked as a duplicate of bug 1377867 ***