Description of problem:
If a node previously ran Ceph and a fresh Ceph node is deployed over it, the deploy fails. The current workaround is to manually boot each node and "ceph-disk zap /dev/sdFOO" the disks before the deploy can succeed again.

Version-Release number of selected component (if applicable):
Director GA w/ 0day

How reproducible:
Every time

Steps to Reproduce:
1. Deploy the overcloud with 1 or more Ceph nodes
2. Delete the stack
3. Deploy again with the same configuration, reusing the hardware from the previous deploy

Actual results:
The deploy fails to create the Ceph nodes.

Expected results:
The deploy succeeds, overwriting the old Ceph disks.

Additional info:
If zapping by default is too large a change from existing behavior, consider adding a ForceDiskZap parameter to storage-environment.yml that zaps and re-reads the partition table when provisioning the nodes. This has affected PoC deploys at several customer sites.
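For reference, the manual workaround described above can be sketched roughly as the following shell snippet. This is an illustrative sketch only, not part of Director: the device names (sdb, sdc) are placeholders for the actual Ceph OSD disks on each node, and the DRY_RUN guard is an added safety convention so the destructive commands are only printed unless explicitly enabled.

```shell
#!/bin/sh
# Sketch of the current manual workaround: zap the stale Ceph partition
# tables on each reused node before redeploying the overcloud.
# Device names below are hypothetical placeholders.
DRY_RUN=${DRY_RUN:-1}   # default to printing the commands, not running them

zap_disks() {
    for dev in "$@"; do
        if [ "$DRY_RUN" -eq 1 ]; then
            echo "would run: ceph-disk zap /dev/$dev"
        else
            ceph-disk zap "/dev/$dev"   # destroys the old partition table
            partprobe "/dev/$dev"       # ask the kernel to re-read it
        fi
    done
}

zap_disks sdb sdc
```

Running this on each Ceph node between stack deletion and redeploy is what operators currently have to do by hand; a ForceDiskZap-style parameter would fold the same step into provisioning.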
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
*** This bug has been marked as a duplicate of bug 1256103 ***
*** This bug has been marked as a duplicate of bug 1377867 ***