Bug 1312190 - heat stack-delete does not clean up CEPH disks
Status: CLOSED DUPLICATE of bug 1377867
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
7.0 (Kilo)
x86_64 Linux
unspecified Severity unspecified
: ---
: 10.0 (Newton)
Assigned To: John Fulton
Yogev Rabl
Depends On:
Blocks:
Reported: 2016-02-25 22:27 EST by Mark Wagner
Modified: 2016-10-18 08:08 EDT (History)
11 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-10-18 08:08:40 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Bugzilla 1370439 None None None 2016-10-12 12:50 EDT
Red Hat Bugzilla 1377867 None None None 2016-10-12 12:55 EDT

Description Mark Wagner 2016-02-25 22:27:23 EST
Description of problem:
When deleting a deployment that uses individual Ceph storage devices, the devices do not get cleaned up. This causes them to retain their Ceph characteristics, which prevents them from being reused in future deployments.

Version-Release number of selected component (if applicable):
heat packages, version 2015.1.2-9.el7ost

How reproducible:
Every time

Steps to Reproduce:
1. Deploy a cloud that uses Ceph on disks other than the default system disk
2. Use `heat stack-delete` to delete the deployment
3. Try to redeploy and observe that the Ceph storage is not used

Actual results:
No Ceph OSDs are created

Expected results:
Fully deployed and functional Ceph storage

Additional info:
Comment 2 Mike Burns 2016-04-07 17:11:06 EDT
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.
Comment 3 John Fulton 2016-10-12 12:50:14 EDT
Mark,

Would you please have a look at this bug which was resolved in OSP10? 

 https://bugzilla.redhat.com/show_bug.cgi?id=1370439

As per the above, devices that retain their Ceph characteristics are now identified; the deployment fails, and the logs indicate why those devices cannot be reused in future deployments. From there, the plan is to give the user a new option, described in:

 https://bugzilla.redhat.com/show_bug.cgi?id=1377867

so that the user can have OSPd clean the disk during the deployment. 

In a situation where one is practicing deployments and rebuilding the overcloud often, this new zap option could simply be left on. If that is your situation, would this address the problem? If so, I would mark this BZ as a duplicate of BZ 1377867.

Until this option is added, our documentation covers how to use a preboot script [1] to do the zap. 

Thanks,
  John

[1] https://access.redhat.com/documentation/en/red-hat-openstack-platform/9/single/red-hat-ceph-storage-for-the-overcloud/#Formatting_Ceph_Storage_Nodes_Disks_to_GPT
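The gist of the workaround in [1] is to destroy any leftover partition data and Ceph signatures on the storage disks before deployment, so the installer sees clean devices. A minimal, safe-to-run sketch of that idea (the documented script uses `sgdisk` against real devices such as `/dev/sdb`; here `dd` on a file-backed image is used as a stand-in, and the device name is an illustrative assumption):

```shell
# Sketch of wiping leftover Ceph signatures so a disk can be reused.
# The documented preboot script [1] reformats the disk to GPT with
# sgdisk; this illustration zeroes the start of a file-backed image
# instead, so it is safe to run anywhere.
disk=$(mktemp)                  # file-backed stand-in for /dev/sdb
truncate -s 64M "$disk"

# Simulate a leftover Ceph signature at the start of the "disk".
printf 'ceph data' | dd of="$disk" conv=notrunc 2>/dev/null

# "Zap": zero the first 10 MiB, destroying partition tables and labels.
dd if=/dev/zero of="$disk" bs=1M count=10 conv=notrunc 2>/dev/null

# Verify: the first bytes are now all zeros (signature is gone).
leftover=$(head -c 9 "$disk" | od -An -tx1 | tr -d ' \n')
rm -f "$disk"
```

On a real node, the equivalent zap would run once per Ceph data disk from a firstboot script, before the overcloud deployment configures the OSDs.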
Comment 4 John Fulton 2016-10-18 08:08:40 EDT
It's been about a week, so I'm closing this as a duplicate, as mentioned in my previous comment.

*** This bug has been marked as a duplicate of bug 1377867 ***
