Bug 1312190 - heat stack-delete does not clean up CEPH disks
Summary: heat stack-delete does not clean up CEPH disks
Keywords:
Status: CLOSED DUPLICATE of bug 1377867
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 10.0 (Newton)
Assignee: John Fulton
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-26 03:27 UTC by Mark Wagner
Modified: 2016-10-18 12:08 UTC
CC: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-10-18 12:08:40 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1370439 0 unspecified CLOSED Puppet should exit with error if disk activate fails 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1377867 0 high CLOSED Prepare OSDs with GPT label to help ensure Ceph deployment will succeed 2023-09-14 03:31:10 UTC

Description Mark Wagner 2016-02-26 03:27:23 UTC
Description of problem:
When deleting a deployment that uses dedicated Ceph storage devices, the devices are not cleaned up. They therefore retain their Ceph on-disk signatures, which prevents them from being reused in future deployments.
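A quick way to confirm the leftover state is to list the residual partition-table and filesystem signatures with `wipefs` in no-act mode (a sketch; the device name `/dev/sdb` is an assumption, substitute the actual Ceph OSD data disk):

```shell
# Inspect, without erasing, the signatures left on a former OSD disk.
# /dev/sdb is an assumed device name -- substitute the real Ceph data disk.
# -n / --no-act only prints what would be wiped; nothing is changed.
wipefs -n /dev/sdb
```

If the disk still carries Ceph characteristics, this typically reports GPT and filesystem signatures; on a truly clean device it prints nothing.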

Version-Release number of selected component (if applicable):
heat packages in 2015.1.2-9.el7ost

How reproducible:
Every time

Steps to Reproduce:
1. Deploy a cloud that places Ceph on disks other than the default system disk
2. Use `heat stack-delete` to delete the deployment
3. Redeploy and observe that the Ceph storage devices are not used

Actual results:
No Ceph OSDs are created

Expected results:
Fully deployed and functional Ceph storage

Additional info:

Comment 2 Mike Burns 2016-04-07 21:11:06 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 3 John Fulton 2016-10-12 16:50:14 UTC
Mark,

Would you please have a look at this bug which was resolved in OSP10? 

 https://bugzilla.redhat.com/show_bug.cgi?id=1370439

As per the above, devices that retain their Ceph characteristics are identified, the deployment fails, and the logs indicate why those devices cannot be reused in future deployments. From there, the plan is to give the user a new option, described in:

 https://bugzilla.redhat.com/show_bug.cgi?id=1377867

so that the user can have OSPd clean the disk during the deployment. 

In a situation where one is practicing deployments and rebuilding the overcloud often, this new zap option could simply be left on. If this is your situation, would this address the problem? If so, I will mark this BZ as a duplicate of BZ 1377867.

Until this option is added, our documentation covers how to use a preboot script [1] to do the zap. 

Thanks,
  John

[1] https://access.redhat.com/documentation/en/red-hat-openstack-platform/9/single/red-hat-ceph-storage-for-the-overcloud/#Formatting_Ceph_Storage_Nodes_Disks_to_GPT
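Until the zap option from BZ 1377867 lands, the preboot approach referenced in [1] can be sketched roughly as a first-boot Heat template. This is an illustration under assumptions, not the documentation's verbatim script: the resource names and the disk-selection logic are hypothetical, and the mount check is a crude way to skip the root disk.

```yaml
heat_template_version: 2014-10-16

description: >
  Sketch (assumption, not verbatim from the docs): first-boot script that
  zaps every non-root disk and writes a fresh GPT label so ceph-disk can
  reuse the device on the next deployment.

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: wipe_disk}

  wipe_disk:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        # For every whole disk, skip it if any of its partitions is mounted
        # (crude root-disk guard), otherwise destroy the old GPT/MBR
        # structures (-Z) and create a clean GPT label (-og).
        for disk in $(lsblk -dno NAME,TYPE | awk '$2=="disk"{print $1}'); do
          if ! mount | grep -q "/dev/${disk}"; then
            sgdisk -Z "/dev/${disk}"
            sgdisk -og "/dev/${disk}"
          fi
        done

outputs:
  OS::stack_id:
    value: {get_resource: userdata}
```

The template would be wired in as the overcloud's first-boot user data (e.g. via a `NodeUserData` resource registry override) so the wipe runs once per node before Ceph deployment.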

Comment 4 John Fulton 2016-10-18 12:08:40 UTC
It's been about a week, so I'm closing this as a duplicate, as mentioned in my previous comment.

*** This bug has been marked as a duplicate of bug 1377867 ***

