Bug 1722567 - [RFE] provide simple template "CephDestroyExistingData" option to zeroize ceph disks if data/partitions already exist
Keywords:
Status: CLOSED DUPLICATE of bug 1613918
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ceph-ansible
Version: 13.0 (Queens)
Hardware: All
OS: All
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Guillaume Abrioux
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-20 16:22 UTC by Joe Antkowiak
Modified: 2019-06-26 16:32 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-26 12:48:36 UTC
Target Upstream Version:
Embargoed:



Description Joe Antkowiak 2019-06-20 16:22:22 UTC
Description of problem:

Could we provide a parameter to pass to ceph-ansible, something like "CephDestroyExistingData", that, when set to true, would automatically zeroize the first 512 MB of each device used for Ceph?

This would avoid the extra time spent wiping drives and running node cleaning when going through multiple deployment attempts.
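
For illustration, a minimal Python sketch of what such a "CephDestroyExistingData" option might do under the hood. This is not an existing ceph-ansible feature; the device list is a placeholder, the 512 MB figure comes from the request above, and it must be run as root against disks you actually intend to wipe:

#!/usr/bin/env python3
# Hypothetical sketch only: zeroize the first 512 MB of each listed device,
# roughly what a "CephDestroyExistingData" flag would have to do.
import os

DEVICES = ["/dev/sdb", "/dev/sdc"]   # placeholder OSD device list
CHUNK = 4 * 1024 * 1024              # write zeros in 4 MiB chunks
TOTAL = 512 * 1024 * 1024            # first 512 MB, per the request

def zeroize(device):
    zeros = bytes(CHUNK)
    # "r+b" opens the block device read/write without attempting truncation
    with open(device, "r+b") as dev:
        written = 0
        while written < TOTAL:
            n = min(CHUNK, TOTAL - written)
            dev.write(zeros[:n])
            written += n
        dev.flush()
        os.fsync(dev.fileno())   # make sure the zeros actually hit the disk

if __name__ == "__main__":
    for dev in DEVICES:
        zeroize(dev)
        print("zeroized first 512 MB of %s" % dev)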

Comment 1 Giulio Fidente 2019-06-20 16:47:27 UTC
In OSP13 it is possible to instruct Ironic to erase only the disks' metadata [1].

1. https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/bare_metal_provisioning/sect-configure#manual_node_cleaning
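
For reference, the metadata-only erase in the linked doc is the erase_devices_metadata clean step run as a manual clean. A minimal sketch using openstacksdk (the cloud entry and node name are placeholders, and the node must already be in the "manageable" state):

import openstack

conn = openstack.connect(cloud="undercloud")   # placeholder cloud entry

# Manual cleaning with only the metadata erase step, per the linked doc
clean_steps = [{"interface": "deploy", "step": "erase_devices_metadata"}]

node = conn.baremetal.find_node("node-0")      # placeholder node name
conn.baremetal.set_node_provision_state(node, "clean",
                                        clean_steps=clean_steps)

# Wait for the node to return to "manageable" once cleaning finishes
conn.baremetal.wait_for_nodes_provision_state([node], "manageable")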

Comment 2 John Fulton 2019-06-26 12:48:36 UTC

*** This bug has been marked as a duplicate of bug 1613918 ***

Comment 3 Joe Antkowiak 2019-06-26 16:32:57 UTC
Thanks, I was not aware of the metadata-only option.

