Bug 1906935 - Delete resources when Provisioning CR is deleted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Bare Metal Hardware Provisioning
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Beth White
QA Contact: Sasha Smolyak
URL:
Whiteboard:
Duplicates: 1906934
Depends On:
Blocks:
 
Reported: 2020-12-11 20:41 UTC by sdasu
Modified: 2021-02-24 15:43 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:43:00 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Github openshift/cluster-baremetal-operator pull 71 (closed): Bug 1906935: Delete resources when Provisioning CR is deleted (last updated 2021-01-11 11:21:08 UTC)
Red Hat Product Errata RHSA-2020:5633 (last updated 2021-02-24 15:43:26 UTC)

Description sdasu 2020-12-11 20:41:11 UTC
Description of problem:
When the Provisioning CR is deleted, all resources created by the cluster-baremetal-operator need to be deleted as well.
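The intended cleanup contract can be sketched as a small simulation. This is pure Python, not the operator's actual Go code, and the resource names are illustrative:

```python
# Illustrative model of the cleanup contract: the operator tracks every resource
# it created for the Provisioning CR and removes all of them when the CR itself
# is deleted. Resources it does not own (e.g. its own ServiceAccount) survive.
# This is NOT cluster-baremetal-operator source code; names are illustrative.

class ProvisioningCleanup:
    def __init__(self):
        self.cluster = set()   # everything "in the cluster"
        self.owned = set()     # resources created for the Provisioning CR

    def create_owned(self, name):
        self.owned.add(name)
        self.cluster.add(name)

    def on_cr_deleted(self):
        # Finalizer-style cleanup: remove every resource the operator created.
        while self.owned:
            name = self.owned.pop()
            self.cluster.discard(name)

sim = ProvisioningCleanup()
for res in ("deployment/metal3", "pod/metal3-image-cache",
            "service/metal3-state", "secret/metal3-example"):
    sim.create_owned(res)
sim.cluster.add("serviceaccount/cluster-baremetal-operator")  # not CR-owned

sim.on_cr_deleted()
print(sorted(sim.cluster))  # only the non-owned ServiceAccount remains
```

The point of the model is the ownership split: everything created on behalf of the CR goes away with it, while operator-level resources stay.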

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 sdasu 2020-12-11 21:06:26 UTC
*** Bug 1906934 has been marked as a duplicate of this bug. ***

Comment 4 Sasha Smolyak 2021-01-05 10:55:35 UTC
Test plan:

1. Observe the Provisioning CR:
oc get provisioning provisioning-configuration -o yaml
Save the YAML as new-provisioning.yaml

2. Delete the Provisioning CR:
oc delete provisioning provisioning-configuration

3. Check that CR is deleted:
oc get provisioning	
There are no resources to display

4. Check the pods list in openshift-machine-api namespace:
oc get pods -n openshift-machine-api 	
The metal3 pods (metal3 and metal3-image-cache) switch to Terminating and then disappear

5. Check the clusteroperator:
oc get clusteroperator 	
The baremetal clusteroperator switches to Available: False

6. Check the deployments; the metal3 deployment should be gone:
oc get deploy -n openshift-machine-api 	
There is no metal3 deploy in the list

7. Check the serviceaccount:
oc get sa -n openshift-machine-api 	
cluster-baremetal-operator sa is deleted

8. Observe the metal3-state service 	
The metal3-state service is down

9. Observe the secrets:
oc get secrets 	
Metal3 secrets are deleted

10. Restore the provisioning:
oc apply -f new-provisioning.yaml
The provisioning and all the resources are restored 

The actual results:
1. The metal3-service was not found, so its deletion was not tested.
2. The cluster-baremetal-operator ServiceAccount remains after the Provisioning CR is deleted.
@sadasu, is this OK behavior?

Comment 5 Sasha Smolyak 2021-01-05 16:37:29 UTC
The SA is supposed to remain; the service was tested and does go down. Verified.
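The verified outcome (metal3 resources gone, the operator's own ServiceAccount kept) can be checked mechanically. A hypothetical helper, assuming the plain-text table output of `oc get` is fed in as a string:

```python
# Hypothetical verification helper: scan the text output of
# `oc get pods -n openshift-machine-api` (or deploy/secrets) and list any
# metal3-prefixed resources that survived deletion of the Provisioning CR.

def metal3_leftovers(oc_get_output):
    leftovers = []
    for line in oc_get_output.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("metal3"):
            leftovers.append(fields[0])
    return leftovers

# Sample listing after the Provisioning CR was deleted (names illustrative):
after_delete = """\
NAME                                   READY   STATUS    RESTARTS
cluster-baremetal-operator-6b4c5d      1/1     Running   0
machine-api-controllers-5f6d7e         7/7     Running   0
"""
print(metal3_leftovers(after_delete))  # -> []
```

An empty list means steps 4, 6, and 9 of the test plan pass; any surviving metal3-prefixed name is a leftover to investigate.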

Comment 7 errata-xmlrpc 2021-02-24 15:43:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

