Description of problem (please be as detailed as possible and provide log snippets):

OCS 4.5 improves the uninstall experience and simplifies some of the tasks.

Version of all relevant components (if applicable): 4.5

1. At the very minimum, the deletion of the storage classes is now automatic.
2. The node labels and the taints are also removed when the StorageCluster is removed.
3. Rook deletes the datadir on the hosts if a confirmation is provided in the StorageCluster config (see the sketch after this comment).

More changes, if any, will be provided here in the bug.
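For point 3, the confirmation is a label on the StorageCluster (discussed further down in this bug); a minimal sketch, using the command quoted in the steps below:

```
# Mark every StorageCluster in openshift-storage so that Rook wipes the
# datadir (DataDirHostPath contents) when the StorageCluster is deleted.
oc label -n openshift-storage storagecluster --all cleanup.ocs.openshift.io=yes-really-destroy-data
```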
Yes, the cleanup policy in 4.5 does a metadata wipe on the drives, which means they can be re-used to install a cluster, but not all of the data has been removed. It's a quick wipe.
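To illustrate what "quick wipe" means in practice (the exact commands Rook runs are an assumption here, not taken from this bug):

```
# Illustrative sketch of a metadata-only wipe on an OSD device.
# /dev/sdX is a placeholder, not a real device from this bug.
sgdisk --zap-all /dev/sdX   # destroy the GPT/MBR partition tables
wipefs --all /dev/sdX       # clear filesystem and LVM signatures
# Data blocks beyond the metadata regions are left untouched, so the
# disk is reusable but not securely erased.
```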
(In reply to leseb from comment #6)
> Yes, the cleanup policy in 4.5 does a metadata wipe on the drives, which
> means they can be re-used to install a cluster, but not all of the data
> has been removed. It's a quick wipe.

Is the cleanup policy you mentioned part of this?

"""
The StorageCluster should have the label cleanup.ocs.openshift.io="yes-really-destroy-data" set. If this label is set on the StorageCluster before the deletion request, then Rook deletes the datadir automatically.
"""
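A quick way to confirm the label is actually in place before deleting the StorageCluster (a sketch, assuming the default openshift-storage namespace):

```
# Look for cleanup.ocs.openshift.io=yes-really-destroy-data in the LABELS column.
oc get storagecluster -n openshift-storage --show-labels
```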
Thank you, Talur, for the steps. Some comments & queries:

> 1. Insert as step 1:
> =======================================================================================================
> Add the label "cleanup.ocs.openshift.io=yes-really-destroy-data" to the StorageCluster.
> ```
> oc label -n openshift-storage storagecluster --all cleanup.ocs.openshift.io=yes-really-destroy-data
> ```

1a) For LSO deployments, will this script only wipe those disks which were used for OSDs and leave the rest as-is?
1b) The MON uses /var/lib/rook, so +1 that it will delete the DataDirHostPath content.

> 2. Current step #5: Delete the StorageCluster object.

2a) Step #5: Since StorageCluster deletion deletes the SC and label, don't we need to remove the explicit steps 10 and 11?
Current steps 10 and 11:
> Delete the storage classes with an openshift-storage provisioner listed in step 1.
> Unlabel the storage nodes.

2b) If the StorageClass gets deleted before deletion of the namespace (current step #6), we may hit this issue - need to test to confirm. AFAIU the noobaa-db PVC deletion will get stuck in the absence of the SC, resulting in the namespace being stuck in Terminating state (due to the leftover resource). But will test and confirm.

> 3. Current step 9: Wipe the disks for each of the local volumes listed in step 4 so that they can be reused.

Do we need to skip this if we added the new step #1 to use the "cleanup.ocs" label on the StorageCluster? Will it not wipe the disks automatically on StorageCluster deletion? (A manual wipe sketch follows this comment.)
___________________________________________________________________________________________________________

> 4. Current step #4: List and note the backing local volume objects. If no results are found, then skip steps 8 & 9.

Include a command to list the OCS nodes.
AI: Move 9 i) to 4 and add a note in 9 ii) to use the nodes from step 4.
Current 9 i): List the storage nodes.
```
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
```
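If step 9 still turns out to be needed for LSO-backed disks, a manual wipe per node would look roughly like this; the debug-pod approach and the device path are assumptions for illustration, not the documented procedure:

```
# List the storage nodes (current step 9 i).
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

# Open a debug shell on each node and wipe each local volume device
# that backed an OSD. /dev/sdX is a placeholder.
oc debug node/<node-name>
chroot /host
sgdisk --zap-all /dev/sdX
```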
Adding bug 1860431 as a blocker for this BZ, since the merging of the code for that bug will affect the documentation for uninstallation.

The following BZs were targeted for OCS 4.5 and have been moved to OCS 4.6 with the understanding that the documentation will be updated with workarounds:

* Bug 1860418 - OCS 4.5 Uninstall: Deleting StorageCluster leaves Noobaa-db PV in Released state (secret not found)
* Bug 1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining

Talur, please update this BZ with the required workarounds for UI and CLI.
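Until the workarounds are documented, the CLI versions will presumably be along these lines; the resource names are placeholders and this is only a sketch, not the final documented procedure:

```
# Bug 1860418: delete the leftover noobaa-db PV stuck in Released state.
oc delete pv <noobaa-db-pv-name>

# Bug 1860670 (external mode): clear the finalizers that keep the
# CephObjectStoreUser, and hence the namespace, in Terminating state.
oc patch cephobjectstoreuser.ceph.rook.io <user-name> -n openshift-storage \
  --type=merge -p '{"metadata":{"finalizers":null}}'
```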
Sidhant and I verified the content in https://docs.google.com/document/d/1BYMZFdyhXC8FMEe3lKlonKkUDmeDM_JsQ1uzLAP-Gns/edit# and it looks good to us. Please share the preview link for the final review. Thanks for all the help.
*** Bug 1870648 has been marked as a duplicate of this bug. ***
One minor change:

Replace "If you have created any PVCs as a part of configuring the monitoring stack" with "If you have created PVCs as part of configuring the monitoring stack".
(In reply to Neha Berry from comment #27)
> One minor change:
>
> Replace "If you have created any PVCs as a part of configuring the
> monitoring stack" with "If you have created PVCs as part of configuring
> the monitoring stack".

The gdoc is correct, but the preview doc also needs the same change. Please ignore the extra spaces in the above comment.

Step 2 should read: "If you have created PVCs as part of configuring the monitoring stack".
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days