Description of problem:
Local Storage Operator PV content is not deleted across repeated delete/re-create cycles of the pods and PVCs that use the PVs.

Version-Release number of selected component (if applicable):
4.10.0-rc.3
local-storage-operator.4.10.0-202202160023

How reproducible:
100%

Steps to Reproduce:
1. On a single-node OpenShift node with LSO installed, create multiple pods and PVCs using PVs created by LSO, as described in the attached ztp_du_local_volumes_content_cleanup_timestamp.yaml manifest.
2. Delete and recreate the objects multiple times:
   oc delete -f ztp_du_local_volumes_content_cleanup_timestamp.yaml
   oc apply -f ztp_du_local_volumes_content_cleanup_timestamp.yaml
3. Re-create the objects without writing to the /data/ volume, per the attached manifest ztp_du_local_volumes_content_cleanup_notimestamp.yaml.
4. Validate on all pods that there is no leftover content from step 2 in the /data directory.

Actual results:
Some of the pods (but not all) show content written by pods from previous iterations:

[kni ~]$ oc -n ztp-testns exec -it hello-world-0 -- ls /data
[kni ~]$ oc -n ztp-testns exec -it hello-world-23 -- ls /data
2022-02-22T14:54:52+0000.txt

Expected results:
An empty /data directory for all the pods.

Additional info:
Attaching must-gather. Please let me know if additional info is required.
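The per-pod check in step 4 can be scripted. Below is a minimal local sketch of the emptiness check; the directory names and the simulated leftover file mirror the output above, and against a real cluster the plain `ls -A` would be replaced by `oc -n ztp-testns exec <pod> -- ls -A /data`. Everything here (the temp directories, the file name) is illustrative, not taken from the cluster.

```shell
#!/bin/sh
# Sketch of the step-4 validation: walk a set of PV mount points and
# report any that still contain files from a previous iteration.
set -eu

# Simulate two PV mount points: one clean, one with leftover content
# (names mirror the pods shown in "Actual results").
workdir=$(mktemp -d)
mkdir -p "$workdir/hello-world-0" "$workdir/hello-world-23"
touch "$workdir/hello-world-23/2022-02-22T14:54:52+0000.txt"

leftovers=0
for vol in "$workdir"/hello-world-*; do
  # On a live cluster this would be:
  #   oc -n ztp-testns exec "$(basename "$vol")" -- ls -A /data
  if [ -n "$(ls -A "$vol")" ]; then
    echo "leftover content in $(basename "$vol"): $(ls -A "$vol")"
    leftovers=$((leftovers + 1))
  fi
done
echo "volumes with leftover content: $leftovers"
rm -rf "$workdir"
```

With the simulated data this flags exactly the one dirty volume; on the cluster, a non-zero count after a fresh re-create reproduces the bug.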
Can you please attach ztp_du_local_volumes_content_cleanup_timestamp.yaml and ztp_du_local_volumes_content_cleanup_notimestamp.yaml? Also the LSO must-gather logs, per https://docs.openshift.com/container-platform/4.9/support/gathering-cluster-data.html#gathering-data-specific-features_gathering-cluster-data (check for the ose-local-storage-mustgather-rhel8 image).

This looks similar to https://bugzilla.redhat.com/show_bug.cgi?id=2032924, but we need logs to prove that.
Created attachment 1863043 [details]
ztp_du_local_volumes_content_cleanup_notimestamp.yaml
Created attachment 1863044 [details]
ztp_du_local_volumes_content_cleanup_timestamp.yaml
We think it's a duplicate of bug 2052756. Please try with the latest 4.10 nightly or with the next 4.9.z and let us know if it's still reproducible there.

*** This bug has been marked as a duplicate of bug 2052756 ***