Description of problem:
After deleting the pod/PVC that consume the PV provisioned by a LocalVolumeSet, the PV stays in "Released" status and cannot be reused.

Version-Release number of selected component (if applicable):
Cluster version: 4.8.0-0.nightly-2021-03-08-184701
local-storage-operator: 4.7.0-202102110027.p0

How reproducible:
Always

Steps to Reproduce:
1. Install a cluster on AWS
2. Attach one additional volume to one worker
3. Install the local storage operator
4. Create a LocalVolumeSet; the PV is provisioned

$ oc get localvolumeset lvs -o yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: lvs
  namespace: openshift-local-storage
spec:
  deviceInclusionSpec:
    deviceTypes:
    - disk
    - part
    minSize: 1Gi
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - ip-10-0-152-95
        - ip-10-0-162-63
        - ip-10-0-208-202
  storageClassName: local-storage-set-sc
  volumeMode: Filesystem
status:
  conditions:
  - lastTransitionTime: "2021-03-09T12:36:23Z"
    message: 'DiskMaker: Available'
    status: "True"
    type: DaemonSetsAvailable
  - lastTransitionTime: "2021-03-09T12:36:23Z"
    message: Operator reconciled successfully.
    status: "True"
    type: Available
  observedGeneration: 1
  totalProvisionedDeviceCount: 1

$ oc get sc local-storage-set-sc
NAME                   PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-storage-set-sc   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  103s

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS           REASON   AGE
local-pv-acc31e87   2Gi        RWO            Delete           Available           local-storage-set-sc            0s

5. Create a pod/PVC to consume this PV
$ oc create -f temp.yaml
pod/mypod created
persistentvolumeclaim/mypvc created

6. Delete the pod/PVC
$ oc delete -f temp.yaml
pod "mypod" deleted
persistentvolumeclaim "mypvc" deleted

7.
Check the PV status:
$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM         STORAGECLASS           REASON   AGE
local-pv-acc31e87   2Gi        RWO            Delete           Released   wduan/mypvc   local-storage-set-sc            16m

Actual results:
The PV stays in "Released" status.

Expected results:
With the "Delete" reclaimPolicy, the PV should be deleted and provisioned again.

Master Log:

Node Log (of failed PODs):

PV Dump:
$ oc get pv local-pv-acc31e87 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: local-volume-provisioner-ip-10-0-152-95.us-east-2.compute.internal-5a38f4d9-e5fb-4da4-93ca-f4f6de35b975
  creationTimestamp: "2021-03-09T12:37:28Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    kubernetes.io/hostname: ip-10-0-152-95
    storage.openshift.com/device-id: nvme-Amazon_Elastic_Block_Store_vol0aae98a99ce01086e
    storage.openshift.com/device-name: nvme1n1
    storage.openshift.com/owner-kind: LocalVolumeSet
    storage.openshift.com/owner-name: lvs
    storage.openshift.com/owner-namespace: openshift-local-storage
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:pv.kubernetes.io/provisioned-by: {}
        f:labels:
          .: {}
          f:kubernetes.io/hostname: {}
          f:storage.openshift.com/device-id: {}
          f:storage.openshift.com/device-name: {}
          f:storage.openshift.com/owner-kind: {}
          f:storage.openshift.com/owner-name: {}
          f:storage.openshift.com/owner-namespace: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"5a38f4d9-e5fb-4da4-93ca-f4f6de35b975"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:accessModes: {}
        f:capacity:
          .: {}
          f:storage: {}
        f:local:
          .: {}
          f:path: {}
        f:nodeAffinity:
          .: {}
          f:required:
            .: {}
            f:nodeSelectorTerms: {}
        f:persistentVolumeReclaimPolicy: {}
        f:storageClassName: {}
        f:volumeMode: {}
    manager: diskmaker
    operation: Update
    time: "2021-03-09T12:37:28Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-03-09T12:37:28Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:pv.kubernetes.io/bound-by-controller: {}
      f:spec:
        f:claimRef:
          .: {}
          f:apiVersion: {}
          f:kind: {}
          f:name: {}
          f:namespace: {}
          f:resourceVersion: {}
          f:uid: {}
    manager: kube-scheduler
    operation: Update
    time: "2021-03-09T12:38:34Z"
  name: local-pv-acc31e87
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: ip-10-0-152-95.us-east-2.compute.internal
    uid: 5a38f4d9-e5fb-4da4-93ca-f4f6de35b975
  resourceVersion: "49271"
  uid: ee8ed1a3-50c9-45f3-a90a-7d58ee9aa75b
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: mypvc
    namespace: wduan
    resourceVersion: "48751"
    uid: 5b991d8e-10e2-4f0b-a864-c7bd5663a60c
  local:
    path: /mnt/local-storage/local-storage-set-sc/nvme-Amazon_Elastic_Block_Store_vol0aae98a99ce01086e
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-152-95
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage-set-sc
  volumeMode: Filesystem
status:
  phase: Released

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
The diskmaker-manager log repeats with the following contents:

{"level":"info","ts":1615294889.8517015,"logger":"localvolumeset-symlink-controller","msg":"Reconciling LocalVolumeSet","Request.Namespace":"openshift-local-storage","Request.Name":"lvs"}
I0309 13:01:29.852052 1 common.go:334] StorageClass "local-storage-set-sc" configured with MountDir "/mnt/local-storage/local-storage-set-sc", HostDir "/mnt/local-storage/local-storage-set-sc", VolumeMode "Filesystem", FsType "", BlockCleanerCommand ["/scripts/quick_reset.sh"]
{"level":"info","ts":1615294889.8912933,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme0n1","filter.Name":"noChildren"}
{"level":"info","ts":1615294889.8913212,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme0n1p1","filter.Name":"noBiosBootInPartLabel"}
{"level":"info","ts":1615294889.8913405,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme0n1p2","filter.Name":"noFilesystemSignature"}
{"level":"info","ts":1615294889.8913474,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme0n1p3","filter.Name":"noFilesystemSignature"}
{"level":"info","ts":1615294889.8913662,"logger":"localvolumeset-symlink-controller","msg":"filter negative","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme0n1p4","filter.Name":"noFilesystemSignature"}
{"level":"info","ts":1615294889.8914752,"logger":"localvolumeset-symlink-controller","msg":"matched disk","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme1n1"}
{"level":"info","ts":1615294889.8921072,"logger":"localvolumeset-symlink-controller","msg":"provisioning PV","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme1n1"}
{"level":"info","ts":1615294889.8937852,"logger":"localvolumeset-symlink-controller","msg":"creating","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme1n1","pv.Name":"local-pv-acc31e87"}
{"level":"info","ts":1615294889.8942015,"logger":"localvolumeset-symlink-controller","msg":"provisioning succeeded","Request.Namespace":"openshift-local-storage","Request.Name":"lvs","Device.Name":"nvme1n1"}
I0309 13:01:29.894257 1 deleter.go:195] Start cleanup for pv local-pv-acc31e87
I0309 13:01:29.894327 1 deleter.go:275] Deleting PV block volume "local-pv-acc31e87" device hostpath "/mnt/local-storage/local-storage-set-sc/nvme-Amazon_Elastic_Block_Store_vol0aae98a99ce01086e", mountpath "/mnt/local-storage/local-storage-set-sc/nvme-Amazon_Elastic_Block_Store_vol0aae98a99ce01086e"
I0309 13:01:29.897943 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Calling mkfs"
I0309 13:01:29.899997 1 deleter.go:319] Cleanup pv "local-pv-acc31e87": StderrBuf - "mke2fs 1.45.6 (20-Mar-2020)"
I0309 13:01:29.901713 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Creating filesystem with 524288 4k blocks and 131072 inodes"
I0309 13:01:29.901729 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Filesystem UUID: d3f33907-a02b-4ff7-ad22-bfeabb62f9ba"
I0309 13:01:29.901736 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Superblock backups stored on blocks: "
I0309 13:01:29.901745 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "\t32768, 98304, 163840, 229376, 294912"
I0309 13:01:29.901751 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - ""
I0309 13:01:29.901821 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Allocating group tables: 0/16\b\b\b\b\b \b\b\b\b\bdone "
I0309 13:01:29.971568 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Writing inode tables: 0/16\b\b\b\b\b \b\b\b\b\bdone "
I0309 13:01:29.980440 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Writing superblocks and filesystem accounting information: 0/16\b\b\b\b\b \b\b\b\b\bdone"
I0309 13:01:29.980463 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - ""
I0309 13:01:29.980469 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Calling wipefs"
I0309 13:01:29.997985 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "/mnt/local-storage/local-storage-set-sc/nvme-Amazon_Elastic_Block_Store_vol0aae98a99ce01086e: 2 bytes were erased at offset 0x00000438 (ext2): 53 ef"
I0309 13:01:29.998011 1 deleter.go:305] Cleanup pv "local-pv-acc31e87": StdoutBuf - "Quick reset completed"
I0309 13:01:29.998064 1 deleter.go:283] Completed cleanup of pv "local-pv-acc31e87"

The volume is Available in the localvolumediscoveryresult:

$ oc get localvolumediscoveryresult -n openshift-local-storage
- deviceID: /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0aae98a99ce01086e
  fstype: ""
  model: 'Amazon Elastic Block Store '
  path: /dev/nvme1n1
  property: NonRotational
  serial: vol0aae98a99ce01086e
  size: 2147483648
  status:
    state: Available
  type: disk
  vendor: ""
discoveredTimeStamp: "2021-03-09T12:44:43Z"
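For reference, the temp.yaml used in steps 5 and 6 is not attached. A minimal pod/PVC pair that would exercise the local PV could look like the following sketch; the storage class, size, and object names match the outputs above, while the container image, command, and mount path are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  # Matches the LocalVolumeSet's storage class and the 2Gi PV above
  storageClassName: local-storage-set-sc
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    # Image, command, and mountPath are illustrative assumptions
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mypvc
```

Because the storage class uses WaitForFirstConsumer, the PVC only binds once the pod is scheduled onto the node holding the disk.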
Rohan, can you please take a look and assign this to the appropriate person? It might be caused by the daemonset refactoring.
I have been investigating this but forgot to comment here. As a workaround, please try restarting the diskmaker daemons: `oc delete po -l app=diskmaker-manager`
I could not reproduce this after many attempts. It looks like it was fixed by the last PR that touched the deleter: https://github.com/openshift/local-storage-operator/pull/219
Waiting for this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1952820
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438