Description of problem:
The local volume is recycled when the app (pod/PVC) is deleted. This is correct behavior. But it is recycled again when the app (pod/PVC) is recreated.

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-07-28-222114
local-storage-operator.v4.2.0

How reproducible:
Always

Steps to Reproduce:
1. Deploy local-storage-operator.
2. Make sure there are at least 2 PVs from local volumes:
   $ oc get pv
3. Create a new app:
   $ oc new-project test ; oc new-app mongodb-persistent
4. Delete the app:
   $ oc delete project test
5. Check that the PVs are Available:
   $ oc get pv
6. Recreate the app:
   $ oc new-project test ; oc new-app mongodb-persistent
7. Watch the PV status.

Actual results:
The PV is recycled again.

Expected results:
The PV should only be recycled once.

Additional info:
The PV statuses are correct after the app is created the first time.

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS          REASON   AGE
local-pv-158dfe47   2Gi        RWO            Delete           Available                    local-block-sc                 16m
local-pv-7f58a50f   1Gi        RWO            Delete           Available                    local-block-sc                 16m
local-pv-c692d3f2   2Gi        RWO            Delete           Available                    local-filesystem-sc            13s
local-pv-f012ba9e   1Gi        RWO            Delete           Bound         test/mongodb   local-filesystem-sc            9m3s

$ oc delete project test
project.project.openshift.io "test" deleted

$ oc get pv -w
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS          REASON   AGE
local-pv-158dfe47   2Gi        RWO            Delete           Available                    local-block-sc                 17m
local-pv-7f58a50f   1Gi        RWO            Delete           Available                    local-block-sc                 17m
local-pv-c692d3f2   2Gi        RWO            Delete           Available                    local-filesystem-sc            80s
local-pv-f012ba9e   1Gi        RWO            Delete           Bound         test/mongodb   local-filesystem-sc            10m
local-pv-f012ba9e   1Gi        RWO            Delete           Released      test/mongodb   local-filesystem-sc            10m
local-pv-f012ba9e   1Gi        RWO            Delete           Terminating   test/mongodb   local-filesystem-sc            10m
local-pv-f012ba9e   1Gi        RWO            Delete           Terminating   test/mongodb   local-filesystem-sc            10m
local-pv-f012ba9e   1Gi        RWO            Delete           Pending                      local-filesystem-sc            8s
local-pv-f012ba9e   1Gi        RWO            Delete           Available                    local-filesystem-sc            8s

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS          REASON   AGE
local-pv-158dfe47   2Gi        RWO            Delete           Available                    local-block-sc                 18m
local-pv-7f58a50f   1Gi        RWO            Delete           Available                    local-block-sc                 18m
local-pv-c692d3f2   2Gi        RWO            Delete           Available                    local-filesystem-sc            110s
local-pv-f012ba9e   1Gi        RWO            Delete           Available                    local-filesystem-sc            20s

$ oc new-project test
......
$ oc new-app mongodb-persistent
......

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS          REASON   AGE
local-pv-158dfe47   2Gi        RWO            Delete           Available                    local-block-sc                 18m
local-pv-7f58a50f   1Gi        RWO            Delete           Available                    local-block-sc                 18m
local-pv-c692d3f2   2Gi        RWO            Delete           Available                    local-filesystem-sc            2m25s
local-pv-f012ba9e   1Gi        RWO            Delete           Available                    local-filesystem-sc            55s

======================================================
= Here, the PV local-pv-f012ba9e is recycled again.  =
======================================================

$ oc get pv -w
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS          REASON   AGE
local-pv-158dfe47   2Gi        RWO            Delete           Available                    local-block-sc                 18m
local-pv-7f58a50f   1Gi        RWO            Delete           Available                    local-block-sc                 18m
local-pv-c692d3f2   2Gi        RWO            Delete           Bound         test/mongodb   local-filesystem-sc            2m37s
local-pv-f012ba9e   1Gi        RWO            Delete           Available     test/mongodb   local-filesystem-sc            67s
local-pv-f012ba9e   1Gi        RWO            Delete           Released      test/mongodb   local-filesystem-sc            70s
local-pv-f012ba9e   1Gi        RWO            Delete           Terminating   test/mongodb   local-filesystem-sc            88s
local-pv-f012ba9e   1Gi        RWO            Delete           Terminating   test/mongodb   local-filesystem-sc            88s
local-pv-f012ba9e   1Gi        RWO            Delete           Pending                      local-filesystem-sc            8s
local-pv-f012ba9e   1Gi        RWO            Delete           Available                    local-filesystem-sc            8s

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS          REASON   AGE
local-pv-158dfe47   2Gi        RWO            Delete           Available                    local-block-sc                 20m
local-pv-7f58a50f   1Gi        RWO            Delete           Available                    local-block-sc                 20m
local-pv-c692d3f2   2Gi        RWO            Delete           Bound         test/mongodb   local-filesystem-sc            3m43s
local-pv-f012ba9e   1Gi        RWO            Delete           Available                    local-filesystem-sc            43s
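To confirm mechanically that a PV is being recycled more than once, the `oc get pv -w` transcripts above can be parsed and checked. The following is a minimal sketch (not part of the original report) that counts how many times each PV transitions into the Terminating phase in a saved watch log; the function name and the parsing-by-whitespace assumption are mine.

```python
from collections import Counter

def count_terminating(watch_output: str) -> Counter:
    """Count transitions into the Terminating phase per PV in an
    `oc get pv -w` transcript. Each data line starts with the PV name,
    followed by capacity, access modes, reclaim policy, and status."""
    hits = Counter()
    prev_status = {}
    for line in watch_output.splitlines():
        fields = line.split()
        # Skip blank lines and the header line.
        if len(fields) < 5 or fields[0] == "NAME":
            continue
        name, status = fields[0], fields[4]
        # Count only transitions *into* Terminating, not repeated
        # watch events while the PV stays in that phase.
        if status == "Terminating" and prev_status.get(name) != "Terminating":
            hits[name] += 1
        prev_status[name] = status
    return hits
```

Running this over each watch session above shows local-pv-f012ba9e entering Terminating once per session; two sessions (one delete, one recreate) means the PV was recycled twice, matching the reported behavior.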
I've attempted to reproduce this using the following steps:

1. Created an AWS cluster.
2. Deployed the local storage operator to it.
3. Created 5 GB and 10 GB volumes in each availability zone (us-east-2a, us-east-2b, us-east-2c), and attached these as /dev/xvdf and /dev/xvdg.
4. Ensured the PVs were created:

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS   REASON   AGE
local-pv-27f48162   10Gi       RWO            Delete           Available                    local-sc                16m
local-pv-4dbb0b9    10Gi       RWO            Delete           Available                    local-sc                16m
local-pv-5ad4e023   5Gi        RWO            Delete           Available                    local-sc                64s
local-pv-70a0d1ad   5Gi        RWO            Delete           Available                    local-sc                64s
local-pv-ae271636   5Gi        RWO            Delete           Available                    local-sc                55s
local-pv-f44b845e   10Gi       RWO            Delete           Available                    local-sc                4s

5. In a different terminal, ran `oc get pv -w`:

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM          STORAGECLASS   REASON   AGE
local-pv-27f48162   10Gi       RWO            Delete           Available                    local-sc                16m
local-pv-4dbb0b9    10Gi       RWO            Delete           Available                    local-sc                16m
local-pv-5ad4e023   5Gi        RWO            Delete           Available                    local-sc                64s
local-pv-70a0d1ad   5Gi        RWO            Delete           Available                    local-sc                64s
local-pv-ae271636   5Gi        RWO            Delete           Available                    local-sc                55s
local-pv-f44b845e   10Gi       RWO            Delete           Available                    local-sc                4s

6. Executed `oc new-project test ; oc new-app mongodb-persistent`. The watch showed:

local-pv-70a0d1ad   5Gi        RWO            Delete           Available     test/mongodb   local-sc                79s
local-pv-70a0d1ad   5Gi        RWO            Delete           Bound         test/mongodb   local-sc                79s

7. Executed `oc delete project test`. The watch showed:

local-pv-70a0d1ad   5Gi        RWO            Delete           Released      test/mongodb   local-sc                104s
local-pv-70a0d1ad   5Gi        RWO            Delete           Terminating   test/mongodb   local-sc                2m
local-pv-70a0d1ad   5Gi        RWO            Delete           Terminating   test/mongodb   local-sc                2m
local-pv-70a0d1ad   5Gi        RWO            Delete           Pending                      local-sc                0s
local-pv-70a0d1ad   5Gi        RWO            Delete           Available                    local-sc                0s

8. Executed `oc new-project test ; oc new-app mongodb-persistent` again. The watch showed:

local-pv-70a0d1ad   5Gi        RWO            Delete           Available     test/mongodb   local-sc                14s
local-pv-70a0d1ad   5Gi        RWO            Delete           Bound         test/mongodb   local-sc                14s

I repeated these steps a couple of times and haven't seen more than a single PV, local-pv-70a0d1ad, recycled. It goes through the Available -> Bound -> Released -> Terminating -> Pending -> Available states, but I don't see any other PVs report Terminating, and I don't see it enter the Terminating state twice. Can you provide additional information regarding how this was reproduced?

Note: I also tried deleting `dc/mongodb` first to delete the app. The PVC remained bound until the project was deleted; however, this still didn't reproduce the reported issue.
I'm not sure whether this is related to the multiple-schedulers issue, as stated in https://bugzilla.redhat.com/show_bug.cgi?id=1734673#c11, but both issues were found on the same cluster. I'm working on a 3.11 hot-fix and will try to reproduce this and provide the environment for debugging later.
@Liang, have you been able to reproduce this now that the multiple-schedulers issue in https://bugzilla.redhat.com/show_bug.cgi?id=1734673 is resolved?
This issue appears to be caused by multiple active schedulers. I can reproduce it on the cluster with multiple active schedulers, but I cannot reproduce it on a new cluster that includes the fix for bug 1734673.
Considering that the fix from bug 1734673 addressed the issue, and it can no longer be reproduced, I'm going to close this bug out as a duplicate.

*** This bug has been marked as a duplicate of bug 1734673 ***