Bug 1734612
| Summary: | Local volume recycled again after recreate app. | | |
| --- | --- | --- | --- |
| Product: | OpenShift Container Platform | Reporter: | Liang Xia <lxia> |
| Component: | Storage | Assignee: | Christian Huffman <chuffman> |
| Status: | CLOSED DUPLICATE | QA Contact: | Liang Xia <lxia> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.2.0 | CC: | aos-bugs, aos-storage-staff, bchilds, chaoyang, piqin |
| Target Milestone: | --- | | |
| Target Release: | 4.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-09-16 13:13:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Liang Xia, 2019-07-31 05:59:09 UTC
I've attempted to reproduce this using the following steps:

1. Created an AWS cluster.
2. Deployed the local storage operator to it.
3. Created 5 GB and 10 GB volumes in each availability zone (us-east-2a, us-east-2b, us-east-2c) and attached them to /dev/xvdf and /dev/xvdg.
4. Ensured the PVs were created:

   ```
   $ oc get pv
   NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
   local-pv-27f48162   10Gi       RWO            Delete           Available           local-sc                16m
   local-pv-4dbb0b9    10Gi       RWO            Delete           Available           local-sc                16m
   local-pv-5ad4e023   5Gi        RWO            Delete           Available           local-sc                64s
   local-pv-70a0d1ad   5Gi        RWO            Delete           Available           local-sc                64s
   local-pv-ae271636   5Gi        RWO            Delete           Available           local-sc                55s
   local-pv-f44b845e   10Gi       RWO            Delete           Available           local-sc                4s
   ```

5. In a different terminal, ran `oc get pv -w`:

   ```
   NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
   local-pv-27f48162   10Gi       RWO            Delete           Available           local-sc                16m
   local-pv-4dbb0b9    10Gi       RWO            Delete           Available           local-sc                16m
   local-pv-5ad4e023   5Gi        RWO            Delete           Available           local-sc                64s
   local-pv-70a0d1ad   5Gi        RWO            Delete           Available           local-sc                64s
   local-pv-ae271636   5Gi        RWO            Delete           Available           local-sc                55s
   local-pv-f44b845e   10Gi       RWO            Delete           Available           local-sc                4s
   ```

6. Executed `oc new-project test ; oc new-app mongodb-persistent`:

   ```
   local-pv-70a0d1ad   5Gi   RWO   Delete   Available   test/mongodb   local-sc   79s
   local-pv-70a0d1ad   5Gi   RWO   Delete   Bound       test/mongodb   local-sc   79s
   ```

7. Executed `oc delete project test`:

   ```
   local-pv-70a0d1ad   5Gi   RWO   Delete   Released      test/mongodb   local-sc   104s
   local-pv-70a0d1ad   5Gi   RWO   Delete   Terminating   test/mongodb   local-sc   2m
   local-pv-70a0d1ad   5Gi   RWO   Delete   Terminating   test/mongodb   local-sc   2m
   local-pv-70a0d1ad   5Gi   RWO   Delete   Pending                      local-sc   0s
   local-pv-70a0d1ad   5Gi   RWO   Delete   Available                    local-sc   0s
   ```

8. Executed `oc new-project test ; oc new-app mongodb-persistent` again:

   ```
   local-pv-70a0d1ad   5Gi   RWO   Delete   Available   test/mongodb   local-sc   14s
   local-pv-70a0d1ad   5Gi   RWO   Delete   Bound       test/mongodb   local-sc   14s
   ```

I repeated these steps a couple of times and haven't seen more than a single PV, local-pv-70a0d1ad, recycled. It goes through the Available -> Bound -> Released -> Terminating -> Pending -> Available states, but I don't see any other PVs report Terminating, and I don't see it enter the Terminating state twice. Can you provide additional information on how this was reproduced?

Note: I also tried deleting `dc/mongodb` first to remove the app. The PVC remained bound until the project was deleted; however, it still didn't reproduce the reported issue.

I'm not sure whether these are related to the multiple active schedulers, as stated in https://bugzilla.redhat.com/show_bug.cgi?id=1734673#c11, but those issues were found on the same cluster. I'm working on a 3.11 hot-fix and will try to reproduce and provide the environment for debugging later.

@Liang, have you been able to reproduce this now that the multiple scheduler issue in https://bugzilla.redhat.com/show_bug.cgi?id=1734673 is resolved?

This issue should be caused by the multiple active schedulers. I can reproduce it on the cluster with multiple active schedulers and cannot reproduce it on a new cluster that includes the bug 1734673 fix.

Considering that the fix from bug 1734673 addressed the issue and it can no longer be reproduced, I'm going to close this bug out as a duplicate.

*** This bug has been marked as a duplicate of bug 1734673 ***
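For context, the local PVs in steps 2 through 4 are normally produced by defining a LocalVolume custom resource that the Local Storage Operator consumes. The sketch below is a hypothetical approximation of such a resource, not the manifest actually used in this reproduction: the resource name, namespace, filesystem type, and node selector are assumptions, while the storage class name (local-sc) and the device paths (/dev/xvdf, /dev/xvdg) are taken from the steps above.

```yaml
# Hypothetical LocalVolume resource approximating the setup described above.
# Assumed values: metadata.name, metadata.namespace, fsType, and the node
# selector hostname; local-sc, /dev/xvdf, and /dev/xvdg come from the
# reproduction steps in this bug.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - example-worker-node   # placeholder, not from this bug report
  storageClassDevices:
    - storageClassName: local-sc      # matches the STORAGECLASS column above
      volumeMode: Filesystem
      fsType: xfs
      devicePaths:
        - /dev/xvdf
        - /dev/xvdg
```

With the Delete reclaim policy shown in the `oc get pv` output, the local provisioner deletes the released PV object and creates a fresh one for the same device, which would explain why local-pv-70a0d1ad cycles through Released, Terminating, Pending, and back to Available after the project is deleted.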