Bug 2190125

Summary: [GSS][ODF 4.11] Unable to attach or mount volumes
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Rafrojas <rafrojas>
Component: csi-driver
Assignee: Nobody <nobody>
Status: CLOSED NOTABUG
QA Contact: krishnaram Karthick <kramdoss>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.11
CC: hekumar, hnallurv, ndevos, nobody, ocs-bugs, odf-bz-bot, ofamera, rar
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Flags: rafrojas: needinfo? (nobody)
       rafrojas: needinfo? (hekumar)
       rafrojas: needinfo? (hekumar)
       rafrojas: needinfo? (hekumar)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-06-16 10:21:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Rafrojas 2023-04-27 08:48:17 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

When the user tests a feature that rebuilds all the pods in the namespace 'cran5', one pod, 'po-cran5-securestorage-0', fails to reach the Running state due to the error 'Unable to attach or mount volumes: unmounted volumes=[secstorage]'.
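
The failure can be confirmed from the pod events; a minimal check (a sketch, assuming the pod and namespace names above) is:

oc describe pod -n cran5 po-cran5-securestorage-0
# The Events section is expected to show a FailedMount warning such as:
#   Unable to attach or mount volumes: unmounted volumes=[secstorage]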

Version of all relevant components (if applicable):
4.11

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, it happens whenever we try this feature on the project.

Is there any workaround available to the best of your knowledge?
Yes, the workaround is:
Find the PVC used by the pod:
 Volumes:
   secstorage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  sec-storage-pvc
    ReadOnly:   false
oc get pvc -n cran5 sec-storage-pvc  -o yaml | grep -i volumename
volumeName: pvc-97971d9b-8a75-48f9-850f-bfd49980706a
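
As an alternative (a sketch assuming the same namespace and PVC name), the bound volume name can be read directly with a JSONPath query instead of grepping the YAML:

oc get pvc -n cran5 sec-storage-pvc -o jsonpath='{.spec.volumeName}'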

Delete the related VolumeAttachment for that volume:
[core@master2 ~]$ oc get volumeattachments.storage.k8s.io  | grep pvc-97971d9b-8a75-48f9-850f-bfd49980706a
csi-2ff7305f4fbd37d07b336fd3f5c8cbb65410e7d0e24204dc5370b4b8a3c54cd7   openshift-storage.cephfs.csi.ceph.com   pvc-97971d9b-8a75-48f9-850f-bfd49980706a   worker0.hzdc-pz-10-110-10-98.ocp.hz.nsn-rdnet.net    true       21h
[core@master2 ~]$ oc delete volumeattachment csi-2ff7305f4fbd37d07b336fd3f5c8cbb65410e7d0e24204dc5370b4b8a3c54cd7
volumeattachment.storage.k8s.io "csi-2ff7305f4fbd37d07b336fd3f5c8cbb65410e7d0e24204dc5370b4b8a3c54cd7" deleted

The pod then returns to the Running state automatically.
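
To confirm recovery (a hypothetical check, reusing the names above), watch the pod and verify that a fresh VolumeAttachment is created for the volume:

oc get pod -n cran5 po-cran5-securestorage-0 -w
oc get volumeattachments.storage.k8s.io | grep pvc-97971d9b-8a75-48f9-850f-bfd49980706a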

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Can this issue be reproduced?
Every time we rebuild the project.

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 18 Rafrojas 2023-06-16 10:21:36 UTC
Hi Rakshith

  I'll follow up on that Jira bug, thanks!