Bug 2190125 - [GSS][ODF 4.11] Unable to attach or mount volumes [NEEDINFO]
Summary: [GSS][ODF 4.11] Unable to attach or mount volumes
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: csi-driver
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nobody
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-27 08:48 UTC by Rafrojas
Modified: 2023-08-09 16:37 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-16 10:21:58 UTC
Embargoed:
rafrojas: needinfo? (nobody)
rafrojas: needinfo? (hekumar)



Description Rafrojas 2023-04-27 08:48:17 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

When the user tests a feature that rebuilds all pods in the namespace 'cran5', one pod, 'po-cran5-securestorage-0', fails to reach the Running state due to the error 'Unable to attach or mount volumes: unmounted volumes=[secstorage]'.

Version of all relevant components (if applicable):
4.11

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes, it occurs whenever we test this feature on the project.

Is there any workaround available to the best of your knowledge?
Yes, the workaround is:
Find the PVC referenced by the pod:
 Volumes:
   secstorage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  sec-storage-pvc
    ReadOnly:   false
oc get pvc -n cran5 sec-storage-pvc  -o yaml | grep -i volumename
volumeName: pvc-97971d9b-8a75-48f9-850f-bfd49980706a

Delete the related VolumeAttachment:
[core@master2 ~]$ oc get volumeattachments.storage.k8s.io  | grep pvc-97971d9b-8a75-48f9-850f-bfd49980706a
csi-2ff7305f4fbd37d07b336fd3f5c8cbb65410e7d0e24204dc5370b4b8a3c54cd7   openshift-storage.cephfs.csi.ceph.com   pvc-97971d9b-8a75-48f9-850f-bfd49980706a   worker0.hzdc-pz-10-110-10-98.ocp.hz.nsn-rdnet.net    true       21h
[core@master2 ~]$ oc delete volumeattachment csi-2ff7305f4fbd37d07b336fd3f5c8cbb65410e7d0e24204dc5370b4b8a3c54cd7
volumeattachment.storage.k8s.io "csi-2ff7305f4fbd37d07b336fd3f5c8cbb65410e7d0e24204dc5370b4b8a3c54cd7" deleted

The pod then transitions to Running automatically.
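The workaround above can be sketched as a small shell script. The namespace, PVC name, and PV name below are taken from this report; the canned YAML stands in for live `oc get pvc ... -o yaml` output so the lookup step can be shown end to end:

```shell
# Sketch of the workaround using a canned PVC YAML from this report.
# In the cluster, replace the heredoc with:
#   oc get pvc -n cran5 sec-storage-pvc -o yaml
pvc_yaml='
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sec-storage-pvc
spec:
  volumeName: pvc-97971d9b-8a75-48f9-850f-bfd49980706a
'

# Step 1: extract the bound PV name (the "grep -i volumename" step above).
PV=$(printf '%s\n' "$pvc_yaml" | awk '/volumeName:/ {print $2}')
echo "PV: $PV"

# Step 2 (cluster only, shown as comments):
#   oc get volumeattachments.storage.k8s.io | grep "$PV"
#   oc delete volumeattachment <name-from-previous-command>
```

Once the stale VolumeAttachment is deleted, the attach/detach controller recreates it for the node the pod is scheduled on, and the mount proceeds.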

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Every time we rebuild the project.

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 18 Rafrojas 2023-06-16 10:21:36 UTC
Hi Rakshith

  I'll follow up on that Jira bug, thanks!

