Bug 1744958
| Summary: | Pod fails to move to state "Running" due to incorrect "FailedMount" event | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Shyamsundar <srangana> |
| Component: | Storage | Assignee: | Fabio Bertinatto <fbertina> |
| Status: | CLOSED DUPLICATE | QA Contact: | Liang Xia <lxia> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.0 | CC: | aos-bugs, aos-storage-staff, bchilds, hchiramm, jstrunk, mrajanna |
| Target Milestone: | --- | | |
| Target Release: | 4.3.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | ocs-monkey | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-29 14:47:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description Shyamsundar 2019-08-23 11:23:56 UTC

Fabio Bertinatto (comment #2):

The last log message from the Kubelet was:

> Aug 22 14:16:37 ip-10-0-149-116 hyperkube[1163]: I0822 14:16:37.603104 1163 operation_generator.go:623] MountVolume.MountDevice succeeded for volume "pvc-fdd30dfe-c4e6-11e9-926c-02d05bd2f570" (UniqueName: "kubernetes.io/csi/rook-ceph.rbd.csi.ceph.com^0001-0009-rook-ceph-0000000000000001-fde9dd57-c4e6-11e9-ad2b-0a580a83000d") pod "osio-worker-375766658-7f4cf9698c-mjtgp" (UID: "643b767a-c4e7-11e9-a719-0ac717fcefd8") device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-fdd30dfe-c4e6-11e9-926c-02d05bd2f570/globalmount"

The next message should have been something along the lines of:

> MountVolume.SetUp succeeded for volume...

Since that MountDevice message was the last one shown in the output of journalctl, I believe the MountVolume.SetUp call was still running by the time the logs were taken. That is where the file ownership is set, so I believe this is a duplicate of bug #1745773.

Can you confirm it's a duplicate so we can close this?

Shyamsundar:

(In reply to Fabio Bertinatto from comment #2)
> Can you confirm it's a duplicate so we can close this?

By the looks of it, I did not wait long enough to see whether the pod eventually came up. It also seems related to the other bug. Closing this as a duplicate; I will reopen/rekindle the conversation if I see a pod stuck for a much longer duration.

Thanks for taking a look at this.

*** This bug has been marked as a duplicate of bug 1745773 ***
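For reference, a minimal sketch of how to check from the kubelet journal whether the SetUp phase has finished on the node. The node name below is simply the one from the log line above, and `oc adm node-logs` assumes an OpenShift 4.x cluster where that command is available; otherwise run `journalctl -u kubelet` on the node directly:

```sh
# Node name taken from the kubelet log line above; substitute your own.
NODE=ip-10-0-149-116

# The two stages of the mount operation in the kubelet journal:
#   MountVolume.MountDevice - stages the device (NodeStageVolume for CSI)
#   MountVolume.SetUp       - per-pod mount; also where the fsGroup
#                             ownership/permission change is applied
oc adm node-logs "$NODE" -u kubelet | grep -E 'MountVolume\.(MountDevice|SetUp)'
```

If only the "MountVolume.MountDevice succeeded" line appears with no matching "MountVolume.SetUp succeeded", SetUp (including the recursive ownership change, which can be slow for volumes with many files) is most likely still in progress rather than failed.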