Bug 1711200
Summary: | Container fails to start with "no space left on device" even though there is enough storage available | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Elvir Kuric <ekuric> |
Component: | Containers | Assignee: | Giuseppe Scrivano <gscrivan> |
Status: | CLOSED ERRATA | QA Contact: | Elvir Kuric <ekuric> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 4.1.0 | CC: | aos-bugs, aos-storage-staff, dwalsh, ebenahar, eparis, jokerman, jsafrane, mbukatov, mmccomas, mpatel, ppitonak, pthomas, sponnaga, trankin, tsmetana, umohnani, wjiang, xtian |
Target Milestone: | --- | ||
Target Release: | 4.2.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: |
Cause:
The "rshared" mount propagation option could cause /sys to be recursively mounted on top of itself.
Consequence:
The container fails to start with "no space left on device".
Fix:
Prevent recursive /sys mounts from being stacked on top of each other.
Result:
Containers run correctly with "rshared: true".
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2019-10-16 06:28:56 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
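The failure mode described in the Doc Text above can be illustrated with a node-side check. This is a sketch, not part of the original report: it assumes a Linux host with /proc mounted, and simply counts how often each mount point appears in the process's mount table. With the bug, "/sys" would appear stacked many times; on a healthy host every mount point appears only a handful of times.

```shell
#!/bin/sh
# Illustrative sketch (not from the original report): list the five most
# frequently occurring mount points in this process's mount table.
# Field 5 of /proc/self/mountinfo is the mount point.
awk '{ print $5 }' /proc/self/mountinfo | sort | uniq -c | sort -rn | head -5
```

With the recursive-mount bug, /sys would dominate this list with hundreds or thousands of entries, eventually exhausting kernel mount-table resources so that mount(2) fails with ENOSPC, which surfaces as the "no space left on device" error at container start.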
Description
Elvir Kuric 2019-05-17 08:03:13 UTC
It looks more like a container runtime related problem if it can't mount stuff into the container's rootfs.

What version of cri-o is this on? Anything that is not 1.13.6 or 1.13.9 has some storage issues.

(In reply to Urvashi Mohnani from comment #2)
> What version of cri-o is this on? Anything that is not 1.13.6 or 1.13.9 has
> some storage issues.

From 4.1.0-0.nightly-2019-04-22-005054, the node reports:

    cri-o-1.13.6-4.rhaos4.1.gita4b40b7.el8.x86_64
    criu-3.10-7.el8.x86_64
    cri-tools-1.13.0-1.rhaos4.1.gitc06001f.el8.x86_64

Can we access the cluster?

*** Bug 1734493 has been marked as a duplicate of this bug. ***

Hi ekuric, please help check with cri-o-1.13.10-0.2.dev.rhaos4.1.git3253f9f. I tried to follow the doc https://rook.io/docs/rook/master/ceph-csi-drivers.html, but got lots of issues.

(In reply to weiwei jiang from comment #23)
> Hi ekuric, help have a check with cri-o-1.13.10-0.2.dev.rhaos4.1.git3253f9f,
> I tried to follow the doc
> https://rook.io/docs/rook/master/ceph-csi-drivers.html,
> but got lots of issues.

I am not sure I follow what the issue here might be. Are you having trouble setting up Ceph with CSI following https://rook.io/docs/rook/master/ceph-csi-drivers.html?

Verified on 4.2.0-0.nightly-2019-09-05-234433
> oc get all -n rook-ceph
NAME READY STATUS RESTARTS AGE
pod/csi-cephfsplugin-dxzd7 3/3 Running 0 7m6s
pod/csi-cephfsplugin-gthb7 3/3 Running 0 7m6s
pod/csi-cephfsplugin-nxwb5 3/3 Running 0 7m6s
pod/csi-cephfsplugin-provisioner-b66d48bc8-jvffx 4/4 Running 0 7m6s
pod/csi-cephfsplugin-provisioner-b66d48bc8-k6k6s 4/4 Running 0 7m6s
pod/csi-cephfsplugin-ssrr9 3/3 Running 0 7m6s
pod/csi-rbdplugin-8gdjx 3/3 Running 0 7m6s
pod/csi-rbdplugin-d7pn4 3/3 Running 0 7m6s
pod/csi-rbdplugin-f4cnf 3/3 Running 0 7m6s
pod/csi-rbdplugin-kcn5k 3/3 Running 0 7m6s
pod/csi-rbdplugin-provisioner-95dd85d6-5cmsc 5/5 Running 0 7m6s
pod/csi-rbdplugin-provisioner-95dd85d6-bsvcq 5/5 Running 0 7m6s
pod/rook-ceph-agent-gkf44 1/1 Running 0 7m6s
pod/rook-ceph-agent-kdrrk 1/1 Running 0 7m6s
pod/rook-ceph-agent-rf65d 1/1 Running 0 7m6s
pod/rook-ceph-agent-wxcwn 1/1 Running 0 7m6s
pod/rook-ceph-mon-a-7b596b8467-x9tn7 1/1 Running 0 6m16s
pod/rook-ceph-mon-b-74975dbbd9-fqr5r 1/1 Running 0 6m4s
pod/rook-ceph-mon-c-5fbb68788c-wv2zq 1/1 Running 0 5m44s
pod/rook-ceph-operator-8494966654-jqbb7 1/1 Running 0 8m25s
pod/rook-discover-d45q4 1/1 Running 0 8m2s
pod/rook-discover-k9d89 1/1 Running 0 8m2s
pod/rook-discover-sgtxs 1/1 Running 0 8m2s
pod/rook-discover-xlghm 1/1 Running 0 8m2s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/csi-cephfsplugin-metrics ClusterIP 172.30.234.134 <none> 8080/TCP,8081/TCP 7m2s
service/csi-rbdplugin-metrics ClusterIP 172.30.165.89 <none> 8080/TCP,8081/TCP 7m2s
service/rook-ceph-mon-a ClusterIP 172.30.163.117 <none> 6789/TCP,3300/TCP 6m12s
service/rook-ceph-mon-b ClusterIP 172.30.14.84 <none> 6789/TCP,3300/TCP 6m2s
service/rook-ceph-mon-c ClusterIP 172.30.221.181 <none> 6789/TCP,3300/TCP 5m45s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/csi-cephfsplugin 4 4 4 4 4 <none> 7m3s
daemonset.apps/csi-rbdplugin 4 4 4 4 4 <none> 7m3s
daemonset.apps/rook-ceph-agent 4 4 4 4 4 <none> 7m3s
daemonset.apps/rook-discover 4 4 4 4 4 <none> 7m59s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/csi-cephfsplugin-provisioner 2/2 2 2 7m3s
deployment.apps/csi-rbdplugin-provisioner 2/2 2 2 7m3s
deployment.apps/rook-ceph-mon-a 1/1 1 1 6m13s
deployment.apps/rook-ceph-mon-b 1/1 1 1 6m1s
deployment.apps/rook-ceph-mon-c 1/1 1 1 5m41s
deployment.apps/rook-ceph-operator 1/1 1 1 13m
NAME DESIRED CURRENT READY AGE
replicaset.apps/csi-cephfsplugin-provisioner-b66d48bc8 2 2 2 7m10s
replicaset.apps/csi-rbdplugin-provisioner-95dd85d6 2 2 2 7m10s
replicaset.apps/rook-ceph-mon-a-7b596b8467 1 1 1 6m20s
replicaset.apps/rook-ceph-mon-b-74975dbbd9 1 1 1 6m8s
replicaset.apps/rook-ceph-mon-c-5fbb68788c 1 1 1 5m48s
replicaset.apps/rook-ceph-operator-8494966654 1 1 1 13m
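The `oc get all` output above confirms the Rook/Ceph pods are running. A complementary node-level check (a sketch under the assumption of shell access to a node, for example via `oc debug node/<name>`; the command itself is not from the original report) is to confirm that /sys appears in the mount table only a small, stable number of times even while pods that use shared mount propagation, such as the rook-ceph-agent DaemonSet, are running:

```shell
#!/bin/sh
# Sketch of a node-level verification: count how many times /sys is
# mounted according to this process's mount table. With the fix applied,
# the count stays small and stable; prints 0 if no /sys entry is visible
# (e.g. inside a minimal container).
count=$(awk '$5 == "/sys" { n++ } END { print n + 0 }' /proc/self/mountinfo)
echo "/sys is mounted $count time(s)"
```

Running this before and after creating the Rook pods should show no growth in the count; an ever-increasing number would indicate the recursive-mount behavior this bug fixed.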
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.