Bug 1711200 - Container fails to start with "no space left on device" even when there is enough storage available
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Containers
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Giuseppe Scrivano
QA Contact: Elvir Kuric
URL:
Whiteboard:
Duplicates: 1734493
Depends On:
Blocks:
 
Reported: 2019-05-17 08:03 UTC by Elvir Kuric
Modified: 2023-09-14 05:28 UTC
CC List: 18 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: With rshared mount propagation, /sys could be recursively mounted on top of itself. Consequence: Containers fail to start with "no space left on device". Fix: Prevent recursive /sys mounts from stacking on top of each other. Result: Containers run correctly with "rshared: true".
Clone Of:
Environment:
Last Closed: 2019-10-16 06:28:56 UTC
Target Upstream Version:
Embargoed:
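The Cause and Fix described in the Doc Text concern mount propagation state, which can be inspected read-only on a live Linux node: the optional fields of /proc/self/mountinfo carry a "shared:N" tag for any mount in a shared (rshared) peer group. A minimal diagnostic sketch, assuming a Linux host:

```shell
# Print the optional fields (e.g. "shared:5") of every mount table entry
# whose mount point is exactly /sys. Field 5 of mountinfo is the mount
# point; optional fields start at field 7 and end at the "-" separator.
sys_propagation=$(awk '$5 == "/sys" { for (i = 7; $i != "-"; i++) printf "%s ", $i; print "" }' /proc/self/mountinfo)
echo "propagation flags on /sys mounts: ${sys_propagation}"
```

On an affected node this lists many stacked /sys entries, each in its own shared peer group.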


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2922 0 None None None 2019-10-16 06:29:11 UTC

Description Elvir Kuric 2019-05-17 08:03:13 UTC
Description of problem:

Pods fail to start and end up with the error "CreateContainerError"


Version-Release number of selected component (if applicable):
# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-0.nightly-2019-04-22-005054   True        False         13d     Cluster version is 4.1.0-0.nightly-2019-04-22-005054

# kubectl version
Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+dafd0c6", GitCommit:"dafd0c6", GitTreeState:"clean", BuildDate:"2019-04-10T03:10:23Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+81fc896", GitCommit:"81fc896", GitTreeState:"clean", BuildDate:"2019-04-21T23:18:54Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}


How reproducible:

often


Steps to Reproduce:

I noticed this issue after repeatedly creating a Ceph cluster using Rook. I used https://rook.io/docs/rook/master/ceph-csi-drivers.html to create the ceph-csi drivers.
However, I do not think this is Ceph-related.

Actual results:

Pods fail to start

Expected results:

Pods start successfully

Additional info:

Failed pods:

csi-cephfsplugin-27znb                        0/2     CreateContainerError   167        17h
csi-cephfsplugin-q2wmh                        0/2     CreateContainerError   166        17h
csi-cephfsplugin-rwkk7                        0/2     CreateContainerError   169        17h
csi-cephfsplugin-vx9px                        0/2     CreateContainerError   167        17h
csi-rbdplugin-7v5td                           0/2     CreateContainerError   167        17h
csi-rbdplugin-hswff                           0/2     CreateContainerError   169        17h
csi-rbdplugin-kntkh                           0/2     CreateContainerError   167        17h
csi-rbdplugin-zv8qc                           0/2     CreateContainerError   167        17h


--- 

All pods fail with:

Normal   Created  165m (x143 over 17h)   kubelet, ip-10-0-151-35.us-west-2.compute.internal  Created container driver-registrar
  Normal   Pulled   64m (x3205 over 17h)   kubelet, ip-10-0-151-35.us-west-2.compute.internal  Container image "quay.io/cephcsi/cephfsplugin:v1.0.0" already present on machine
  Normal   Pulled   34m (x164 over 17h)    kubelet, ip-10-0-151-35.us-west-2.compute.internal  Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.0.2" already present on machine
  Warning  BackOff  19m (x2646 over 17h)   kubelet, ip-10-0-151-35.us-west-2.compute.internal  Back-off restarting failed container
  Warning  Failed   5m1s (x3397 over 17h)  kubelet, ip-10-0-151-35.us-west-2.compute.internal  (combined from similar events): Error: container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"rootfs_linux.go:58: mounting \\\"/sys\\\" to rootfs \\\"/var/lib/containers/storage/overlay/051e985771cc69f3f699895a1dada9ef6483e912b46a99e004af7bb4852183eb/merged\\\" at \\\"/var/lib/containers/storage/overlay/051e985771cc69f3f699895a1dada9ef6483e912b46a99e004af7bb4852183eb/merged/sys\\\" caused \\\"no space left on device\\\"\""
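Despite its wording, the "no space left on device" in the event above need not come from disk usage: mount(2) also returns ENOSPC when the mount table is exhausted. A quick sketch to check for the recursive stacking on the node (field 5 of /proc/self/mountinfo is the mount point):

```shell
# Count how many mount table entries sit exactly at /sys. A healthy node
# has one; with the recursive rshared mounts described in this bug the
# count keeps growing until mount(2) fails with ENOSPC.
sys_mounts=$(awk '$5 == "/sys"' /proc/self/mountinfo | wc -l)
echo "mount entries stacked at /sys: ${sys_mounts}"
```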


--- 
If we connect to the node where the pod is scheduled (e.g. ip-10-0-151-35.us-west-2.compute.internal), journalctl -f keeps repeating:

May 17 07:40:43 ip-10-0-151-35 su[54231]: pam_unix(su:session): session opened for user root by core(uid=0)
May 17 07:40:43 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:43.511505    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:44 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:44.511802    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:45 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:45.512084    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:46 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:46.512337    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:47 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:47.512573    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:48 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:48.512820    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:49 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:49.513078    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:50 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:50.513345    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:51 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:51.513604    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:52 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:52.513874    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:53 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:53.517652    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:54 ip-10-0-151-35 crio[900]: WARNING: The same type, major and minor should not be used for multiple devices.
May 17 07:40:54 ip-10-0-151-35 crio[900]: WARNING: The same type, major and minor should not be used for multiple devices.
May 17 07:40:54 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:54.518667    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:54 ip-10-0-151-35 systemd[1]: Started crio-conmon-81f4442d4d15ecad66c0328cc603a50a8e341a31eafb3f54a203f60d0d9fc0e7.scope.
May 17 07:40:54 ip-10-0-151-35 systemd[1]: libcontainer-54375-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
May 17 07:40:54 ip-10-0-151-35 systemd[1]: libcontainer-54375-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
May 17 07:40:54 ip-10-0-151-35 systemd[1]: Created slice libcontainer_54375_systemd_test_default.slice.
May 17 07:40:54 ip-10-0-151-35 systemd[1]: Removed slice libcontainer_54375_systemd_test_default.slice.
May 17 07:40:54 ip-10-0-151-35 systemd[1]: libcontainer_54375_systemd_test_default.slice: Consumed 0 CPU time
May 17 07:40:54 ip-10-0-151-35 hyperkube[1270]: W0517 07:40:54.686509    1270 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_54375_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_54375_systemd_test_default.slice: no such file or directory
May 17 07:40:54 ip-10-0-151-35 hyperkube[1270]: W0517 07:40:54.686670    1270 raw.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_54375_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_54375_systemd_test_default.slice: no such file or directory
May 17 07:40:54 ip-10-0-151-35 systemd[1]: Started libcontainer container 81f4442d4d15ecad66c0328cc603a50a8e341a31eafb3f54a203f60d0d9fc0e7.
May 17 07:40:55 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:55.518920    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:55 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:55.867959    1270 file.go:104] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:56 ip-10-0-151-35 systemd[1]: Stopped libcontainer container 81f4442d4d15ecad66c0328cc603a50a8e341a31eafb3f54a203f60d0d9fc0e7.
May 17 07:40:56 ip-10-0-151-35 systemd[1]: crio-81f4442d4d15ecad66c0328cc603a50a8e341a31eafb3f54a203f60d0d9fc0e7.scope: Consumed 1.043s CPU time
May 17 07:40:56 ip-10-0-151-35 crio[900]: time="2019-05-17 07:40:56.169777276Z" level=error msg="Container creation error: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"rootfs_linux.go:58: mounting \\\"/sys\\\" to rootfs \\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged\\\" at \\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged/sys\\\" caused \\\"no space left on device\\\"\""
May 17 07:40:56 ip-10-0-151-35 crio[900]: "
May 17 07:40:56 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:56.521870    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
May 17 07:40:56 ip-10-0-151-35 systemd[1]: crio-conmon-81f4442d4d15ecad66c0328cc603a50a8e341a31eafb3f54a203f60d0d9fc0e7.scope: Consumed 320ms CPU time
May 17 07:40:56 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:56.987712    1270 remote_runtime.go:191] CreateContainer in sandbox "88ec944e3f49366be9e058e5d6912d889dc8ee5d1a168716c9deb3fb1ddbaa1b" from runtime service failed: rpc error: code = Unknown desc = container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"rootfs_linux.go:58: mounting \\\"/sys\\\" to rootfs \\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged\\\" at \\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged/sys\\\" caused \\\"no space left on device\\\"\""
May 17 07:40:56 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:56.987843    1270 kuberuntime_manager.go:748] container start failed: CreateContainerError: container create failed: container_linux.go:336: starting container process caused "process_linux.go:399: container init caused \"rootfs_linux.go:58: mounting \\\"/sys\\\" to rootfs \\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged\\\" at \\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged/sys\\\" caused \\\"no space left on device\\\"\""
May 17 07:40:56 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:56.987910    1270 pod_workers.go:190] Error syncing pod b6a7e27d-77e7-11e9-93cd-0624567a0966 ("csi-cephfsplugin-vx9px_rook-ceph(b6a7e27d-77e7-11e9-93cd-0624567a0966)"), skipping: [failed to "StartContainer" for "driver-registrar" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=driver-registrar pod=csi-cephfsplugin-vx9px_rook-ceph(b6a7e27d-77e7-11e9-93cd-0624567a0966)"
May 17 07:40:56 ip-10-0-151-35 hyperkube[1270]: , failed to "StartContainer" for "csi-cephfsplugin" with CreateContainerError: "container create failed: container_linux.go:336: starting container process caused \"process_linux.go:399: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/sys\\\\\\\" to rootfs \\\\\\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged\\\\\\\" at \\\\\\\"/var/lib/containers/storage/overlay/88bb9f9cb827b50f11a19b1cb27dcbac12f3bc117283e8e06c248b8848671ec8/merged/sys\\\\\\\" caused \\\\\\\"no space left on device\\\\\\\"\\\"\"\n"
May 17 07:40:56 ip-10-0-151-35 hyperkube[1270]: ]
May 17 07:40:57 ip-10-0-151-35 hyperkube[1270]: E0517 07:40:57.522141    1270 file_linux.go:61] Unable to read config path "/etc/kubernetes/manifests": path does not exist, ignoring
^C
[root@ip-10-0-151-35 core]# 

However, there is enough space on the node:

--- 

# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  3.8M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/xvda3       99G   11G   89G  11% /sysroot
/dev/xvda2      976M  136M  774M  15% /boot
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/0105e3a7-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/tuned-token-qlmnt
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/f016da9a-6d91-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/dns-token-6vdpv
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/084fdf12-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/node-ca-token-mxqx9
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/067694f2-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/node-exporter-token-cr8zs
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/084db473-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/machine-config-daemon-token-ffshg
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/f017c65c-6d91-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/multus-token-g9mwf
tmpfs           3.9G  8.0K  3.9G   1% /var/lib/kubelet/pods/067694f2-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/node-exporter-tls
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/ec0105049f805a1913174f693a5a00cac11d55dfa757f71cb2b02d0dbb9192ab/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/d139828f6a422026ddba0078dcabfb3ac4e63a317c683471f62aeb65e077b17b/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/c61e115713c4bdd383b18dfe736a7962f642e8420d3a5a8cf94ee8fbf07e1b51/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/0e3afd041d503c4dafd32389e617986e4abaab609d547d65af1bb1073f8fc422/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/249252322ec76593bbf56ed9a51b279be6a63abfd3c6bec645a88d987e0ee7d6/merged
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/06bb34db-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/telemeter-client-token-f7gxn
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/5e04937bcb80fb896a87958b4940feff6b1c0f835d65d878c4b06f63132336c7/merged
tmpfs           3.9G  8.0K  3.9G   1% /var/lib/kubelet/pods/06bb34db-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/telemeter-client-tls
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/00ce35043e1aef96744ad2c147c5bfe5bf287a561490520315a266953ac01eb1/merged
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/f01c2ad8-6d91-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/sdn-token-jd9xb
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/f01c7812-6d91-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/sdn-token-jd9xb
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/53b813dff21e4992a4dd972436b4eef47a0060186c7c1801e0359d51bb4760e1/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/9c0a5d69f023f9eae277dbe1d9c2415eae78f2831fab000acd6f253b15e0734e/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/7114f1c71d0540059c3ae721294b5aae4e2adeefef05bad39be7f10999a0c99e/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/fd5bbbe9ce2b15bc544d8f8511baa9a7f8b17dee13a0c8f9d39e383cab6475eb/merged
tmpfs           3.9G  8.0K  3.9G   1% /var/lib/kubelet/pods/06bb34db-6d92-11e9-a72d-0a9395330932/volumes/kubernetes.io~secret/secret-telemeter-client
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/03f96f2a04773ada52879639d0048e28c8d2f26c370c1c049bc94b87113edc20/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/804241a55426c800abddf66a1487f7c1f3a4783079c8aa06bdaaab14c7123ed7/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/ecb2e25becec70ddab687573eee1139fa759ffe5e91120215f171c6baf29c6f5/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/97fb7a7ec75da603e8ef55eeeb26e2ddca0c31df697cbf273391afe79b60fb81/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/ae1d808a79da1491bb4c2ae5a2bcd1982fcd56f2bd2ab401970133fc22d918b0/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/4945cb422dbb642888f89ad25fe579ebf73166f387491af2ddcf816cf4d61dc8/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/2d664d4de18268608a68dc9ec3fdaeeb075c91b9872f4842cf0a6902428ae171/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/5db280b734a2d0bae6b7c8342f693e78d5a9636def4ecd83fdd8d0d586485841/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/90e062e4b11a0f1d6fcd50847a7c56898dbd77188380f6bc4bc5ec5d4f84a33f/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/46d23f8f739a126e329df95e448e09a2a4f1467e6b26894b42fd356670ae3c20/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/5c40dbbf10e251d82cc631b9698022521234a162a08af2976179f339d2148208/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/2970a50e9b12f4c982f1a58a67e812632c392287dc9d0f758c2f13aff68b2beb/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/50fb854e6b29d095862d6c9cd3d03cb465441d2fb529b3975648cc9fdc58a93d/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/75f89e147901926953d23d688fc50d2228d1e8b0bfa84eb18f92d3b4ecfbd9c8/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/e054fee9b84600190bc5deea7d8ccb1a13384fd98d24a2def095b019cafee2d9/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/638d4004899b8ba3b4c2be60882d1fe621197d8817f8fd6bc676b87046b04a86/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/a9c311b7b406dfde5116187cdec37c6ba36d251bbd070395af8ec9f10448e467/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/a5d38764a1147cdc3cf7d512a14b5cc9a1433ac865aac55d599ae2cf81b3ff85/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/b9e45eb85fad745851ab78080e1f949d5d245bad1da005ff2fbbae6deda44af0/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/415b4ba2d8e5e074911368b3e4ba9eedf767ded490de62dd569d8f4c8c601b84/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/647c172b843d28f852b2f7a5d9196f47a7de5bf0e41b1cf8282ac5d70d96d514/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/20ecc2121a8ded5b98638f04badd87e3166aba0af9db3ca6683dcf47af4b9f0a/merged
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/b67bbd68-77e7-11e9-93cd-0624567a0966/volumes/kubernetes.io~secret/rook-ceph-system-token-wl825
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/597800c018f1642625d0c63f80af11eb8c0a1f43d1c1819479bc4f730e6f4a3a/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/b0e5332d41442f8ae305cb649d69cc90e55353efb0d39f274697fc3ae4251ecb/merged
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/b6a62fc3-77e7-11e9-93cd-0624567a0966/volumes/kubernetes.io~secret/rook-csi-rbd-plugin-sa-token-r5dcc
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/b68a4b5e-77e7-11e9-93cd-0624567a0966/volumes/kubernetes.io~secret/rook-ceph-system-token-wl825
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/b6a7e27d-77e7-11e9-93cd-0624567a0966/volumes/kubernetes.io~secret/rook-csi-cephfs-plugin-sa-token-5wvvl
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/ff46553cfde7440ae8de74f3592bf6719bb015757325e67a82ffb784d6ed8c7b/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/771e4908041e0b171bd2f0b06446a80c55855923bf416e4f41b8801910e1e3c2/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/88ec944e3f49366be9e058e5d6912d889dc8ee5d1a168716c9deb3fb1ddbaa1b/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/5000c32e96ae34274d260d343d8049f4a11be4acfdeb53d1a7390954fb94f12a/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/74d160504c5900e34f55d9b421bbb28795a09c26708ae243dbda6ef4b5dd3245/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/f978ef1bf798678a8e453c579d2704cc199ace61a05c57f818f54b0b660ea32a/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/adebc9ebe3220014c1bb80f8cad97bc133718aa80ad3665f58b9b4226a752a15/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/b11a3a4e283c6dea2fc340b7922ff7ffef7d5881619d35dae4d8827c1cff7c3f/merged
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/fb471083-77e7-11e9-93cd-0624567a0966/volumes/kubernetes.io~secret/default-token-cxlkw
tmpfs           3.9G  4.0K  3.9G   1% /var/lib/kubelet/pods/fb471083-77e7-11e9-93cd-0624567a0966/volumes/kubernetes.io~secret/rook-ceph-mons-keyring
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/451939e8955ce54436361b50dd0d43b3738b8211b9ddd27017b4da575c32d342/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/c88d43e0c340e26dcb9d19ab1ccff22b3770fea6086a4e45779a568756ce0ad1/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/ae72357a5b381cefd22fb4c62199ca7da40da9f820eed629a05774e8b47b0d0e/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/ee6b974a020b7bc2f252b5f7b3136277eb9f21eedf1c6a7a531124712a6c9d28/merged
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/89369c15-77e8-11e9-93cd-0624567a0966/volumes/kubernetes.io~secret/rook-ceph-osd-token-d4ph8
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/0958848bc92e76700637649c315fe420e5184d29b2c8acc0415dac463115f492/userdata/shm
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/49863d7a2a3aa344cc74e1c3cc3eb2bce7a5948452f1bfe5909b3738e4b68091/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/3364eecdfe7bae5b8101d41aea932838c1db79be3eec8b535afc7e4e1868503e/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/3b9a6446e5588fabc77e5eed53e485cf274a02e10c9d07f1e1cb109f30783821/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/a26e23860fd20e4ecad2a51b83c92d215d4e926f23f16139fc161211f3edd438/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/aac2126e703754edf2cada20eb25e44bbb4b5413f1c285113c79f0ebfdb0fe80/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/875aa7bd4be5ead2a15c60487d4bced093dd6fe0a98d7336a87521c8fcac5db7/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/74f12b3f17dbbeaf1294696092cb0eb628ccf03755a6a3c9fdd6724db72788f0/merged
overlay          99G   11G   89G  11% /var/lib/containers/storage/overlay/553853a4578a8b178cefb53daf510253c63269832c9519554bd42690db2fc801/merged
tmpfs           798M     0  798M   0% /run/user/1000
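The df output shows plenty of free space, which is consistent with the error coming from the kernel mount table rather than from any filesystem. A diagnostic sketch comparing the number of mounts in the namespace against the fs.mount-max sysctl (a per-namespace limit, defaulting to 100000 on kernels that provide it; the fallback value below is an assumption for hosts without the sysctl):

```shell
# Each line of /proc/self/mountinfo is one mount in this namespace.
# Once the count reaches fs.mount-max, further mount(2) calls fail
# with ENOSPC -- reported as "no space left on device".
mounts=$(wc -l < /proc/self/mountinfo)
limit=$(cat /proc/sys/fs/mount-max 2>/dev/null || echo 100000)
echo "mounts=${mounts} limit=${limit}"
[ "${mounts}" -lt "${limit}" ] || echo "WARNING: mount table exhausted"
```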

Comment 1 Jan Safranek 2019-05-17 13:05:38 UTC
It looks more like a container runtime problem if it can't mount anything into the container's rootfs.

Comment 2 Urvashi Mohnani 2019-05-17 14:39:42 UTC
What version of cri-o is this on? Anything that is not 1.13.6 or 1.13.9 has some storage issues.

Comment 3 Elvir Kuric 2019-05-17 14:47:48 UTC
(In reply to Urvashi Mohnani from comment #2)
> What version of cri-o is this on? Anything that is not 1.13.6 or 1.13.9 has
> some storage issues.

From 4.1.0-0.nightly-2019-04-22-005054, the node reports:

cri-o-1.13.6-4.rhaos4.1.gita4b40b7.el8.x86_64
criu-3.10-7.el8.x86_64
cri-tools-1.13.0-1.rhaos4.1.gitc06001f.el8.x86_64

Comment 4 Mrunal Patel 2019-05-17 14:50:50 UTC
Can we access the cluster?

Comment 22 Martin Bukatovic 2019-07-31 08:17:52 UTC
*** Bug 1734493 has been marked as a duplicate of this bug. ***

Comment 23 weiwei jiang 2019-08-01 10:03:13 UTC
Hi ekuric, please help check with cri-o-1.13.10-0.2.dev.rhaos4.1.git3253f9f.
I tried to follow the doc https://rook.io/docs/rook/master/ceph-csi-drivers.html,
but ran into a lot of issues.

Comment 26 Elvir Kuric 2019-08-02 10:45:53 UTC
(In reply to weiwei jiang from comment #23)
> Hi ekuric, please help check with cri-o-1.13.10-0.2.dev.rhaos4.1.git3253f9f.
> I tried to follow the doc
> https://rook.io/docs/rook/master/ceph-csi-drivers.html,
> but ran into a lot of issues.

I am not sure I follow what the issue is here. Are you having trouble setting up Ceph with CSI following https://rook.io/docs/rook/master/ceph-csi-drivers.html?

Comment 31 weiwei jiang 2019-09-09 02:49:07 UTC
Verified on 4.2.0-0.nightly-2019-09-05-234433

> oc get all -n rook-ceph
NAME                                               READY   STATUS    RESTARTS   AGE
pod/csi-cephfsplugin-dxzd7                         3/3     Running   0          7m6s
pod/csi-cephfsplugin-gthb7                         3/3     Running   0          7m6s
pod/csi-cephfsplugin-nxwb5                         3/3     Running   0          7m6s
pod/csi-cephfsplugin-provisioner-b66d48bc8-jvffx   4/4     Running   0          7m6s
pod/csi-cephfsplugin-provisioner-b66d48bc8-k6k6s   4/4     Running   0          7m6s
pod/csi-cephfsplugin-ssrr9                         3/3     Running   0          7m6s
pod/csi-rbdplugin-8gdjx                            3/3     Running   0          7m6s
pod/csi-rbdplugin-d7pn4                            3/3     Running   0          7m6s
pod/csi-rbdplugin-f4cnf                            3/3     Running   0          7m6s
pod/csi-rbdplugin-kcn5k                            3/3     Running   0          7m6s
pod/csi-rbdplugin-provisioner-95dd85d6-5cmsc       5/5     Running   0          7m6s
pod/csi-rbdplugin-provisioner-95dd85d6-bsvcq       5/5     Running   0          7m6s
pod/rook-ceph-agent-gkf44                          1/1     Running   0          7m6s
pod/rook-ceph-agent-kdrrk                          1/1     Running   0          7m6s
pod/rook-ceph-agent-rf65d                          1/1     Running   0          7m6s
pod/rook-ceph-agent-wxcwn                          1/1     Running   0          7m6s
pod/rook-ceph-mon-a-7b596b8467-x9tn7               1/1     Running   0          6m16s
pod/rook-ceph-mon-b-74975dbbd9-fqr5r               1/1     Running   0          6m4s
pod/rook-ceph-mon-c-5fbb68788c-wv2zq               1/1     Running   0          5m44s
pod/rook-ceph-operator-8494966654-jqbb7            1/1     Running   0          8m25s
pod/rook-discover-d45q4                            1/1     Running   0          8m2s
pod/rook-discover-k9d89                            1/1     Running   0          8m2s
pod/rook-discover-sgtxs                            1/1     Running   0          8m2s
pod/rook-discover-xlghm                            1/1     Running   0          8m2s

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/csi-cephfsplugin-metrics   ClusterIP   172.30.234.134   <none>        8080/TCP,8081/TCP   7m2s
service/csi-rbdplugin-metrics      ClusterIP   172.30.165.89    <none>        8080/TCP,8081/TCP   7m2s
service/rook-ceph-mon-a            ClusterIP   172.30.163.117   <none>        6789/TCP,3300/TCP   6m12s
service/rook-ceph-mon-b            ClusterIP   172.30.14.84     <none>        6789/TCP,3300/TCP   6m2s
service/rook-ceph-mon-c            ClusterIP   172.30.221.181   <none>        6789/TCP,3300/TCP   5m45s

NAME                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/csi-cephfsplugin   4         4         4       4            4           <none>          7m3s
daemonset.apps/csi-rbdplugin      4         4         4       4            4           <none>          7m3s
daemonset.apps/rook-ceph-agent    4         4         4       4            4           <none>          7m3s
daemonset.apps/rook-discover      4         4         4       4            4           <none>          7m59s

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/csi-cephfsplugin-provisioner   2/2     2            2           7m3s
deployment.apps/csi-rbdplugin-provisioner      2/2     2            2           7m3s
deployment.apps/rook-ceph-mon-a                1/1     1            1           6m13s
deployment.apps/rook-ceph-mon-b                1/1     1            1           6m1s
deployment.apps/rook-ceph-mon-c                1/1     1            1           5m41s
deployment.apps/rook-ceph-operator             1/1     1            1           13m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/csi-cephfsplugin-provisioner-b66d48bc8   2         2         2       7m10s
replicaset.apps/csi-rbdplugin-provisioner-95dd85d6       2         2         2       7m10s
replicaset.apps/rook-ceph-mon-a-7b596b8467               1         1         1       6m20s
replicaset.apps/rook-ceph-mon-b-74975dbbd9               1         1         1       6m8s
replicaset.apps/rook-ceph-mon-c-5fbb68788c               1         1         1       5m48s
replicaset.apps/rook-ceph-operator-8494966654            1         1         1       13m

Comment 33 errata-xmlrpc 2019-10-16 06:28:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

Comment 34 Red Hat Bugzilla 2023-09-14 05:28:46 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

