Bug 1997062 - cri-o: "no space left on device" issue is seen on latest 4.9 builds
Summary: cri-o: "no space left on device" issue is seen on latest 4.9 builds
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Release
Version: 4.9
Hardware: ppc64le
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.9.0
Assignee: Peter Hunt
QA Contact: Manoj Kumar
URL:
Whiteboard: UpdateRecommendationsBlocked
Depends On:
Blocks: 1999645 2000155 2000164
 
Reported: 2021-08-24 11:03 UTC by Alisha
Modified: 2023-09-09 00:09 UTC (History)
CC List: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1999645
Environment:
Last Closed: 2021-10-18 17:48:10 UTC
Target Upstream Version:
Embargoed:


Attachments
Content of .journal file (8.24 MB, text/plain)
2021-08-30 07:39 UTC, Alisha


Links
System / ID                                  Last Updated
Github cri-o cri-o pull 5245                 2021-08-30 17:28:15 UTC
Red Hat Issue Tracker INSIGHTOCP-454         2021-09-01 15:14:31 UTC
Red Hat Knowledge Base (Solution) 6304881    2021-09-02 21:08:40 UTC
Red Hat Product Errata RHSA-2021:3759        2021-10-18 17:48:37 UTC

Description Alisha 2021-08-24 11:03:10 UTC
The issue is seen with the following builds:
4.9.0-0.nightly-ppc64le-2021-08-17-145337
4.9.0-0.nightly-ppc64le-2021-08-19-120135

On the bastion node:
# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  8
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Model:               2.3 (pvr 004e 0203)
Model name:          POWER9 (architected), altivec supported
Hypervisor vendor:   pHyp
Virtualization type: para
L1d cache:           32K
L1i cache:           32K
NUMA node0 CPU(s):   0-7
Physical sockets:    2
Physical chips:      1
Physical cores/chip: 10

[core@master-0 ~]$ lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  8
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Model:               2.3 (pvr 004e 0203)
Model name:          POWER9 (architected), altivec supported
Hypervisor vendor:   pHyp
Virtualization type: para
L1d cache:           32K
L1i cache:           32K
NUMA node0 CPU(s):   0-7

No workload was deployed on the cluster.

# oc get co
NAME                                       VERSION                                     AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.9.0-0.nightly-ppc64le-2021-08-17-145337   False       False         True       4d5h    OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.pravin-49proxy.redhat.com/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)...
baremetal                                  4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
cloud-controller-manager                   4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
cloud-credential                           4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
cluster-autoscaler                         4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
config-operator                            4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
console                                    4.9.0-0.nightly-ppc64le-2021-08-17-145337   False       False         False      4d6h    RouteHealthAvailable: console route is not admitted
csi-snapshot-controller                    4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        True          False      3d21h   Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods
dns                                        4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
etcd                                       4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
image-registry                             4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d15h
ingress                                    4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
insights                                   4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
kube-apiserver                             4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         True       6d16h   NodeControllerDegraded: The master nodes not ready: node "master-2" not ready since 2021-08-23 09:51:21 +0000 UTC because KubeletNotReady (container runtime is down)...
kube-controller-manager                    4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         True       6d16h   StaticPodsDegraded: pod/kube-controller-manager-master-0 container "kube-controller-manager" is waiting: CreateContainerError: open /var/run/containers/storage/overlay-layers/.tmp-mountpoints.json632783936: no space left on device...
kube-scheduler                             4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         True       6d16h   StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container "kube-scheduler" is waiting: CreateContainerError: open /var/run/containers/storage/overlay-layers/.tmp-mountpoints.json574670570: no space left on device...
kube-storage-version-migrator              4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      4d10h
machine-api                                4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
machine-approver                           4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
machine-config                             4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      4d5h
marketplace                                4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
monitoring                                 4.9.0-0.nightly-ppc64le-2021-08-17-145337   False       True          True       4d22h   Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
network                                    4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        True          True       6d16h   DaemonSet "openshift-sdn/sdn" rollout is not making progress - last change 2021-08-20T11:46:25Z
node-tuning                                4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
openshift-apiserver                        4.9.0-0.nightly-ppc64le-2021-08-17-145337   False       False         False      18h     APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.84:8443/apis/apps.openshift.io/v1: bad status from https://10.129.0.84:8443/apis/apps.openshift.io/v1: 401...
openshift-controller-manager               4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
openshift-samples                          4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
operator-lifecycle-manager                 4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
operator-lifecycle-manager-catalog         4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
operator-lifecycle-manager-packageserver   4.9.0-0.nightly-ppc64le-2021-08-17-145337   False       True          False      3d21h   ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
service-ca                                 4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h
storage                                    4.9.0-0.nightly-ppc64le-2021-08-17-145337   True        False         False      6d16h

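To pull just the degraded operators out of the listing above, a quick filter on the DEGRADED column can be used (awk one-liner assumed here, not part of the original report):

# oc get co --no-headers | awk '$5 == "True"'
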
# oc get nodes
NAME       STATUS   ROLES    AGE     VERSION
master-0   Ready    master   6d17h   v1.22.0-rc.0+3dfed96
master-1   Ready    master   6d17h   v1.22.0-rc.0+3dfed96
master-2   Ready    master   6d17h   v1.22.0-rc.0+3dfed96
worker-0   Ready    worker   6d17h   v1.22.0-rc.0+3dfed96
worker-1   Ready    worker   6d17h   v1.22.0-rc.0+3dfed96

# oc adm top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
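
The metrics query fails because the monitoring stack is degraded; a likely follow-up (not captured in this report) is to check the APIService that prometheus-adapter backs:

# oc get apiservice v1beta1.metrics.k8s.io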

# oc get pods -A -o wide | grep -v "Running" | grep -v "Completed"
NAMESPACE                                          NAME                                                        READY   STATUS                 RESTARTS         AGE     IP             NODE       NOMINATED NODE   READINESS GATES
nfs-provisioner                                    nfs-client-provisioner-84766f5b6d-6pscv                     0/1     CreateContainerError   0 (4d19h ago)    6d18h   10.128.2.9     worker-1   <none>           <none>
openshift-apiserver                                apiserver-9fc69858-9tkhg                                    1/2     CreateContainerError   11 (43h ago)     6d17h   10.128.0.47    master-0   <none>           <none>
openshift-apiserver                                apiserver-9fc69858-p9x8t                                    1/2     CreateContainerError   6 (25h ago)      6d18h   10.130.0.32    master-2   <none>           <none>
openshift-authentication-operator                  authentication-operator-656f485cc6-8rtk8                    0/1     CreateContainerError   6 (3d23h ago)    4d8h    10.130.0.62    master-2   <none>           <none>
openshift-authentication                           oauth-openshift-54bcbd5487-nhkcv                            0/1     CreateContainerError   1 (44h ago)      4d8h    10.130.0.70    master-2   <none>           <none>
openshift-cluster-storage-operator                 csi-snapshot-controller-74cbcb446f-cf572                    0/1     CreateContainerError   6 (3d23h ago)    4d8h    10.130.0.60    master-2   <none>           <none>
openshift-cluster-storage-operator                 csi-snapshot-controller-74cbcb446f-hz6wr                    0/1     CreateContainerError   80 (3d23h ago)   6d18h   10.128.0.4     master-0   <none>           <none>
openshift-cluster-version                          cluster-version-operator-cdd5f5c56-t6d2j                    0/1     CreateContainerError   7 (43h ago)      6d18h   9.47.87.104    master-0   <none>           <none>
openshift-config-operator                          openshift-config-operator-58684b8cc5-7k658                  0/1     CreateContainerError   6 (43h ago)      4d8h    10.128.0.69    master-0   <none>           <none>
openshift-console-operator                         console-operator-7b4ddbcf8d-2h4vw                           0/1     CreateContainerError   53 (4d8h ago)    6d18h   10.130.0.22    master-2   <none>           <none>
openshift-console                                  console-649f685c96-g7h96                                    0/1     CreateContainerError   4 (44h ago)      6d18h   10.130.0.24    master-2   <none>           <none>
openshift-console                                  console-649f685c96-jzgx7                                    0/1     CreateContainerError   3 (43h ago)      6d18h   10.128.0.39    master-0   <none>           <none>
openshift-image-registry                           image-pruner-27156960--1-pwn2z                              0/1     ContainerCreating      0                4d10h   <none>         worker-1   <none>           <none>
openshift-kube-apiserver-operator                  kube-apiserver-operator-54cbc7b859-7jg5w                    0/1     CreateContainerError   2 (25h ago)      4d8h    10.130.0.68    master-2   <none>           <none>
openshift-kube-apiserver                           kube-apiserver-master-0                                     4/5     CreateContainerError   13 (43h ago)     5d18h   9.47.87.104    master-0   <none>           <none>
openshift-kube-apiserver                           kube-apiserver-master-2                                     4/5     CreateContainerError   6 (4d7h ago)     5d18h   9.47.87.106    master-2   <none>           <none>
openshift-kube-controller-manager                  kube-controller-manager-master-0                            3/4     CreateContainerError   88 (3d23h ago)   6d18h   9.47.87.104    master-0   <none>           <none>
openshift-kube-controller-manager                  kube-controller-manager-master-1                            2/4     CreateContainerError   108 (4d ago)     6d18h   9.47.87.110    master-1   <none>           <none>
openshift-kube-controller-manager                  kube-controller-manager-master-2                            3/4     CreateContainerError   74 (4d7h ago)    6d18h   9.47.87.106    master-2   <none>           <none>
openshift-kube-scheduler                           openshift-kube-scheduler-master-0                           2/3     CreateContainerError   67 (2d10h ago)   6d18h   9.47.87.104    master-0   <none>           <none>
openshift-kube-scheduler                           openshift-kube-scheduler-master-1                           2/3     CreateContainerError   59 (4d3h ago)    6d18h   9.47.87.110    master-1   <none>           <none>
openshift-marketplace                              certified-operators-k7r4p                                   0/1     ImageInspectError      0 (5d1h ago)     5d2h    10.128.3.108   worker-1   <none>           <none>
openshift-marketplace                              certified-operators-l7gv2                                   0/1     ImageInspectError      0                5d1h    10.128.3.126   worker-1   <none>           <none>
openshift-marketplace                              community-operators-656xf                                   0/1     ImageInspectError      0 (5d1h ago)     5d1h    10.128.3.118   worker-1   <none>           <none>
openshift-marketplace                              community-operators-znvc2                                   0/1     ImageInspectError      0                5d1h    10.128.3.127   worker-1   <none>           <none>
openshift-marketplace                              marketplace-operator-54c6798b65-77vwn                       0/1     CreateContainerError   6 (3d23h ago)    4d8h    10.128.0.62    master-0   <none>           <none>
openshift-marketplace                              redhat-marketplace-h4rdg                                    0/1     ContainerCreating      0                5d1h    <none>         worker-1   <none>           <none>
openshift-marketplace                              redhat-marketplace-zk6tn                                    0/1     ImageInspectError      0 (5d1h ago)     6d14h   10.128.2.87    worker-1   <none>           <none>
openshift-marketplace                              redhat-operators-9k5dj                                      0/1     ImageInspectError      0 (5d1h ago)     5d1h    10.128.3.116   worker-1   <none>           <none>
openshift-marketplace                              redhat-operators-9ldp7                                      0/1     ImageInspectError      0                5d1h    10.128.3.124   worker-1   <none>           <none>
openshift-monitoring                               alertmanager-main-1                                         0/5     ContainerCreating      0                4d7h    <none>         worker-1   <none>           <none>
openshift-monitoring                               grafana-8657cf6659-dx97s                                    0/2     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-monitoring                               kube-state-metrics-7b6988cf78-7x8sw                         0/3     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-monitoring                               openshift-state-metrics-5785f48976-nlzv5                    0/3     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-monitoring                               prometheus-adapter-5f6ff5849b-r79hm                         0/1     ContainerCreating      0                4d6h    <none>         worker-0   <none>           <none>
openshift-monitoring                               prometheus-adapter-6bcc5ddf97-krzsn                         0/1     Pending                0                3d23h   <none>         <none>     <none>           <none>
openshift-monitoring                               prometheus-adapter-6bcc5ddf97-qq7df                         0/1     ContainerCreating      0                3d23h   <none>         worker-1   <none>           <none>
openshift-monitoring                               prometheus-k8s-0                                            6/7     CreateContainerError   0 (3d14h ago)    4d7h    10.131.0.249   worker-0   <none>           <none>
openshift-monitoring                               prometheus-k8s-1                                            0/7     Init:0/1               0                4d7h    <none>         worker-1   <none>           <none>
openshift-monitoring                               telemeter-client-7b954d746b-fspb5                           0/3     ContainerCreating      0                4d9h    <none>         worker-1   <none>           <none>
openshift-network-diagnostics                      network-check-source-dfbc58d6f-mb2b5                        0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-oauth-apiserver                          apiserver-6d5b547944-r2njv                                  0/1     CreateContainerError   3 (45h ago)      4d8h    10.129.0.85    master-1   <none>           <none>
openshift-oauth-apiserver                          apiserver-6d5b547944-zqnh6                                  0/1     CreateContainerError   7 (4d7h ago)     6d18h   10.130.0.9     master-2   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156585--1-7gnj8                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156600--1-bfwnf                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156615--1-hvhzf                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156630--1-bm8dh                          0/1     ContainerCreating      0                4d16h   <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156645--1-nqcsj                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156660--1-gckl4                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156675--1-d2hvq                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156690--1-n6pbn                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156705--1-flx77                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156720--1-ml56f                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156735--1-6brkc                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156750--1-9q4k2                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156765--1-kwch8                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156780--1-8vkj5                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156795--1-qg8fw                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156810--1-hz9mm                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156825--1-w2wvk                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156840--1-mv2fd                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156855--1-fpgrn                          0/1     ContainerCreating      0                4d12h   <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156870--1-6c7ck                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156885--1-z58sv                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156900--1-q885s                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156915--1-78mhb                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156930--1-898zx                          0/1     ContainerCreating      0                4d11h   <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156945--1-5grj9                          0/1     ContainerCreating      0                4d11h   <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156960--1-q77nv                          0/1     ContainerCreating      0                4d10h   <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156975--1-tqr9j                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27156990--1-87mdj                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157005--1-rcr7m                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157020--1-x6jdv                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157035--1-5mhjc                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157050--1-8nbjz                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157065--1-xfvms                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157080--1-g5f9q                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157095--1-tcch5                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157110--1-jh8x5                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157125--1-lfcl4                          0/1     ContainerCreating      0                4d8h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157140--1-pfb9r                          0/1     ContainerCreating      0                4d7h    <none>         worker-1   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157185--1-c99ph                          0/1     ContainerCreating      0                4d7h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157200--1-vc9kr                          0/1     ContainerCreating      0                4d6h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157215--1-7qxjj                          0/1     ContainerCreating      0                4d6h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157230--1-bmr9s                          0/1     ContainerCreating      0                4d6h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157245--1-8sjsd                          0/1     ContainerCreating      0                4d6h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157260--1-vrchv                          0/1     ContainerCreating      0                4d5h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157275--1-tdq9c                          0/1     ContainerCreating      0                4d5h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157290--1-v782d                          0/1     ContainerCreating      0                4d5h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157305--1-5d5zb                          0/1     ContainerCreating      0                4d5h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157320--1-jwh24                          0/1     ContainerCreating      0                4d4h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157335--1-gltgf                          0/1     ContainerCreating      0                4d4h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157350--1-jbmhc                          0/1     ContainerCreating      0                4d4h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157365--1-jbmn8                          0/1     ContainerCreating      0                4d4h    <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               collect-profiles-27157665--1-4spth                          0/1     ContainerCreating      0                3d23h   <none>         worker-0   <none>           <none>
openshift-operator-lifecycle-manager               olm-operator-75dcc9cb6-8nt5n                                0/1     CreateContainerError   4 (43h ago)      4d8h    10.128.0.65    master-0   <none>           <none>
openshift-operator-lifecycle-manager               package-server-manager-7b7646568c-gkx7k                     0/1     CreateContainerError   6 (4d7h ago)     4d8h    10.130.0.55    master-2   <none>           <none>
openshift-operator-lifecycle-manager               packageserver-76694fb586-9pph6                              0/1     CreateContainerError   7 (43h ago)      4d8h    10.128.0.68    master-0   <none>           <none>

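The CreateContainerError messages above all point at /var/run/containers/storage, which on RHCOS resolves to the /run tmpfs rather than the root filesystem. A node-level check of that filesystem (hypothetical command, node name assumed; not something run in this report) would look like:

# oc debug node/master-0 -- chroot /host df -h /run
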
Comment 1 Alisha 2021-08-24 13:10:32 UTC
[root@master-0 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G     0  7.9G   0% /dev
tmpfs           8.0G  256K  8.0G   1% /dev/shm
tmpfs           8.0G  7.9G  151M  99% /run
tmpfs           8.0G     0  8.0G   0% /sys/fs/cgroup
/dev/sda4       120G   17G  104G  14% /sysroot
tmpfs           8.0G   64K  8.0G   1% /tmp
/dev/sdb3       364M  233M  109M  69% /boot
overlay         8.0G  7.9G  151M  99% /etc/NetworkManager/systemConnectionsMerged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/bdfb85ba-24e7-43ce-b9c3-5c60c3c9669a/volumes/kubernetes.io~projected/kube-api-access-f6vv7
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/11ccb899-73cb-489c-b8f3-aa33a2625c8d/volumes/kubernetes.io~projected/kube-api-access-lzq5s
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/d146ea13-e8ce-49bc-85ad-57a5ed0e652d/volumes/kubernetes.io~projected/kube-api-access-xgvcn
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/d377af6e04a58c9a80c8c0717342a303c08670c5e0cc0f41328ad6b2ddd298ad/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/363a5a988d9fc0c1529bc651dc81b34ed0c388a936a92039467280db1d1be88f/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/44485d567266773853d769b527885816b8b3a1a977286f4c2856256859e26fa0/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/4ebed2b3fdc214373aaedbfde7aadf3e78e115090fbc5233dc4de5a9bccd514b/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/159aa25b-c750-4b2b-ad2e-10693ca53127/volumes/kubernetes.io~projected/kube-api-access-2fb9x
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/7bfa396e-3ea9-4c20-8b43-a3bb51f8f641/volumes/kubernetes.io~projected/kube-api-access-25d8k
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/955e2b9b0ce36cd4738e347131beb0bf7be9423947e5712e3c402885be53d909/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b766b0cedf2fef1f27d71b92a3adf0e986a94d10051c7b82fe49322430afc195/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/5d5ba79f040f6afc2ba582db6dbdfbcc7edd2992fd4227932c056073d1daa890/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/4c76b73d982a7f0521ab0c01817e163332646d11b3d095abc5109d15a9925a4a/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/71b947dd-2538-4a51-85e1-574d95737d86/volumes/kubernetes.io~projected/kube-api-access-fnxrk
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/23ffe731d98acf25c11542241bbd32a3414007a1720eebd5c459bdf02a6eceda/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/bffdefd657991da2d0e371c503b15b8108dc4678a0d43bbd59456432950fb370/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/9eaacf53-65e7-40e7-b2dd-a63a02628189/volumes/kubernetes.io~projected/kube-api-access-xvqzg
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/c8ce35cb-071d-492c-8d17-99a4dae7fe31/volumes/kubernetes.io~secret/sdn-metrics-certs
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/c8ce35cb-071d-492c-8d17-99a4dae7fe31/volumes/kubernetes.io~projected/kube-api-access-6zrff
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/3461bcfae511eb2b540880ec881646a1a06cf54f68861646e5b24374244674d8/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/a4261b0978f774be727ce36dd694c17e4cca581ac83d853673fd9a969ddcbc50/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/246eb46b07a6bf7a79e39f4d0627c1c716357079a1a3e4c061f64470b17b10b3/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/d74089f8d7ed55080b577a5db1f43acc63cb812ad4824ede346beb3292475e98/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/35585c2b-1aaa-47c4-b1ce-c646acedeacb/volumes/kubernetes.io~projected/kube-api-access-4qqr8
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/1455f1c77d1567a5575d00afb8b87c2296afcb5c4ad56be0dc8e58bfbf82d05c/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/0fe411f8ffda89a751bfd9f67ac3889c190d5fa2438a8b99a6c60ea33246ba66/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8437c2dba678b402c382d7df5b81d182b1a3878b883d266f937dd22917c43ed9/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/7b12e1c6436da8d8763227fe649c585b1912b89002b762a50770463ecdf12342/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/28754d2d25cd9e871393f628afaba58936902e0cade9feb0a319ae46d747523d/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/14c1aa138f1015a7e13c54ccd007eba1cc6f3c17e6d77906e864f2cbf7adfe4d/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/11ad5783105b25cc88b4bb35a9f70562f584503e5397dc44ad198d871d95882a/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/577520db1ce12b1d4124d3c46cb6621aecd772ef9143a1ed69da059b2538e69d/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/a702f92f7c42a1596c20808bb8d5562386c938cd166e2126fc9824bbeb626115/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8b06a88b188b2917a7906a6161c289e200fc50c6b591033cf618e4b202a87575/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/52460172-1449-44b1-939d-66c45d794d29/volumes/kubernetes.io~projected/kube-api-access-kxccz
tmpfs            15G   64K   15G   1% /var/lib/kubelet/pods/1462887a-d5e4-4351-a25f-819261fdc0fa/volumes/kubernetes.io~secret/cookie-secret
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/1462887a-d5e4-4351-a25f-819261fdc0fa/volumes/kubernetes.io~projected/kube-api-access-qncz9
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/597b3475-78e2-45eb-9ff5-58f1d636cdf2/volumes/kubernetes.io~projected/kube-api-access-m2vbm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/264027c1-88e3-45f4-ab59-00dcf1ea23a9/volumes/kubernetes.io~projected/kube-api-access-s4stb
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/87b10f0163ad881a169de41bb2368618cb065aa973fb0017e3e64df7babaf07b/userdata/shm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/cf31af7e-20e7-4bfc-b72f-03490778d033/volumes/kubernetes.io~projected/kube-api-access-9r7zp
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/7c9a09928efd9fd182251d6d99cba1857bbac10a6cc7a06eba372714560b7318/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/422074682e6ce253cc460e576e20d9eda6f8e8993a091411b6ef9fc1e4d41519/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/8b7a583a-1708-47b2-9b0b-a6f6e8654f3c/volumes/kubernetes.io~projected/kube-api-access-9749n
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/4ee7a4973c861b3818d29968d7458ec5a3441116ee1e773bef60cccaa8299ae0/userdata/shm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/b7b09c30-6c3a-49fd-bd88-32da6920906a/volumes/kubernetes.io~projected/kube-api-access-jps9k
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8a0a9788f590d99e608bed0765093d96a03d892cfac31a6ee31b786b8b4b072b/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/b7b09c30-6c3a-49fd-bd88-32da6920906a/volumes/kubernetes.io~secret/etcd-client
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/be6d99eebca40c444781a1193c4573457780d5daa71af6cebcbe032844f285c1/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/bb32a5a55d7c1dfc870a22b8bd727d6c255e183c505913911d87b258e29ae9db/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/1e2c6944613074b1c0337577e368b94fd23f9c745c69aac68fae550b81e7beec/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/597b3475-78e2-45eb-9ff5-58f1d636cdf2/volumes/kubernetes.io~secret/prometheus-operator-tls
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/2d176679f11fe94476ae277fa98bad58f8bfe88385c8eb944b7e12c4172d8140/userdata/shm
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/cf31af7e-20e7-4bfc-b72f-03490778d033/volumes/kubernetes.io~secret/certs
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/52460172-1449-44b1-939d-66c45d794d29/volumes/kubernetes.io~secret/webhook-certs
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/4364916568d74d487f7e3db22de0060fe1dcbee8103c6a5fd121772b286cac7a/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/b718177cdbf4f36b41309f4665f05b68e0b7488674b15627bfcb5420ba1ac209/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/e6d0c8d7bad2960d6a75ade4cf388fd9a9d87e49f7d9f9a97b20647728021bc3/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/9ed437034cf01ed2e7041420da5542172090f85e01473559ac6873a5d3495aa2/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/bb2e8160fe78fbceb70842fb57449d684dba2dbce7928b4b5b64b76642b42af5/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2191129c6eb07104d7a7f269379aa79d8d31195b6f8c56b3d6af78751a1d1d4b/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/5b087f59a8f978138c6b2001d4bf7ed22eb89458d1e589efd3faebd9a054384c/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8435500c0409cf38c65cb328faf91824dfcfe156763b995c22536bb0a9e72b5a/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/0c8199464e8f972622e18882ffba70384e021a4326588ecf447c1346cf803403/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/c45e91019b8c7fb327459b225c9fa547854673eacd196faa1760434303b15d67/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/604a99e0ee98836d187486cefc176f6bf380d480bbcb2ac2875d849ed16f1bbe/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b8c71b49999a9a5aa40e88d73d2d39cf2900b0f59e0828d1103b92767251d14c/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/1462887a-d5e4-4351-a25f-819261fdc0fa/volumes/kubernetes.io~secret/proxy-tls
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/71741a1d015090c170e594efc313d3d56e015e4453a619391e0422d4ba391989/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/6d82dfecadb4e2b251d17a48ba4ac8c2228bbe268b5d2e7792c3ab9eb9ea8645/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/4bd51cd1bf5f613031766f6292c0dde5832f647de3ded4a5c5a5987baeddad3f/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/debe01bb2b1d22fe17e05173a224771da58b14dd045e69e11bf16c38d839305f/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/f68f096d0f26ca96a06837a40b9c007da32834c2aa9992a5a323ed768b041af6/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/e6ca5d21-13fe-40f3-ae7d-2e39f55d39d8/volumes/kubernetes.io~projected/kube-api-access-r9ckd
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/e6ca5d21-13fe-40f3-ae7d-2e39f55d39d8/volumes/kubernetes.io~secret/node-exporter-tls
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/d146ea13-e8ce-49bc-85ad-57a5ed0e652d/volumes/kubernetes.io~secret/serving-cert
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/2c88568acd46744465e17ea914036b9f4e6eebb304f6113c6bfdc8dd47ea2ef1/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/9b51e4f3823fb2d1089bbde721293076babd7f75bd28ee16f4ae7b801278acd1/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/dc31f2fe7b1f0ca070d7af0a91e78534f0cced83d5a7527235bc979a2af8af80/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/deef2458a3f83a4e04cb7eb466300baffce5c146301ac02e1c9b4b38cd5c61bc/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/d3bb24a744d9a6a210c778990b7943a836571fe1a3300a9a70ff358a67d5c151/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/68c4044d8640bddc50ad549846328068cfd8a7411d0107cd70b1ed29255cb053/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/eeac4539b76bf78ad5292d365a97d1fedf8b1975c5f914bbaffa688673a016cd/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/5144c896-68ec-49ef-970b-0dfe93b07230/volumes/kubernetes.io~projected/kube-api-access-w8qkj
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2eb2e3f3c6d16c01e2e14f50fc583cd85e5959ce168af4c964608dfb7ee1a7d2/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/71b947dd-2538-4a51-85e1-574d95737d86/volumes/kubernetes.io~secret/metrics-certs
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/16ac7eff58385fa96458f451a7ad3c78554348e61eb1f77bd53004303a523073/userdata/shm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/035193f0-e425-43c5-8731-0a0dc3198ff2/volumes/kubernetes.io~projected/kube-api-access-vhj8q
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/712354da-4ba0-41bf-9204-1d73fb56b0c4/volumes/kubernetes.io~projected/kube-api-access-sx9sd
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/035193f0-e425-43c5-8731-0a0dc3198ff2/volumes/kubernetes.io~secret/metrics-tls
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/9a904603-969a-453a-b253-0260499e548c/volumes/kubernetes.io~projected/kube-api-access-vkzrl
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/f6094266644a3c56fcc03bac5084971a348bd8d9be295091c4994eb65219ccd5/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/1fdee263aa710d35172ba292d7469b043a2b4aa7ee5b472346c368a09b25539e/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/0d4ea08e8d3ba8514d012cab77a5a8c5456ff80b08ab22cc0bba0fa53ed82db4/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/fb1ae63b3c9bb9d53658cf345cabab0a7aa648d7d22d20b4cc2b7707b15f5be5/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/ba23bb184d211c6fa1c19d6b4cd7e02181856fd8484a0ce2bfa1ed21716e6434/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/d92df05677a5c021db8b78e379a5f4643668d0d29dd0e574829f67521ce8e686/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/3ddc54f4966c180ef02eb3d24bbf4b738bb6686f7bd2e78720ff04fc5dcbdccc/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/6615919ae31a4694aacd70e4e3764b0bcf20f3eb4f5d93947d58dc2596acdb1b/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2a34aeefd85fd4bedab705b93b2cfec630afb8b695bb2ba8c046af23ff530326/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/ee195cb8d6d4ce5a6bbaed93dc0f6c716f2f3742b3766eee9456187734c23c38/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/54d1307916a3ee47428261637f4b3e019f401a3e25b7dbe996b9266037ea1974/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/643e54d48ee05b1a990346b604d02c98157607727082f4049e43bc32d39f1258/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/af8e8df2f230ff92e06ac5bf53f0fd4fcf4edc079e729c0966aa203e1cf598f6/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/648bf02a-db73-402e-a0ee-75a96c9320c0/volumes/kubernetes.io~secret/certs
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/648bf02a-db73-402e-a0ee-75a96c9320c0/volumes/kubernetes.io~secret/node-bootstrap-token
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/648bf02a-db73-402e-a0ee-75a96c9320c0/volumes/kubernetes.io~projected/kube-api-access-qqvbj
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/89d79d70c51d54bbecb286738e307367ed4108ebac79dad3050b0bdd7699243b/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/9d2caacd02661d4f1f9523c986079fd0810f519c61af19a91397beef4af198d7/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/f2fbb5210c3ac090d5837ade1b6a38340041f72c01927f916f71e9fc9577d61a/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/512a966052bd0bb06aeda04e6f3d99d6866af9347b0e93b0048c41ecc194a557/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/e8fb605cb6e86d6bd0430fd86746d942e8460fbf474db898feffa480c7b3a9c3/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/3cf6965e-984f-4bd3-8339-bf04dee44825/volumes/kubernetes.io~secret/etcd-client
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/3cf6965e-984f-4bd3-8339-bf04dee44825/volumes/kubernetes.io~secret/serving-cert
tmpfs            15G     0   15G   0% /var/lib/kubelet/pods/3cf6965e-984f-4bd3-8339-bf04dee44825/volumes/kubernetes.io~secret/encryption-config
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/3cf6965e-984f-4bd3-8339-bf04dee44825/volumes/kubernetes.io~projected/kube-api-access-mbvfj
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/9c5f3db4be82151e89af5fbb938343596d27f48d2f0f5e4f68b5ee8f70525226/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2bf1b8dde0c80013bbb25899444c01ea9c42440ebacc0bce98195b573d40f466/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/13553a1709275678b442cc97c8ffcc04c5fedd6c4adf91f25e6fdc23c8467944/merged
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/7a55cabd-0e46-42af-91c6-5c3ba72fccef/volumes/kubernetes.io~projected/kube-api-access-wpv6c
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/735b02248120c57cb9972289b180a015160e5ef5e2ecde7efd2879618a436584/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/5347813ed4fd974f910bcee624e98f9da4723fa53de96874c52d63583029163b/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8fd98f96bab35a0ac36674f17d5d6281a1d0226753fb2e0264df78924d7cdfba/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/354634dfcbdd9babcd9b74106aa21c4118a9fe7586e7829d04e931e5ae5a0c2a/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/079eb8700cb53d61eaaca0d0360a622135717815c8927ed895c68d85e51c73ef/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/a42d82272bdbcad9fd0c9effe146e124a5af6799f7024b3f6d85f77ddfe6acbf/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/aed5e2f54a47a1d68241be2e80c1b0d88846c331adc0e34dc9277e25817bd347/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/5c8c934f3618638a8e8dcd5c1c7ff4b1727a4624beff05bc70f2f3a5db92423c/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/4c9b916a67c848721a460f196332ab149d7faab63628525c132a1b1b1beb5224/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/35f1a33abf33efee106f25d7e4800bde8bc5d2702e7dbdfa77c9a4f7f817a910/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/11654a848f69a6bd4510857c3580eb2c1909a784307631b76cf8861691b474e0/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8fca976b76b7af09cb9f838f29cde5dc0da6923e289ce788064e8dd83d5f1876/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/9e9afaa5b6da0dc036b6795c82626b0842f4ecd36096d03b6620142eabff5567/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/fe3e551263b3eef3a4e09f1e9ed5cdb3d668af1bd35e2a442af40b4e29ed9221/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/07cf66c2a60efa39286569421c95a38841e68b3c378d15ccbf362bdc5b85b05c/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/a4b209a6e3040a8abc5417e41c0276a288a1559fc714e04688bd238a5a4832c9/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/be0df92b202a7efef91e2116b990d6e263534e9f030718291e0eaeee0d6a56b6/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/c580c26244ef801b257ffcc6f544ea58d4669c78512192290a47e8cc96834815/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b169bed621bd457baaeb4c0cfcd27a6cf09c6871996218c64d8d3ba77c2f9473/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/0cc91e6fcadd1f4859b3c1ae02d59257b0230507bb4ba392b28ae0cfb281757a/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/071aa340cdda57c663cbd10bce3a977894694865fc44d28a41db060502837766/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/7aae6e1a76d7d2a548410c262776e9ceb4995172d27bce5021cec6dbaa73763d/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/9a75c7bb-4c75-455d-b005-b25c05ce323d/volumes/kubernetes.io~secret/console-serving-cert
tmpfs            15G   64K   15G   1% /var/lib/kubelet/pods/9a75c7bb-4c75-455d-b005-b25c05ce323d/volumes/kubernetes.io~secret/console-oauth-config
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/9a75c7bb-4c75-455d-b005-b25c05ce323d/volumes/kubernetes.io~projected/kube-api-access-ld8d5
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/759b0e66ffed1e82dcd685463b2f8374a7bd79c9e0a79ed562283db59a70fec3/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/73c1174d058df5dd681a08a4f654ac9e005f3d5692f80a2a53b7c56c9b674233/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/78c1177c6f774abab59268776d52b266986eacba73ef96724ce2be5c515f5395/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/093958aee6a27b8a55e84a80c4505dc70a7b27a14206f323247a9d0ffa81de57/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/5b6b93aa01981f1baca5febbeafbbb439db2f5ef8990e8c007adfbc028a12590/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/5f70df63e2f732988719a6d014f94b3a328fba37d4a34125cf8af29ff7540a9b/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/340b65a455abca20ebad9d2d00acbfb9a3369bf5dd11582eda33b9acf79b676f/merged
tmpfs           1.0G  256K  1.0G   1% /var/lib/kubelet/pods/0acb39bc-71b2-4118-a2b3-0ef163f62651/volumes/kubernetes.io~projected/kube-api-access-jqmq7
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/5b44493129ccc577e465d5f3315dbcb549eb285f5b38c6ba00122896d899a1e4/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b6b25d8f4d3420b385d3b1fd78a9ff64d5d8b1dfc961f4975eabc57a5ebdf7ae/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/ebaf9e0e02f6bbb3b7f9da9cfa2e081d23ae319bc9c448ac80bb18579959f489/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/74c36eb3-333a-4c8f-af2b-d0a7da1fe0d0/volumes/kubernetes.io~secret/serving-cert
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/74c36eb3-333a-4c8f-af2b-d0a7da1fe0d0/volumes/kubernetes.io~secret/etcd-client
tmpfs            15G     0   15G   0% /var/lib/kubelet/pods/74c36eb3-333a-4c8f-af2b-d0a7da1fe0d0/volumes/kubernetes.io~secret/encryption-config
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/74c36eb3-333a-4c8f-af2b-d0a7da1fe0d0/volumes/kubernetes.io~projected/kube-api-access-vsn8m
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/5ade87ba95ebde992ac9d67835a7fe5b2b688eb04868cbdf9953d20eb90c2eff/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b13ab0222ce7323c90dfe4188cd3bdd6749e4449b376771a9f42abe2a0ed2eb8/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/e82fd21305d1ded85aa9c378b0cc69cf72b64583d123d29f03ecac09bc643d2b/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/cf018f9c2f8ec06f3ed6754bcf53dd37930545c616e8e47e7ba930c7cbd14faf/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b34fd49efabb09b0a1d0887331a31185be4eefdfe56d6ba426d8b9de1fdb6a25/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/71d4a4536df8e88773e46bfc88d8a46c68d564578a8b06cdc4c090903c2902a9/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/4539cb135b738c105561a119031050849090106053f060f9dfebeade2e854092/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/48fc0b5efc0b969d8c6e73fa2b74735a4be116281a454151102dcc6dc98b68f3/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/d3f1162d7db36679117b76f1b1edebbb08eb69396371885f888f361da7dbe977/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b5005404482c244a78c74dd2e4d44f7f62ae76e9e1d8f72bb07f99815336d059/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/53bb3ffeacffca6c379ed657ae9180c49b2bc2c6a3a57476e8aec706a83d88dd/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/098b8ace46a3364373ee9dbe98697b493e0fc764bbf07d174a0466a238e36fa1/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b88172b39e73b66b58317026795b65b3e2012f156ac8c6da69f552304efdf951/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/0ab1a7f1ac4b092627fa9c9cba2a2f3752aadb3d7773d732fea03e6bd30f839a/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/d76f0ab85843a58d275325e879c062a3cb512db5271ca1262db8f2b9e00b0a95/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/31259c200825f50dcb567b849f49b69423824bc67fe09bfbb5844d1e94b96ca9/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/5ac5a808915a4694e4bbb1c473240fb2372807d40d921eeb1834ad10d9cf4151/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/f9abfe08e56b962467e6258d17e178a6447f7cec169f9375f4271aecd6e981a2/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/b5f720dc-b2ca-49b0-8e06-1431749697f1/volumes/kubernetes.io~secret/serving-cert
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/b5f720dc-b2ca-49b0-8e06-1431749697f1/volumes/kubernetes.io~projected/kube-api-access-kkffr
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/0caf934a1460b937014822e12280462212c969e08ced093abed20a19329355ec/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b699729d831f9ee9434e348c4687779212f9c257c31aa9824b7842a066121626/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/927ac8f2ad1a08775b8ce7ceafcc84d6a3451d43720828340a70e80e74032ed0/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/3f6c16cdb87754d38098dce1a079df83d613db3fc5fb8e77cc7e2e80e77c98ae/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/d1011eaf-7c5e-44e1-8d6e-f7f6aca778b1/volumes/kubernetes.io~secret/serving-cert
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/c8d1e89c-700e-4cd8-bf96-ef453367fe10/volumes/kubernetes.io~secret/profile-collector-cert
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/c8d1e89c-700e-4cd8-bf96-ef453367fe10/volumes/kubernetes.io~secret/srv-cert
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/d1011eaf-7c5e-44e1-8d6e-f7f6aca778b1/volumes/kubernetes.io~projected/kube-api-access-v52w7
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/6c4bf2b8-581a-4ddd-80f7-79f176570178/volumes/kubernetes.io~secret/webhook-cert
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/6c4bf2b8-581a-4ddd-80f7-79f176570178/volumes/kubernetes.io~secret/apiservice-cert
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/c8d1e89c-700e-4cd8-bf96-ef453367fe10/volumes/kubernetes.io~projected/kube-api-access-4zfg7
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/6c4bf2b8-581a-4ddd-80f7-79f176570178/volumes/kubernetes.io~projected/kube-api-access-556zl
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/b2b01aaf-ff5e-483d-b75b-260d87a69f42/volumes/kubernetes.io~secret/webhook-cert
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/b2b01aaf-ff5e-483d-b75b-260d87a69f42/volumes/kubernetes.io~secret/apiservice-cert
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/c332f145-cd5b-47a1-bb94-d2b729a59607/volumes/kubernetes.io~secret/signing-key
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/c22a60ed3ff34187bde8a71a85ed2f08c76c169f4a53c06566cc801df4dfd9ee/userdata/shm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/b2b01aaf-ff5e-483d-b75b-260d87a69f42/volumes/kubernetes.io~projected/kube-api-access-78f5d
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/47682277b69ee70fc86f3ddda796bb60ce2f3ca9b51eaa133352da9e2281bcd5/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/787daa1f45b018c9347574bac3e88fbcf63bb121f23d20a469f138693ec387d6/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/bfa692aef2c6c01738b8dd3f7e1194aa8a0156b756e2054568893edabb103d1b/userdata/shm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/c332f145-cd5b-47a1-bb94-d2b729a59607/volumes/kubernetes.io~projected/kube-api-access-76zgn
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/8bd7e035-90db-4f6d-8fd2-61227e103766/volumes/kubernetes.io~secret/marketplace-operator-metrics
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/b8708b00fcb37b08e8b0000abb8f3850139386330a786db8037ba27092944d14/userdata/shm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/8bd7e035-90db-4f6d-8fd2-61227e103766/volumes/kubernetes.io~projected/kube-api-access-shlff
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/6167bb9a-2410-484f-849b-c5f66990140f/volumes/kubernetes.io~secret/serving-cert
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/ec6fe2e5-0c9d-4c68-9246-164ef7f0895c/volumes/kubernetes.io~secret/serving-cert
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/6167bb9a-2410-484f-849b-c5f66990140f/volumes/kubernetes.io~projected/kube-api-access-2zj94
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/ec6fe2e5-0c9d-4c68-9246-164ef7f0895c/volumes/kubernetes.io~projected/kube-api-access-fqll8
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/f3d7111e93259d8ca4038ddd58397b7b01493d8710a732193bd47708cef7434d/userdata/shm
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/b91d4004-37d3-471a-aae0-78d7cb3e72af/volumes/kubernetes.io~secret/serving-cert
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/f809f145032c755036206515e0a79509a9b3cf3bcea9e8a4e32181738394e9c0/userdata/shm
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/b91d4004-37d3-471a-aae0-78d7cb3e72af/volumes/kubernetes.io~projected/kube-api-access-bqx7l
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/9d00a4cf82e07c87a87ffc750712b02af62d2e2dcd249ff5e0032c56fb5c92d9/userdata/shm
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/62b9a05195d549b414f7f4a03e7682535b6e1b7de76fa2b2e030f03bef752b0d/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/52506c8c3ff7566886a3c35eddabe227cb1a69469aac8620b95c9ea0cc950293/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2d6acaf2e69f67e58c17edce54cf4223457b3a45bf192e2a0c91dc2fd1cce72c/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/520ed3bca7bf404aa64bd1523c805104068dee7ab92a9c4e8715e482d6080cb3/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/830c3b9fc186ebd3c8a702e35b7945f5c609de701aa6711bb254ef0d596b4d28/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/3692cad9237f6b0b6fa06aac3aa260ec66e3ab21834a208e152d83ae9e8fc0e7/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/a124356d95e87080a26faac125959e371f97bde3a3cded1666c708ef6298142c/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/e0957102e55fd77cc34c93bc4bdf125551fff0eac60feb3a858b535e45cd2332/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/9ec791757c9323f09bd3d696b19cd1abd90eaeb74ced2fde3cae9e9a78ea1903/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/aa99aac8946b20cec123c5208607446f3f90d04cb1f53f3d7ef1dd887ec42017/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2c6eff585e5883d42e1d439e3ee7208e8cec80678cc9bf0e38e632b271b267db/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/a9c4466aee2b3eb6907502bbbcc3a06411864b8b103bcc0f6c9e05539a13ad10/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/f56e57d4e17b253d83a026595d2b8c6095165eab117f834269a94dcc47638cb8/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/993b2e1d1294bb735ab2deb5aca534c35a60a19188e09f8319448b8e7ee12589/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/bb57e0fcfc9b7665a267c02ec45ea3fa2a6b16c8faa7729a0ee96e1a2994669e/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8e6515b67a55944323dfa7907cbc7550d2fb50a5bcd6f3c034f7ce80d3de79f0/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/fbb08bc213a2dc72b79480e9a7a1745fe5f1efa04639c8c611b69dc447abc29e/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/f7ca051fab2b840043f08cb0c7e05f469e6919c16d227e744e08f0a4f0e91098/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/e424b81a00ff7b12b71be4c562dac0654113c745379bd2dbe80e0f012af3ecd2/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/79da8b4b992376d0dd849b44ffb5cb1b3e776f9d7b977f3a04c4b5dd76268639/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/8aabf43ec00ec0fe5fb120118025f2032f15831ff1714c01cf4201cd5d1a4a54/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/95ea3c8e78eadd700c696671d9af8d4cec51775edd0bcddea6f33d1922828b5c/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2b21c6bee3db63d3cf16b3b46df1d905f4f1de190e464bac42e460f699c190c1/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/eb881a78b814e685e522d9ebae8c361592afd7295f46b63683229ef8046abba7/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/5a06cc237eff175fb35c865a712ad37703a45efa06bc5dbf2864f65fe9ec8e89/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/f1bf513a30b72618f8601345e860065738b47fca5ddb437451fa162ff3910795/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/ea3e35b476f5c0089e101349709e2407372890e17eb2df9cda44f9a2e3fb0d12/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/d3e0b171c50cc3b36188ec1f2254b685273863a266a9896bdd6737b4aeb95deb/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/2067cdfdf2418875272767d34a2a89774da507e368de5ddf1d12680c15c75d61/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/34133c03e66c406f77e81de06c1bef4cb8bb233a3b848ef9d18aa6513c1f1ba6/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/e32b8e71dbd9979aca5e986910621ce7e0f7b80c0a145c0c692598f4f2e8c067/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/504caa0b1748deea5a9173b1942f896fa53cfd4230616aa589651daecaeaed9b/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/c5c76564e394347a617ac900ed6b79de35d0b4e5f0f338a94036ac6ef268a2d4/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/cc3540cdef68e4c27495fab763d62fc0880a976486ad8f586eb4d51fb9a198cf/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/6006da855bf8ed01698c949d9e27a943cb0843e1a076dc3ae300ff305c506dda/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/a5ea18ecb340a8bf875f4b02e78a05b094e078dde063c98bb4bf072b9ea8e4c8/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/aebfd1a7960854fd0337e7c1cd2eaa5b42051d4756257e7c91826e8b6d2010c6/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/eb9b0f76a9452a42a36ab9abc7681005fc46ee871ea14e6aefa9c8dfd7dd6878/merged
tmpfs            15G  128K   15G   1% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~secret/v4-0-config-system-serving-cert
tmpfs            15G     0   15G   0% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~secret/v4-0-config-user-template-error
tmpfs            15G   64K   15G   1% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~secret/v4-0-config-system-router-certs
tmpfs            15G   64K   15G   1% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~secret/v4-0-config-system-session
tmpfs            15G  704K   15G   1% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~secret/v4-0-config-system-ocp-branding-template
tmpfs            15G     0   15G   0% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~secret/v4-0-config-user-template-provider-selection
tmpfs            15G     0   15G   0% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~secret/v4-0-config-user-template-login
tmpfs            15G  256K   15G   1% /var/lib/kubelet/pods/0f48beb7-e568-40fb-804e-f63444ccefe2/volumes/kubernetes.io~projected/kube-api-access-sf5bj
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/82c4e02a04047982723a25e80845b1adcefb638fb0fca42f5bb35e53e0079150/merged
shm              64M     0   64M   0% /run/containers/storage/overlay-containers/79e9d913207c9b2dc76dda1c0289120327eb6715be44a0cf53a2beb836852cf3/userdata/shm
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/94527efd6c1071a5fd63fb7f1d462f5a28a92f290052016c55ecac4b5aaaf05e/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/57ddadb12a13f7ab3df052bfaefa53e6d858aa55f8e4c25041d0dd96f8cd376b/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/6b02da4046d69676d31c2118a5d61c5f292e02a5a1196d5cf5058bf925c65aa0/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/b68714fd97c34cb91e890a31aa67f76cad6a17f6af51dbb0982282e251476092/merged
overlay         120G   17G  104G  14% /var/lib/containers/storage/overlay/3c8ef5c5a69295092e1414743452d4e1318b63dcb78f1cab13db59765906da63/merged

Comment 2 Alisha 2021-08-24 13:25:17 UTC
Platform is ppc64le.

OS info : 

on bastion : 
# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)

CoreOS nodes :
[core@master-0 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.9

Comment 3 Manoj Kumar 2021-08-24 14:43:02 UTC
@alisha: What is the size of the disk that is being provisioned?

Can you provide the output of `lsblk` and `df` from the control plane and worker nodes?
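
For reference, a minimal sketch of one way to collect this when SSH to the nodes is not convenient, assuming working API access (the node name below is a placeholder):

# oc get nodes
# oc debug node/<node-name> -- chroot /host sh -c 'lsblk; df -h'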

Comment 4 Dan Li 2021-08-24 15:20:07 UTC
Taking this bug from the apiserver team, as the multi-arch team thinks the bug should initially be assigned to us. We will take a look and re-assign if necessary.

Comment 5 Alisha 2021-08-24 18:37:50 UTC
Master node : 
============

# ssh core@master-0 lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   120G  0 disk
├─sda1   8:1    0     4M  0 part
├─sda3   8:3    0   384M  0 part
└─sda4   8:4    0 119.6G  0 part /sysroot
sdb      8:16   0   120G  0 disk
├─sdb1   8:17   0     4M  0 part
├─sdb3   8:19   0   384M  0 part /boot
└─sdb4   8:20   0 119.6G  0 part

# ssh core@master-0 df
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs         8267712        0   8267712   0% /dev
tmpfs            8346304      256   8346048   1% /dev/shm
tmpfs            8346304   453632   7892672   6% /run
tmpfs            8346304        0   8346304   0% /sys/fs/cgroup
/dev/sda4      125420524 11817044 113603480  10% /sysroot
tmpfs            8346304       64   8346240   1% /tmp
/dev/sdb3         372607   236374    112477  68% /boot
overlay          8346304   453632   7892672   6% /etc/NetworkManager/systemConnectionsMerged
tmpfs            1669248        0   1669248   0% /run/user/1000


Worker node : 
===========
# ssh core@worker-0 lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   120G  0 disk
├─sda1   8:1    0     4M  0 part
├─sda3   8:3    0   384M  0 part /boot
└─sda4   8:4    0 119.6G  0 part /sysroot
sdb      8:16   0   120G  0 disk
├─sdb1   8:17   0     4M  0 part
├─sdb3   8:19   0   384M  0 part
└─sdb4   8:20   0 119.6G  0 part

# ssh core@worker-0 df
Filesystem     1K-blocks    Used Available Use% Mounted on
devtmpfs         8267712       0   8267712   0% /dev
tmpfs            8346304     128   8346176   1% /dev/shm
tmpfs            8346304  246848   8099456   3% /run
tmpfs            8346304       0   8346304   0% /sys/fs/cgroup
/dev/sda4      125420524 8880312 116540212   8% /sysroot
tmpfs            8346304      64   8346240   1% /tmp
/dev/sda3         372607  236374    112477  68% /boot
overlay          8346304  246848   8099456   3% /etc/NetworkManager/systemConnectionsMerged
tmpfs            1669248      64   1669184   1% /run/user/1000


bastion :
=======

# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  120G  0 disk
├─sda1        8:1    0    4M  0 part
├─sda2        8:2    0  120G  0 part
└─mpatha    253:0    0  120G  0 mpath
  ├─mpatha1 253:2    0    4M  0 part
  └─mpatha2 253:3    0  120G  0 part  /
sdb           8:16   0  300G  0 disk
└─mpathb    253:1    0  300G  0 mpath /export
sdc           8:32   0  120G  0 disk
├─sdc1        8:33   0    4M  0 part
├─sdc2        8:34   0  120G  0 part
└─mpatha    253:0    0  120G  0 mpath
  ├─mpatha1 253:2    0    4M  0 part
  └─mpatha2 253:3    0  120G  0 part  /
sdd           8:48   0  300G  0 disk
└─mpathb    253:1    0  300G  0 mpath /export

# df
Filesystem          1K-blocks     Used Available Use% Mounted on
devtmpfs              7795776        0   7795776   0% /dev
tmpfs                 7831936      192   7831744   1% /dev/shm
tmpfs                 7831936   789760   7042176  11% /run
tmpfs                 7831936        0   7831936   0% /sys/fs/cgroup
/dev/mapper/mpatha2 125813740 11806620 114007120  10% /
/dev/mapper/mpathb  308587328   894708 291947596   1% /export
tmpfs                 1566336        0   1566336   0% /run/user/0

Comment 6 Prashanth Sundararaman 2021-08-24 21:50:49 UTC
From the oc output, it looks like this cluster was created 6 days ago and then there were repeated failures which filled most of the /run tmpfs. This probably caused these cascading failures. Do we know when things started failing? Was this cluster actively monitored to see if there were any issues? Does the console have a history of alerts/events to report?
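
A quick sketch of commands that could help narrow down what is consuming /run on an affected node (run from a root shell on the node; the paths are the standard systemd/journald and container-storage locations):

# df -h /run
# journalctl --disk-usage
# du -xsh /run/log/journal /run/containers 2>/dev/null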

Comment 7 Prashanth Sundararaman 2021-08-24 22:35:27 UTC
Could a must-gather also be collected on this cluster?

Comment 8 mkumatag 2021-08-25 04:03:11 UTC
Can we log in to the machine where we are hitting this out-of-space issue and list the top 10 directories contributing to it, for example:


du -ha /run | sort -n -r | head -n 10
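
One caveat with that pipeline: `du -ha` prints human-readable sizes (K/M/G), which `sort -n` compares as bare numbers, so entries can come out slightly out of order. A variant that sorts human-readable sizes correctly (GNU sort's -h option):

# du -xah /run | sort -rh | head -n 10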

Comment 9 Alisha 2021-08-25 06:49:26 UTC
Output from a cluster with 4.9.0-0.nightly-ppc64le-2021-08-19-120135, created 4 days ago, which also has the same issue.

# du -ha /run | sort -n -r | head -n 10
450M    /run
448K    /run/cloud-init
443M    /run/log/journal
443M    /run/log
427M    /run/log/journal/8c700d6e302948dd8d2bda2d8db6bee4
320K    /run/NetworkManager
192K    /run/systemd/generator
192K    /run/NetworkManager/devices
128K    /run/systemd/transient
128K    /run/systemd/sessions
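
Most of the /run usage above is the volatile journal under /run/log/journal. As a debugging mitigation only (it would not address cri-o's writes under /run/containers), the runtime journal can be capped with a journald drop-in; a minimal sketch assuming direct root access to the node, whereas on OpenShift such a change would normally be rolled out via a MachineConfig rather than edited by hand:

# mkdir -p /etc/systemd/journald.conf.d
# cat > /etc/systemd/journald.conf.d/99-runtime-max.conf <<'EOF'
[Journal]
RuntimeMaxUse=200M
EOF
# systemctl restart systemd-journald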

Comment 10 Alisha 2021-08-25 12:32:30 UTC
must-gather exited with a timeout, as shown below:


# oc adm must-gather
[must-gather      ] OUT Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:945aa0a5d3d96df076bf139b19910683ac20fd63dd385821261c7317d25a3cb3
When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information.
ClusterID: e9554f1e-e262-44b2-acf1-7274eb50a491
ClusterVersion: Stable at "4.9.0-0.nightly-ppc64le-2021-08-19-120135"
ClusterOperators:
        clusteroperator/authentication is degraded because APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in apiserver-6774bf576b-8p6f9 pod)
        clusteroperator/console is not available (DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-ppc64le-2021-08-19-120135) because All is well
        clusteroperator/kube-apiserver is degraded because StaticPodsDegraded: pod/kube-apiserver-rdr-cicd-e6b7-mon01-master-0 container "kube-apiserver" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json607499636: no space left on device
        clusteroperator/kube-controller-manager is degraded because StaticPodsDegraded: pod/kube-controller-manager-rdr-cicd-e6b7-mon01-master-0 container "cluster-policy-controller" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json789204288: no space left on device
StaticPodsDegraded: pod/kube-controller-manager-rdr-cicd-e6b7-mon01-master-0 container "kube-controller-manager" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json031946758: no space left on device
StaticPodsDegraded: pod/kube-controller-manager-rdr-cicd-e6b7-mon01-master-1 container "cluster-policy-controller" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json127458881: no space left on device
        clusteroperator/kube-scheduler is degraded because StaticPodsDegraded: pod/openshift-kube-scheduler-rdr-cicd-e6b7-mon01-master-0 container "kube-scheduler" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json226094531: no space left on device
StaticPodsDegraded: pod/openshift-kube-scheduler-rdr-cicd-e6b7-mon01-master-1 container "kube-scheduler" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json081094887: no space left on device
        clusteroperator/monitoring is not available (Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.) because Failed to rollout the stack. Error: updating prometheus-adapter: reconciling PrometheusAdapter Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-adapter: expected 3 replicas, got 2 updated replicas
updating prometheus-k8s: waiting for Prometheus object changes failed: waiting for Prometheus openshift-monitoring/k8s: expected 2 replicas, got 1 updated replicas
        clusteroperator/network is degraded because DaemonSet "openshift-sdn/sdn" rollout is not making progress - last change 2021-08-25T03:05:51Z
        clusteroperator/service-ca is degraded because Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps signing-cabundle)


[must-gather      ] OUT namespace/openshift-must-gather-9qjbv created
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-bcltb created
[must-gather      ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:945aa0a5d3d96df076bf139b19910683ac20fd63dd385821261c7317d25a3cb3 created
[must-gather-rd462] OUT gather did not start: timed out waiting for the condition
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-bcltb deleted
[must-gather      ] OUT namespace/openshift-must-gather-9qjbv deleted


When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information.
ClusterID: e9554f1e-e262-44b2-acf1-7274eb50a491
ClusterVersion: Stable at "4.9.0-0.nightly-ppc64le-2021-08-19-120135"
ClusterOperators:
        clusteroperator/authentication is degraded because APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in apiserver-6774bf576b-8p6f9 pod)
        clusteroperator/console is not available (DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-ppc64le-2021-08-19-120135) because All is well
        clusteroperator/kube-apiserver is degraded because StaticPodsDegraded: pod/kube-apiserver-rdr-cicd-e6b7-mon01-master-0 container "kube-apiserver" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json841485797: no space left on device
        clusteroperator/kube-controller-manager is degraded because StaticPodsDegraded: pod/kube-controller-manager-rdr-cicd-e6b7-mon01-master-0 container "cluster-policy-controller" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json094090288: no space left on device
StaticPodsDegraded: pod/kube-controller-manager-rdr-cicd-e6b7-mon01-master-0 container "kube-controller-manager" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json185948185: no space left on device
StaticPodsDegraded: pod/kube-controller-manager-rdr-cicd-e6b7-mon01-master-1 container "cluster-policy-controller" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json080461409: no space left on device
        clusteroperator/kube-scheduler is degraded because StaticPodsDegraded: pod/openshift-kube-scheduler-rdr-cicd-e6b7-mon01-master-0 container "kube-scheduler" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json009285108: no space left on device
StaticPodsDegraded: pod/openshift-kube-scheduler-rdr-cicd-e6b7-mon01-master-1 container "kube-scheduler" is waiting: CreateContainerError: open /run/containers/storage/overlay-layers/.tmp-mountpoints.json735593991: no space left on device
        clusteroperator/monitoring is not available (Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.) because Failed to rollout the stack. Error: updating prometheus-adapter: reconciling PrometheusAdapter Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-adapter: expected 3 replicas, got 2 updated replicas
updating prometheus-k8s: waiting for Prometheus object changes failed: waiting for Prometheus openshift-monitoring/k8s: expected 2 replicas, got 1 updated replicas
        clusteroperator/network is degraded because DaemonSet "openshift-sdn/sdn" rollout is not making progress - last change 2021-08-25T03:05:51Z
        clusteroperator/service-ca is degraded because Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps signing-cabundle)


Gathering data for ns/openshift-config...
Gathering data for ns/openshift-config-managed...
Gathering data for ns/openshift-authentication...
Gathering data for ns/openshift-authentication-operator...
Gathering data for ns/openshift-ingress...
Gathering data for ns/openshift-oauth-apiserver...
Gathering data for ns/openshift-machine-api...
Gathering data for ns/openshift-cloud-controller-manager-operator...
Gathering data for ns/openshift-cloud-controller-manager...
Gathering data for ns/openshift-cloud-credential-operator...
Gathering data for ns/openshift-config-operator...
Gathering data for ns/openshift-console-operator...
Gathering data for ns/openshift-console...
Gathering data for ns/openshift-cluster-storage-operator...
Gathering data for ns/openshift-dns-operator...
Gathering data for ns/openshift-dns...
Gathering data for ns/openshift-etcd-operator...
Gathering data for ns/openshift-etcd...
Gathering data for ns/openshift-image-registry...
Gathering data for ns/openshift-ingress-operator...
Gathering data for ns/openshift-ingress-canary...
Gathering data for ns/openshift-insights...
Gathering data for ns/openshift-kube-apiserver-operator...
Gathering data for ns/openshift-kube-apiserver...
Gathering data for ns/openshift-kube-controller-manager...
Gathering data for ns/openshift-kube-controller-manager-operator...
Gathering data for ns/openshift-kube-scheduler...
Gathering data for ns/openshift-kube-scheduler-operator...
Gathering data for ns/openshift-kube-storage-version-migrator...
Gathering data for ns/openshift-kube-storage-version-migrator-operator...
Gathering data for ns/openshift-cluster-machine-approver...
Gathering data for ns/openshift-machine-config-operator...
Gathering data for ns/openshift-kni-infra...
Gathering data for ns/openshift-openstack-infra...
Gathering data for ns/openshift-ovirt-infra...
Gathering data for ns/openshift-vsphere-infra...
Gathering data for ns/openshift-marketplace...
Gathering data for ns/openshift-monitoring...
Gathering data for ns/openshift-user-workload-monitoring...
Gathering data for ns/openshift-multus...
Gathering data for ns/openshift-sdn...
Gathering data for ns/openshift-host-network...
Gathering data for ns/openshift-network-diagnostics...
Gathering data for ns/openshift-network-operator...
Gathering data for ns/openshift-cluster-node-tuning-operator...
Gathering data for ns/openshift-apiserver-operator...
Gathering data for ns/openshift-apiserver...
Gathering data for ns/openshift-controller-manager-operator...
Gathering data for ns/openshift-controller-manager...
Gathering data for ns/openshift-cluster-samples-operator...
Gathering data for ns/openshift...
Gathering data for ns/openshift-operator-lifecycle-manager...
Gathering data for ns/openshift-service-ca-operator...
Gathering data for ns/openshift-service-ca...
Gathering data for ns/openshift-cluster-csi-drivers...
Wrote inspect data to must-gather.local.4336199250967024295.
error running backup collection: errors ocurred while gathering data:
    [unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request, skipping gathering namespaces/openshift-authentication due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-authentication

    one or more errors ocurred while gathering container data for pod oauth-openshift-7d57f74c66-tb4bk:

    [container "oauth-openshift" in pod "oauth-openshift-7d57f74c66-tb4bk" is waiting to start: ContainerCreating, previous terminated container "oauth-openshift" in pod "oauth-openshift-7d57f74c66-tb4bk" not found], skipping gathering namespaces/openshift-machine-api due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-machine-api

    [one or more errors ocurred while gathering container data for pod cluster-baremetal-operator-749d46f4d9-k6pzq:

    [container "cluster-baremetal-operator" in pod "cluster-baremetal-operator-749d46f4d9-k6pzq" is waiting to start: ContainerCreating, previous terminated container "cluster-baremetal-operator" in pod "cluster-baremetal-operator-749d46f4d9-k6pzq" not found, previous terminated container "kube-rbac-proxy" in pod "cluster-baremetal-operator-749d46f4d9-k6pzq" not found, container "kube-rbac-proxy" in pod "cluster-baremetal-operator-749d46f4d9-k6pzq" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod machine-api-operator-7b88cf8d4-cljtt:

    [container "kube-rbac-proxy" in pod "machine-api-operator-7b88cf8d4-cljtt" is waiting to start: ContainerCreating, previous terminated container "kube-rbac-proxy" in pod "machine-api-operator-7b88cf8d4-cljtt" not found, container "machine-api-operator" in pod "machine-api-operator-7b88cf8d4-cljtt" is waiting to start: ContainerCreating, previous terminated container "machine-api-operator" in pod "machine-api-operator-7b88cf8d4-cljtt" not found]], skipping gathering namespaces/openshift-cloud-credential-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-cloud-credential-operator

    one or more errors ocurred while gathering container data for pod cloud-credential-operator-f46d67f5b-g2khs:

    [previous terminated container "kube-rbac-proxy" in pod "cloud-credential-operator-f46d67f5b-g2khs" not found, container "kube-rbac-proxy" in pod "cloud-credential-operator-f46d67f5b-g2khs" is waiting to start: ContainerCreating, previous terminated container "cloud-credential-operator" in pod "cloud-credential-operator-f46d67f5b-g2khs" not found, container "cloud-credential-operator" in pod "cloud-credential-operator-f46d67f5b-g2khs" is waiting to start: ContainerCreating], skipping gathering namespaces/openshift-config-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-config-operator

    one or more errors ocurred while gathering container data for pod openshift-config-operator-748d9d4d57-hjp4v:

    [previous terminated container "openshift-config-operator" in pod "openshift-config-operator-748d9d4d57-hjp4v" not found, container "openshift-config-operator" in pod "openshift-config-operator-748d9d4d57-hjp4v" is waiting to start: ContainerCreating], skipping gathering namespaces/openshift-console due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-console

    [one or more errors ocurred while gathering container data for pod console-85b77c55f5-zfl6h:

    [previous terminated container "console" in pod "console-85b77c55f5-zfl6h" not found, container "console" in pod "console-85b77c55f5-zfl6h" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod downloads-7899d55494-6n2kl:

    [container "download-server" in pod "downloads-7899d55494-6n2kl" is waiting to start: ContainerCreating, previous terminated container "download-server" in pod "downloads-7899d55494-6n2kl" not found]], skipping gathering namespaces/openshift-cluster-storage-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-cluster-storage-operator

    [one or more errors ocurred while gathering container data for pod csi-snapshot-controller-74cbcb446f-q7bff:

    [previous terminated container "snapshot-controller" in pod "csi-snapshot-controller-74cbcb446f-q7bff" not found, container "snapshot-controller" in pod "csi-snapshot-controller-74cbcb446f-q7bff" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod csi-snapshot-controller-operator-6b594f57bf-85qt7:

    [container "csi-snapshot-controller-operator" in pod "csi-snapshot-controller-operator-6b594f57bf-85qt7" is waiting to start: ContainerCreating, previous terminated container "csi-snapshot-controller-operator" in pod "csi-snapshot-controller-operator-6b594f57bf-85qt7" not found], one or more errors ocurred while gathering container data for pod csi-snapshot-webhook-6565589b6f-w2m5n:

    [previous terminated container "webhook" in pod "csi-snapshot-webhook-6565589b6f-w2m5n" not found, container "webhook" in pod "csi-snapshot-webhook-6565589b6f-w2m5n" is waiting to start: ContainerCreating]], skipping gathering namespaces/openshift-dns-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-dns-operator

    one or more errors ocurred while gathering container data for pod dns-operator-5888f98bfd-gltf8:

    [previous terminated container "dns-operator" in pod "dns-operator-5888f98bfd-gltf8" not found, container "dns-operator" in pod "dns-operator-5888f98bfd-gltf8" is waiting to start: ContainerCreating, container "kube-rbac-proxy" in pod "dns-operator-5888f98bfd-gltf8" is waiting to start: ContainerCreating, previous terminated container "kube-rbac-proxy" in pod "dns-operator-5888f98bfd-gltf8" not found], skipping gathering namespaces/openshift-etcd-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-etcd-operator

    one or more errors ocurred while gathering container data for pod etcd-operator-7bdb4b7779-rdsk9:

    [previous terminated container "etcd-operator" in pod "etcd-operator-7bdb4b7779-rdsk9" not found, container "etcd-operator" in pod "etcd-operator-7bdb4b7779-rdsk9" is waiting to start: ContainerCreating], skipping gathering namespaces/openshift-image-registry due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-image-registry

    one or more errors ocurred while gathering container data for pod image-pruner-27164160--1-2777n:

    [container "image-pruner" in pod "image-pruner-27164160--1-2777n" is waiting to start: ContainerCreating, previous terminated container "image-pruner" in pod "image-pruner-27164160--1-2777n" not found], skipping gathering namespaces/openshift-ingress-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-ingress-operator

    one or more errors ocurred while gathering container data for pod ingress-operator-7448c475d8-p4ssm:

    [container "ingress-operator" in pod "ingress-operator-7448c475d8-p4ssm" is waiting to start: ContainerCreating, previous terminated container "ingress-operator" in pod "ingress-operator-7448c475d8-p4ssm" not found, container "kube-rbac-proxy" in pod "ingress-operator-7448c475d8-p4ssm" is waiting to start: ContainerCreating, previous terminated container "kube-rbac-proxy" in pod "ingress-operator-7448c475d8-p4ssm" not found], skipping gathering secrets/support due to error: secrets "support" not found, skipping gathering namespaces/openshift-cluster-machine-approver due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-cluster-machine-approver

    one or more errors ocurred while gathering container data for pod machine-approver-697778c65-v4d56:

    [previous terminated container "kube-rbac-proxy" in pod "machine-approver-697778c65-v4d56" not found, container "kube-rbac-proxy" in pod "machine-approver-697778c65-v4d56" is waiting to start: ContainerCreating, container "machine-approver-controller" in pod "machine-approver-697778c65-v4d56" is waiting to start: ContainerCreating, previous terminated container "machine-approver-controller" in pod "machine-approver-697778c65-v4d56" not found], skipping gathering namespaces/openshift-machine-config-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-machine-config-operator

    [one or more errors ocurred while gathering container data for pod machine-config-controller-56449d7c78-9f4xp:

    [previous terminated container "machine-config-controller" in pod "machine-config-controller-56449d7c78-9f4xp" not found, container "machine-config-controller" in pod "machine-config-controller-56449d7c78-9f4xp" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod machine-config-operator-69977ff6f9-n7csm:

    [previous terminated container "machine-config-operator" in pod "machine-config-operator-69977ff6f9-n7csm" not found, container "machine-config-operator" in pod "machine-config-operator-69977ff6f9-n7csm" is waiting to start: ContainerCreating]], skipping gathering namespaces/openshift-marketplace due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-marketplace

    [one or more errors ocurred while gathering container data for pod certified-operators-g9qkv:

    [container "registry-server" in pod "certified-operators-g9qkv" is waiting to start: ImageInspectError, previous terminated container "registry-server" in pod "certified-operators-g9qkv" not found], one or more errors ocurred while gathering container data for pod community-operators-fn2f4:

    [container "registry-server" in pod "community-operators-fn2f4" is waiting to start: ImageInspectError, previous terminated container "registry-server" in pod "community-operators-fn2f4" not found], one or more errors ocurred while gathering container data for pod redhat-marketplace-gsrdp:

    [previous terminated container "registry-server" in pod "redhat-marketplace-gsrdp" not found, container "registry-server" in pod "redhat-marketplace-gsrdp" is waiting to start: ImageInspectError], one or more errors ocurred while gathering container data for pod redhat-operators-6jxrl:

    [previous terminated container "registry-server" in pod "redhat-operators-6jxrl" not found, container "registry-server" in pod "redhat-operators-6jxrl" is waiting to start: ImageInspectError]], skipping gathering namespaces/openshift-monitoring due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-monitoring

    [one or more errors ocurred while gathering container data for pod cluster-monitoring-operator-7cbbf987c4-gxdbp:

    [previous terminated container "kube-rbac-proxy" in pod "cluster-monitoring-operator-7cbbf987c4-gxdbp" not found, container "kube-rbac-proxy" in pod "cluster-monitoring-operator-7cbbf987c4-gxdbp" is waiting to start: ContainerCreating, previous terminated container "cluster-monitoring-operator" in pod "cluster-monitoring-operator-7cbbf987c4-gxdbp" not found, container "cluster-monitoring-operator" in pod "cluster-monitoring-operator-7cbbf987c4-gxdbp" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod prometheus-adapter-6d9c5fd87-nlcnb:

    [previous terminated container "prometheus-adapter" in pod "prometheus-adapter-6d9c5fd87-nlcnb" not found, container "prometheus-adapter" in pod "prometheus-adapter-6d9c5fd87-nlcnb" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod prometheus-k8s-0:

    [container "prometheus" in pod "prometheus-k8s-0" is waiting to start: PodInitializing, previous terminated container "prometheus" in pod "prometheus-k8s-0" not found, container "config-reloader" in pod "prometheus-k8s-0" is waiting to start: PodInitializing, previous terminated container "config-reloader" in pod "prometheus-k8s-0" not found, container "thanos-sidecar" in pod "prometheus-k8s-0" is waiting to start: PodInitializing, previous terminated container "thanos-sidecar" in pod "prometheus-k8s-0" not found, previous terminated container "prometheus-proxy" in pod "prometheus-k8s-0" not found, container "prometheus-proxy" in pod "prometheus-k8s-0" is waiting to start: PodInitializing, container "kube-rbac-proxy" in pod "prometheus-k8s-0" is waiting to start: PodInitializing, previous terminated container "kube-rbac-proxy" in pod "prometheus-k8s-0" not found, previous terminated container "prom-label-proxy" in pod "prometheus-k8s-0" not found, container "prom-label-proxy" in pod "prometheus-k8s-0" is waiting to start: PodInitializing, container "kube-rbac-proxy-thanos" in pod "prometheus-k8s-0" is waiting to start: PodInitializing, previous terminated container "kube-rbac-proxy-thanos" in pod "prometheus-k8s-0" not found, previous terminated container "init-config-reloader" in pod "prometheus-k8s-0" not found, container "init-config-reloader" in pod "prometheus-k8s-0" is waiting to start: PodInitializing]], skipping gathering EgressFirewall.k8s.ovn.org due to error: the server doesn't have a resource type "EgressFirewall", skipping gathering EgressIP.k8s.ovn.org due to error: the server doesn't have a resource type "EgressIP", skipping gathering endpoints/host-etcd-2 due to error: endpoints "host-etcd-2" not found, skipping gathering namespaces/openshift-cluster-samples-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-cluster-samples-operator

    one or more errors ocurred while gathering container data for pod cluster-samples-operator-664497f4d8-dgx4d:

    [previous terminated container "cluster-samples-operator" in pod "cluster-samples-operator-664497f4d8-dgx4d" not found, container "cluster-samples-operator" in pod "cluster-samples-operator-664497f4d8-dgx4d" is waiting to start: ContainerCreating, container "cluster-samples-operator-watch" in pod "cluster-samples-operator-664497f4d8-dgx4d" is waiting to start: ContainerCreating, previous terminated container "cluster-samples-operator-watch" in pod "cluster-samples-operator-664497f4d8-dgx4d" not found], skipping gathering namespaces/openshift-operator-lifecycle-manager due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-operator-lifecycle-manager

    [one or more errors ocurred while gathering container data for pod collect-profiles-27159675--1-jjxs2:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27159675--1-jjxs2" not found, container "collect-profiles" in pod "collect-profiles-27159675--1-jjxs2" is waiting to start: CreateContainerError], one or more errors ocurred while gathering container data for pod collect-profiles-27159840--1-rjqd4:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27159840--1-rjqd4" not found, container "collect-profiles" in pod "collect-profiles-27159840--1-rjqd4" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27161040--1-sj9z8:

    [container "collect-profiles" in pod "collect-profiles-27161040--1-sj9z8" is waiting to start: CreateContainerError, previous terminated container "collect-profiles" in pod "collect-profiles-27161040--1-sj9z8" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27161265--1-znp5s:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27161265--1-znp5s" not found, container "collect-profiles" in pod "collect-profiles-27161265--1-znp5s" is waiting to start: CreateContainerError], one or more errors ocurred while gathering container data for pod collect-profiles-27161310--1-wd5vb:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27161310--1-wd5vb" not found, container "collect-profiles" in pod "collect-profiles-27161310--1-wd5vb" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27161460--1-q7gfb:

    [container "collect-profiles" in pod "collect-profiles-27161460--1-q7gfb" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27161460--1-q7gfb" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27161490--1-jwwm5:

    [container "collect-profiles" in pod "collect-profiles-27161490--1-jwwm5" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27161490--1-jwwm5" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163125--1-2bh7c:

    [container "collect-profiles" in pod "collect-profiles-27163125--1-2bh7c" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163125--1-2bh7c" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163140--1-vvj6l:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163140--1-vvj6l" not found, container "collect-profiles" in pod "collect-profiles-27163140--1-vvj6l" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163155--1-n22lp:

    [container "collect-profiles" in pod "collect-profiles-27163155--1-n22lp" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163155--1-n22lp" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163170--1-9w5sl:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163170--1-9w5sl" not found, container "collect-profiles" in pod "collect-profiles-27163170--1-9w5sl" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163185--1-m6gr5:

    [container "collect-profiles" in pod "collect-profiles-27163185--1-m6gr5" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163185--1-m6gr5" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163200--1-9jpk8:

    [container "collect-profiles" in pod "collect-profiles-27163200--1-9jpk8" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163200--1-9jpk8" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163215--1-d2lrd:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163215--1-d2lrd" not found, container "collect-profiles" in pod "collect-profiles-27163215--1-d2lrd" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163230--1-bj2vg:

    [container "collect-profiles" in pod "collect-profiles-27163230--1-bj2vg" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163230--1-bj2vg" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163245--1-wqlrl:

    [container "collect-profiles" in pod "collect-profiles-27163245--1-wqlrl" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163245--1-wqlrl" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163260--1-jjnps:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163260--1-jjnps" not found, container "collect-profiles" in pod "collect-profiles-27163260--1-jjnps" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163275--1-2xzsm:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163275--1-2xzsm" not found, container "collect-profiles" in pod "collect-profiles-27163275--1-2xzsm" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163290--1-hssds:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163290--1-hssds" not found, container "collect-profiles" in pod "collect-profiles-27163290--1-hssds" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163305--1-9s8qc:

    [container "collect-profiles" in pod "collect-profiles-27163305--1-9s8qc" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163305--1-9s8qc" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163320--1-c86v5:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163320--1-c86v5" not found, container "collect-profiles" in pod "collect-profiles-27163320--1-c86v5" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163335--1-ssrpk:

    [container "collect-profiles" in pod "collect-profiles-27163335--1-ssrpk" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163335--1-ssrpk" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163350--1-xstrt:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163350--1-xstrt" not found, container "collect-profiles" in pod "collect-profiles-27163350--1-xstrt" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163365--1-q468s:

    [container "collect-profiles" in pod "collect-profiles-27163365--1-q468s" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163365--1-q468s" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163380--1-xsht6:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163380--1-xsht6" not found, container "collect-profiles" in pod "collect-profiles-27163380--1-xsht6" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163395--1-njd2g:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163395--1-njd2g" not found, container "collect-profiles" in pod "collect-profiles-27163395--1-njd2g" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163410--1-k9gqd:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163410--1-k9gqd" not found, container "collect-profiles" in pod "collect-profiles-27163410--1-k9gqd" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163425--1-x7d9p:

    [container "collect-profiles" in pod "collect-profiles-27163425--1-x7d9p" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163425--1-x7d9p" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163440--1-wpphx:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163440--1-wpphx" not found, container "collect-profiles" in pod "collect-profiles-27163440--1-wpphx" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163455--1-qpjrd:

    [container "collect-profiles" in pod "collect-profiles-27163455--1-qpjrd" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163455--1-qpjrd" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163470--1-gftz4:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163470--1-gftz4" not found, container "collect-profiles" in pod "collect-profiles-27163470--1-gftz4" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163485--1-46dbd:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163485--1-46dbd" not found, container "collect-profiles" in pod "collect-profiles-27163485--1-46dbd" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163500--1-c4r7q:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163500--1-c4r7q" not found, container "collect-profiles" in pod "collect-profiles-27163500--1-c4r7q" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163515--1-tgzvc:

    [container "collect-profiles" in pod "collect-profiles-27163515--1-tgzvc" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163515--1-tgzvc" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163530--1-p6rqv:

    [container "collect-profiles" in pod "collect-profiles-27163530--1-p6rqv" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163530--1-p6rqv" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163545--1-gv4k8:

    [container "collect-profiles" in pod "collect-profiles-27163545--1-gv4k8" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163545--1-gv4k8" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163560--1-j7647:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163560--1-j7647" not found, container "collect-profiles" in pod "collect-profiles-27163560--1-j7647" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163575--1-49kqt:

    [container "collect-profiles" in pod "collect-profiles-27163575--1-49kqt" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163575--1-49kqt" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163590--1-gz22k:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163590--1-gz22k" not found, container "collect-profiles" in pod "collect-profiles-27163590--1-gz22k" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163605--1-swvbl:

    [container "collect-profiles" in pod "collect-profiles-27163605--1-swvbl" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163605--1-swvbl" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163620--1-kvbjw:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163620--1-kvbjw" not found, container "collect-profiles" in pod "collect-profiles-27163620--1-kvbjw" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163635--1-gcjzm:

    [container "collect-profiles" in pod "collect-profiles-27163635--1-gcjzm" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163635--1-gcjzm" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163650--1-hngp8:

    [container "collect-profiles" in pod "collect-profiles-27163650--1-hngp8" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163650--1-hngp8" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163665--1-l7jgm:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163665--1-l7jgm" not found, container "collect-profiles" in pod "collect-profiles-27163665--1-l7jgm" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163680--1-nq44n:

    [container "collect-profiles" in pod "collect-profiles-27163680--1-nq44n" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163680--1-nq44n" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163695--1-r7phm:

    [container "collect-profiles" in pod "collect-profiles-27163695--1-r7phm" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163695--1-r7phm" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163710--1-zg66z:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163710--1-zg66z" not found, container "collect-profiles" in pod "collect-profiles-27163710--1-zg66z" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163725--1-j8wm2:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163725--1-j8wm2" not found, container "collect-profiles" in pod "collect-profiles-27163725--1-j8wm2" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163740--1-r7pmr:

    [container "collect-profiles" in pod "collect-profiles-27163740--1-r7pmr" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163740--1-r7pmr" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163755--1-tbnzc:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163755--1-tbnzc" not found, container "collect-profiles" in pod "collect-profiles-27163755--1-tbnzc" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163770--1-kh6ct:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163770--1-kh6ct" not found, container "collect-profiles" in pod "collect-profiles-27163770--1-kh6ct" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163785--1-bl7r2:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163785--1-bl7r2" not found, container "collect-profiles" in pod "collect-profiles-27163785--1-bl7r2" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163800--1-gd5sg:

    [container "collect-profiles" in pod "collect-profiles-27163800--1-gd5sg" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163800--1-gd5sg" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163815--1-glf4d:

    [container "collect-profiles" in pod "collect-profiles-27163815--1-glf4d" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163815--1-glf4d" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163830--1-ds752:

    [container "collect-profiles" in pod "collect-profiles-27163830--1-ds752" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163830--1-ds752" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163845--1-q5ccz:

    [container "collect-profiles" in pod "collect-profiles-27163845--1-q5ccz" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163845--1-q5ccz" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163860--1-78t27:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163860--1-78t27" not found, container "collect-profiles" in pod "collect-profiles-27163860--1-78t27" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163875--1-7x9p9:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163875--1-7x9p9" not found, container "collect-profiles" in pod "collect-profiles-27163875--1-7x9p9" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163890--1-xhgfp:

    [container "collect-profiles" in pod "collect-profiles-27163890--1-xhgfp" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163890--1-xhgfp" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163905--1-x9zjv:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163905--1-x9zjv" not found, container "collect-profiles" in pod "collect-profiles-27163905--1-x9zjv" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163920--1-v688l:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163920--1-v688l" not found, container "collect-profiles" in pod "collect-profiles-27163920--1-v688l" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163935--1-cxd5l:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163935--1-cxd5l" not found, container "collect-profiles" in pod "collect-profiles-27163935--1-cxd5l" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163950--1-ww2s8:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163950--1-ww2s8" not found, container "collect-profiles" in pod "collect-profiles-27163950--1-ww2s8" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163965--1-zzx6d:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27163965--1-zzx6d" not found, container "collect-profiles" in pod "collect-profiles-27163965--1-zzx6d" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27163980--1-cp7wf:

    [container "collect-profiles" in pod "collect-profiles-27163980--1-cp7wf" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163980--1-cp7wf" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27163995--1-5flwd:

    [container "collect-profiles" in pod "collect-profiles-27163995--1-5flwd" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27163995--1-5flwd" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164010--1-g7l9f:

    [container "collect-profiles" in pod "collect-profiles-27164010--1-g7l9f" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164010--1-g7l9f" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164025--1-lhzrl:

    [container "collect-profiles" in pod "collect-profiles-27164025--1-lhzrl" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164025--1-lhzrl" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164040--1-qsbmh:

    [container "collect-profiles" in pod "collect-profiles-27164040--1-qsbmh" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164040--1-qsbmh" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164055--1-wlmcg:

    [container "collect-profiles" in pod "collect-profiles-27164055--1-wlmcg" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164055--1-wlmcg" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164070--1-2nqf6:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164070--1-2nqf6" not found, container "collect-profiles" in pod "collect-profiles-27164070--1-2nqf6" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164085--1-j8sld:

    [container "collect-profiles" in pod "collect-profiles-27164085--1-j8sld" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164085--1-j8sld" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164100--1-9f4rz:

    [container "collect-profiles" in pod "collect-profiles-27164100--1-9f4rz" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164100--1-9f4rz" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164115--1-5tptl:

    [container "collect-profiles" in pod "collect-profiles-27164115--1-5tptl" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164115--1-5tptl" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164130--1-8tzqj:

    [container "collect-profiles" in pod "collect-profiles-27164130--1-8tzqj" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164130--1-8tzqj" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164145--1-z6v5h:

    [container "collect-profiles" in pod "collect-profiles-27164145--1-z6v5h" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164145--1-z6v5h" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164160--1-knq2g:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164160--1-knq2g" not found, container "collect-profiles" in pod "collect-profiles-27164160--1-knq2g" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164175--1-4wq2v:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164175--1-4wq2v" not found, container "collect-profiles" in pod "collect-profiles-27164175--1-4wq2v" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164190--1-w4f2x:

    [container "collect-profiles" in pod "collect-profiles-27164190--1-w4f2x" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164190--1-w4f2x" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164205--1-zxhs7:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164205--1-zxhs7" not found, container "collect-profiles" in pod "collect-profiles-27164205--1-zxhs7" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164220--1-hf56p:

    [container "collect-profiles" in pod "collect-profiles-27164220--1-hf56p" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164220--1-hf56p" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164235--1-pqvsd:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164235--1-pqvsd" not found, container "collect-profiles" in pod "collect-profiles-27164235--1-pqvsd" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164250--1-h6w56:

    [container "collect-profiles" in pod "collect-profiles-27164250--1-h6w56" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164250--1-h6w56" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164265--1-q9lzk:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164265--1-q9lzk" not found, container "collect-profiles" in pod "collect-profiles-27164265--1-q9lzk" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164280--1-fwwr4:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164280--1-fwwr4" not found, container "collect-profiles" in pod "collect-profiles-27164280--1-fwwr4" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164295--1-zlrvb:

    [container "collect-profiles" in pod "collect-profiles-27164295--1-zlrvb" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164295--1-zlrvb" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164310--1-hgjjj:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164310--1-hgjjj" not found, container "collect-profiles" in pod "collect-profiles-27164310--1-hgjjj" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164325--1-wgxsf:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164325--1-wgxsf" not found, container "collect-profiles" in pod "collect-profiles-27164325--1-wgxsf" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164340--1-2slj5:

    [container "collect-profiles" in pod "collect-profiles-27164340--1-2slj5" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164340--1-2slj5" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164355--1-62nvd:

    [container "collect-profiles" in pod "collect-profiles-27164355--1-62nvd" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164355--1-62nvd" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164370--1-kcnsg:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164370--1-kcnsg" not found, container "collect-profiles" in pod "collect-profiles-27164370--1-kcnsg" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164385--1-fndc6:

    [container "collect-profiles" in pod "collect-profiles-27164385--1-fndc6" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164385--1-fndc6" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164400--1-2nzmb:

    [container "collect-profiles" in pod "collect-profiles-27164400--1-2nzmb" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164400--1-2nzmb" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164415--1-ptb8r:

    [container "collect-profiles" in pod "collect-profiles-27164415--1-ptb8r" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164415--1-ptb8r" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164430--1-xjlh2:

    [container "collect-profiles" in pod "collect-profiles-27164430--1-xjlh2" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164430--1-xjlh2" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164445--1-svfq4:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164445--1-svfq4" not found, container "collect-profiles" in pod "collect-profiles-27164445--1-svfq4" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164460--1-qpzmf:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164460--1-qpzmf" not found, container "collect-profiles" in pod "collect-profiles-27164460--1-qpzmf" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164475--1-4sxsk:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164475--1-4sxsk" not found, container "collect-profiles" in pod "collect-profiles-27164475--1-4sxsk" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164490--1-qsc49:

    [container "collect-profiles" in pod "collect-profiles-27164490--1-qsc49" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164490--1-qsc49" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164505--1-qfc6r:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164505--1-qfc6r" not found, container "collect-profiles" in pod "collect-profiles-27164505--1-qfc6r" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164520--1-xpbq5:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164520--1-xpbq5" not found, container "collect-profiles" in pod "collect-profiles-27164520--1-xpbq5" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164535--1-z7k4n:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164535--1-z7k4n" not found, container "collect-profiles" in pod "collect-profiles-27164535--1-z7k4n" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164550--1-9rq2x:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164550--1-9rq2x" not found, container "collect-profiles" in pod "collect-profiles-27164550--1-9rq2x" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164565--1-shv6q:

    [container "collect-profiles" in pod "collect-profiles-27164565--1-shv6q" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164565--1-shv6q" not found], one or more errors ocurred while gathering container data for pod collect-profiles-27164580--1-gxcng:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164580--1-gxcng" not found, container "collect-profiles" in pod "collect-profiles-27164580--1-gxcng" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164595--1-zsc2z:

    [previous terminated container "collect-profiles" in pod "collect-profiles-27164595--1-zsc2z" not found, container "collect-profiles" in pod "collect-profiles-27164595--1-zsc2z" is waiting to start: ContainerCreating], one or more errors ocurred while gathering container data for pod collect-profiles-27164610--1-grr9b:

    [container "collect-profiles" in pod "collect-profiles-27164610--1-grr9b" is waiting to start: ContainerCreating, previous terminated container "collect-profiles" in pod "collect-profiles-27164610--1-grr9b" not found], one or more errors ocurred while gathering container data for pod package-server-manager-77f8d8b554-j5g6j:

    [container "package-server-manager" in pod "package-server-manager-77f8d8b554-j5g6j" is waiting to start: ContainerCreating, previous terminated container "package-server-manager" in pod "package-server-manager-77f8d8b554-j5g6j" not found]], skipping gathering namespaces/openshift-service-ca-operator due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-service-ca-operator

    one or more errors ocurred while gathering container data for pod service-ca-operator-7667c9f4df-cmfhg:

    [container "service-ca-operator" in pod "service-ca-operator-7667c9f4df-cmfhg" is waiting to start: ContainerCreating, previous terminated container "service-ca-operator" in pod "service-ca-operator-7667c9f4df-cmfhg" not found], skipping gathering namespaces/openshift-service-ca due to error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-service-ca

    one or more errors ocurred while gathering container data for pod service-ca-997679556-dbr8j:

    [previous terminated container "service-ca-controller" in pod "service-ca-997679556-dbr8j" not found, container "service-ca-controller" in pod "service-ca-997679556-dbr8j" is waiting to start: ContainerCreating], skipping gathering namespaces/openshift-manila-csi-driver due to error: namespaces "openshift-manila-csi-driver" not found]error: gather did not start for pod must-gather-rd462: timed out waiting for the condition

Comment 11 Prashanth Sundararaman 2021-08-25 16:15:27 UTC
Oh, actually what is filling up is the tmpfs (/run), which comes from RAM. What is the memory usage of the system? Can we confirm whether there is sufficient memory on the system?

Comment 12 Prashanth Sundararaman 2021-08-25 16:47:13 UTC
That being said, looking at the original comment, /run is a tmpfs of 8G and almost 100% of it is used up. Can we get more details on what is causing /run to fill up? What are the files/folders you see there? For example, from the previous comment on the contents of /run:

427M    /run/log/journal/8c700d6e302948dd8d2bda2d8db6bee4

Can we get a snippet of this journal log to see what is causing it to be 427M? I believe if we can find that, we may be able to find the underlying cause.

Also, that output only shows /run being around 450M, whereas in the original comment it almost filled up 8G. Can we please get the output when /run is actually filled up?
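
For reference, the data being asked for here can be gathered on the affected node with commands along these lines (a rough sketch; run as root, adjust the depth/paths as needed):

df -h /run                                    # how full the /run tmpfs is
free -h                                       # overall memory, since tmpfs pages come out of RAM
du -x -h --max-depth=2 /run | sort -h | tail  # largest consumers under /run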

Comment 13 Manoj Kumar 2021-08-28 01:31:19 UTC
What was filling up /run was a bunch of files in /run/crio/exec-pid-dir. Each file takes 64k of space, even though each holds only 6-7 bytes with the PID. When the number of files gets to 125k, the 8G filesystem fills up.

The number of files in /run/crio/exec-pid-dir is growing at a rate of ~34/minute, which means that an idle cluster would fill up /run in about 2+ days.

[root@rdr-cicd-e6b7-mon01-master-1 exec-pid-dir]# while true; do ls |wc; sleep 60; done
   7998    7998  807798
   8034    8034  811434
   8070    8070  815070
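
As a back-of-the-envelope check on those numbers (a sketch; assumes each pid file pins a full 64k of the tmpfs, as observed above):

echo $(( 8 * 1024 * 1024 / 64 ))             # files that fit in an 8G tmpfs at 64k each -> 131072 (~125k usable)
echo $(( 8 * 1024 * 1024 / 64 / 34 / 60 ))   # hours to fill at ~34 new files/minute -> ~64 hours, i.e. 2+ days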

The processes seem to be related to a collect-profiles cronjob.  

[root@rdr-cicd-e6b7-mon01-bastion-0 ~]# oc get all
NAME                                          READY   STATUS        RESTARTS   AGE
pod/catalog-operator-7d866fcb7b-7748m         1/1     Running       38         5d
pod/collect-profiles-27161040--1-sj9z8        0/1     Terminating   0          5d5h
pod/collect-profiles-27163485--1-46dbd        0/1     Terminating   0          3d12h
pod/collect-profiles-27163605--1-swvbl        0/1     Terminating   0          3d10h
pod/collect-profiles-27164040--1-qsbmh        0/1     Terminating   0          3d3h
pod/collect-profiles-27164190--1-w4f2x        0/1     Terminating   0          3d
pod/collect-profiles-27164220--1-hf56p        0/1     Terminating   0          3d
pod/collect-profiles-27165030--1-m46zq        0/1     Terminating   0          2d10h
pod/collect-profiles-27165075--1-g4l6j        0/1     Terminating   0          2d10h
pod/collect-profiles-27166365--1-wdfvx        0/1     Terminating   0          36h
pod/collect-profiles-27166455--1-dc7cg        0/1     Terminating   0          35h
pod/collect-profiles-27166890--1-8ctrb        0/1     Terminating   0          3h56m
pod/collect-profiles-27167340--1-9dq7x        0/1     Terminating   0          20h
pod/collect-profiles-27167490--1-b6xd8        0/1     Terminating   0          17h
pod/collect-profiles-27167715--1-7bwfq        0/1     Terminating   0          14h
pod/collect-profiles-27168330--1-2cqqb        0/1     Error         0          168m
pod/collect-profiles-27168330--1-6djvf        0/1     Error         0          164m
pod/collect-profiles-27168330--1-9kng2        0/1     Error         0          3h1m
pod/collect-profiles-27168330--1-c8xwq        0/1     Error         0          3h4m
pod/collect-profiles-27168330--1-gczht        0/1     Error         0          171m
pod/collect-profiles-27168330--1-rf4j6        0/1     Error         0          3h12m
pod/collect-profiles-27168330--1-sfmhk        0/1     Error         0          175m
pod/collect-profiles-27168525--1-6cfjp        0/1     Completed     0          44m
pod/collect-profiles-27168540--1-kkzc6        0/1     Completed     0          29m
pod/collect-profiles-27168555--1-pgrdq        0/1     Completed     0          14m
pod/olm-operator-7f89d88f77-c6gzz             1/1     Running       38         5d
pod/package-server-manager-77f8d8b554-v7pkn   1/1     Running       0          7h21m
pod/packageserver-958484549-9dnkz             1/1     Running       38         5d
pod/packageserver-958484549-nngpv             1/1     Running       41         5d

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/catalog-operator-metrics   ClusterIP   172.30.85.212    <none>        8443/TCP   7d10h
service/olm-operator-metrics       ClusterIP   172.30.204.130   <none>        8443/TCP   7d10h
service/packageserver-service      ClusterIP   172.30.112.254   <none>        5443/TCP   37h

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/catalog-operator         1/1     1            1           7d10h
deployment.apps/olm-operator             1/1     1            1           7d10h
deployment.apps/package-server-manager   1/1     1            1           7d10h
deployment.apps/packageserver            2/2     2            2           7d10h

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/catalog-operator-7d866fcb7b         1         1         1       7d10h
replicaset.apps/olm-operator-7f89d88f77             1         1         1       7d10h
replicaset.apps/package-server-manager-77f8d8b554   1         1         1       7d10h
replicaset.apps/packageserver-958484549             2         2         2       7d10h

NAME                             SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/collect-profiles   */15 * * * *   False     0        14m             7d10h

NAME                                  COMPLETIONS   DURATION   AGE
job.batch/collect-profiles-27168330   0/1           3h59m      3h59m
job.batch/collect-profiles-27168525   1/1           15s        44m
job.batch/collect-profiles-27168540   1/1           12s        29m
job.batch/collect-profiles-27168555   1/1           10s        14m
[root@rdr-cicd-e6b7-mon01-bastion-0 ~]#

Comment 14 Manoj Kumar 2021-08-29 22:40:16 UTC
I did some more digging.  Found a tool to snoop on exec().  https://github.com/iovisor/bcc/blob/master/tools/execsnoop.py

With a compiled version of the tool, I was able to correlate the new processes to the contents of /run/crio/exec-pid-dir:

[root@rdr-cicd-e6b7-mon01-master-1 execsnoop]# ./execsnoop
In file included from <built-in>:2:
In file included from /virtual/include/bcc/bpf.h:12:
In file included from include/linux/types.h:6:
In file included from include/uapi/linux/types.h:14:
In file included from include/uapi/linux/posix_types.h:5:
In file included from include/linux/stddef.h:5:
In file included from include/uapi/linux/stddef.h:2:
In file included from include/linux/compiler_types.h:74:
include/linux/compiler-clang.h:25:9: warning: '__no_sanitize_address' macro redefined [-Wmacro-redefined]
#define __no_sanitize_address
        ^
include/linux/compiler-gcc.h:213:9: note: previous definition is here
#define __no_sanitize_address __attribute__((no_sanitize_address))
        ^
1 warning generated.
PCOMM            PID    PPID   RET ARGS
ldd              3733034 2553     0 /usr/bin/ldd /usr/bin/crio
ld64.so.2        3733035 3733034   0 /lib64/ld64.so.2 --verify /usr/bin/crio
ld64.so.2        3733038 3733037   0 /lib64/ld64.so.2 /usr/bin/crio
sh               3733039 5068     0   /usr/bin/awk -F = '/partition_id/ { print $2 }' /proc/ppc64/lparcfg
awk              3733039 5068     0 /usr/bin/awk -F = /partition_id/ { print $2 } /proc/ppc64/lparcfg
sh               3733040 5068     0   /usr/bin/awk -F = '/partition_id/ { print $2 }' /proc/ppc64/lparcfg
awk              3733040 5068     0 /usr/bin/awk -F = /partition_id/ { print $2 } /proc/ppc64/lparcfg
sh               3733041 5068     0   /usr/bin/awk -F = '/partition_id/ { print $2 }' /proc/ppc64/lparcfg
awk              3733041 5068     0 /usr/bin/awk -F = /partition_id/ { print $2 } /proc/ppc64/lparcfg
sh               3733042 5313     0   /usr/bin/awk -F = '/partition_id/ { print $2 }' /proc/ppc64/lparcfg
awk              3733042 5313     0 /usr/bin/awk -F = /partition_id/ { print $2 } /proc/ppc64/lparcfg
sh               3733043 5313     0   /usr/bin/awk -F = '/partition_id/ { print $2 }' /proc/ppc64/lparcfg
awk              3733043 5313     0 /usr/bin/awk -F = /partition_id/ { print $2 } /proc/ppc64/lparcfg
sh               3733044 5313     0   /usr/bin/awk -F = '/partition_id/ { print $2 }' /proc/ppc64/lparcfg
awk              3733044 5313     0 /usr/bin/awk -F = /partition_id/ { print $2 } /proc/ppc64/lparcfg
md5sum           3733046 3733045   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
awk              3733047 3733045   0 /usr/bin/awk {print $1}
md5sum           3733049 3733048   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733050 3733048   0 /usr/bin/awk {print $1}
sleep            3733051 3709     0 /usr/bin/sleep 1
md5sum           3733053 3733052   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
awk              3733054 3733052   0 /usr/bin/awk {print $1}
md5sum           3733056 3733055   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733057 3733055   0 /usr/bin/awk {print $1}
sleep            3733058 3709     0 /usr/bin/sleep 1
runc             3733059 2553     0 /usr/bin/runc --root /run/runc exec --pid-file /var/run/crio/exec-pid-dir/1b2a759d965ccb9678cbae7b0dafde9537667d062555291284a1f5d6a2312fe36d95a00c-82c2-4fbe-b015-b2ab89cf3303 --process /tmp/exec-process-074617211 1b2a759d965ccb9678cbae7b0dafde9537667d062555291284a1f5d6a2312fe3
exe              3733068 3733059   0 /proc/self/exe init
test             3733070 3733059   0 /usr/bin/test -f /etc/cni/net.d/80-openshift-network.conf
md5sum           3733077 3733076   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
awk              3733078 3733076   0 /usr/bin/awk {print $1}
md5sum           3733080 3733079   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733081 3733079   0 /usr/bin/awk {print $1}
sleep            3733082 3709     0 /usr/bin/sleep 1
runc             3733083 2553     0 /usr/bin/runc --root /run/runc exec --pid-file /var/run/crio/exec-pid-dir/afb03a1f9dbb802fc1d1440388430462cfb60231b060aa3ebcbc249ace50931519341ee0-e912-4070-a4f8-14d9196f1352 --process /tmp/exec-process-197278878 afb03a1f9dbb802fc1d1440388430462cfb60231b060aa3ebcbc249ace509315
exe              3733093 3733083   0 /proc/self/exe init
bash             3733095 3733083   0 /bin/bash -c set -xe\n\n# Unix sockets are used for health checks to ensure that the pod is reporting readiness of the etcd process\n# in this c
etcdctl          3733101 3733095   0 /usr/bin/etcdctl --command-timeout=2s --dial-timeout=2s --endpoints=unixs://193.168.200.231:0 endpoint health -w json
grep             3733102 3733095   0 /usr/bin/grep "health":true
awk              3733112 3733110   0 /usr/bin/awk {print $1}
md5sum           3733111 3733110   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
md5sum           3733114 3733113   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733115 3733113   0 /usr/bin/awk {print $1}
sleep            3733116 3709     0 /usr/bin/sleep 1
runc             3733117 2553     0 /usr/bin/runc --root /run/runc exec --pid-file /var/run/crio/exec-pid-dir/612b6a23ee3b71174d42a96117c527fef4e4d3f5800a1257216af2fb249f4664df311a28-07fb-4c11-9bb9-9ccdf005adc2 --process /tmp/exec-process-392734080 612b6a23ee3b71174d42a96117c527fef4e4d3f5800a1257216af2fb249f4664
exe              3733126 3733117   0 /proc/self/exe init
sh               3733131 3733117   0 /bin/sh -c declare -r health_endpoint="https://localhost:2379/health"\ndeclare -r cert="/var/run/secrets/etcd-client/tls.crt"\ndeclare -r key
grep             3733138 3733131   0 /usr/bin/grep "health":"true"
curl             3733137 3733131   0 /usr/bin/curl --max-time 2 --silent --cert /var/run/secrets/etcd-client/tls.crt --key /var/run/secrets/etcd-client/tls.key --cacert /var/run/configmaps/etcd-ca/ca-bundle.crt https://localhost:2379/health
md5sum           3733141 3733140   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
awk              3733142 3733140   0 /usr/bin/awk {print $1}
md5sum           3733144 3733143   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733145 3733143   0 /usr/bin/awk {print $1}
sleep            3733146 3709     0 /usr/bin/sleep 1
md5sum           3733148 3733147   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
awk              3733149 3733147   0 /usr/bin/awk {print $1}
md5sum           3733151 3733150   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733152 3733150   0 /usr/bin/awk {print $1}
sleep            3733153 3709     0 /usr/bin/sleep 1
md5sum           3733155 3733154   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
awk              3733156 3733154   0 /usr/bin/awk {print $1}
md5sum           3733158 3733157   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733159 3733157   0 /usr/bin/awk {print $1}
sleep            3733160 3709     0 /usr/bin/sleep 1
runc             3733161 2553     0 /usr/bin/runc --root /run/runc exec --pid-file /var/run/crio/exec-pid-dir/1b2a759d965ccb9678cbae7b0dafde9537667d062555291284a1f5d6a2312fe37b92324c-0714-46c1-b517-3ab9786ca397 --process /tmp/exec-process-050607721 1b2a759d965ccb9678cbae7b0dafde9537667d062555291284a1f5d6a2312fe3
exe              3733169 3733161   0 /proc/self/exe init
test             3733173 3733161   0 /usr/bin/test -f /etc/cni/net.d/80-openshift-network.conf
md5sum           3733180 3733179   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
awk              3733181 3733179   0 /usr/bin/awk {print $1}
md5sum           3733183 3733182   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733184 3733182   0 /usr/bin/awk {print $1}
sleep            3733185 3709     0 /usr/bin/sleep 1
runc             3733186 2553     0 /usr/bin/runc --root /run/runc exec --pid-file /var/run/crio/exec-pid-dir/afb03a1f9dbb802fc1d1440388430462cfb60231b060aa3ebcbc249ace5093158e423623-9b1d-45d4-b3f3-63eea92c797b --process /tmp/exec-process-598901766 afb03a1f9dbb802fc1d1440388430462cfb60231b060aa3ebcbc249ace509315
exe              3733195 3733186   0 /proc/self/exe init
bash             3733197 3733186   0 /bin/bash -c set -xe\n\n# Unix sockets are used for health checks to ensure that the pod is reporting readiness of the etcd process\n# in this c
etcdctl          3733203 3733197   0 /usr/bin/etcdctl --command-timeout=2s --dial-timeout=2s --endpoints=unixs://193.168.200.231:0 endpoint health -w json
grep             3733204 3733197   0 /usr/bin/grep "health":true
awk              3733213 3733211   0 /usr/bin/awk {print $1}
md5sum           3733212 3733211   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/token
md5sum           3733215 3733214   0 /usr/bin/md5sum /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
awk              3733216 3733214   0 /usr/bin/awk {print $1}
sleep            3733217 3709     0 /usr/bin/sleep 1
runc             3733218 2553     0 /usr/bin/runc --root /run/runc exec --pid-file /var/run/crio/exec-pid-dir/612b6a23ee3b71174d42a96117c527fef4e4d3f5800a1257216af2fb249f4664fd437fe6-f5b3-4f39-86b7-efb35ddcaa25 --process /tmp/exec-process-131250605 612b6a23ee3b71174d42a96117c527fef4e4d3f5800a1257216af2fb249f4664
exe              3733226 3733218   0 /proc/self/exe init
sh               3733230 3733218   0 /bin/sh -c declare -r health_endpoint="https://localhost:2379/health"\ndeclare -r cert="/var/run/secrets/etcd-client/tls.crt"\ndeclare -r key
grep             3733238 3733230   0 /usr/bin/grep "health":"true"
curl             3733237 3733230   0 /usr/bin/curl --max-time 2 --silent --cert /var/run/secrets/etcd-client/tls.crt --key /var/run/secrets/etcd-client/tls.key --cacert /var/run/configmaps/etcd-ca/ca-bundle.crt https://localhost:2379/health
^CTraceback (most recent call last):
  File "execsnoop.py", line 305, in <module>
  File "bcc/__init__.py", line 1445, in perf_buffer_poll
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "execsnoop.py", line 307, in <module>
NameError: name 'exit' is not defined
[3732956] Failed to execute script 'execsnoop' due to unhandled exception!
[root@rdr-cicd-e6b7-mon01-master-1 execsnoop]# for i in `ls -t /run/crio/exec-pid-dir|head `; do cat /run/crio/exec-pid-dir/$i; echo ' '; done
3733230 
3733197 
3733173 
3733131 
3733095 
3733070 
3733017 
3732984 
3732958 
3732896
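
The pid-file names are the 64-character container ID followed by what looks like a per-exec UUID, so the leaked files can be grouped by container to see which probes dominate (a sketch):

ls /run/crio/exec-pid-dir | cut -c1-64 | sort | uniq -c | sort -rn | head   # count of leaked pid files per container ID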

Comment 15 Manoj Kumar 2021-08-29 23:27:33 UTC
The 3 processes seem to be related to these pods:

[root@rdr-cicd-e6b7-mon01-master-1 containers]# ls *1b2a759*
sdn-jfkcp_openshift-sdn_sdn-1b2a759d965ccb9678cbae7b0dafde9537667d062555291284a1f5d6a2312fe3.log

[root@rdr-cicd-e6b7-mon01-master-1 containers]# ls *afb03*
etcd-rdr-cicd-e6b7-mon01-master-1_openshift-etcd_etcd-afb03a1f9dbb802fc1d1440388430462cfb60231b060aa3ebcbc249ace509315.log

[root@rdr-cicd-e6b7-mon01-master-1 containers]# ls -al *612b6*
lrwxrwxrwx. 1 root root 112 Aug 27 21:41 etcd-quorum-guard-59664ffdd8-dwrvk_openshift-etcd_guard-612b6a23ee3b71174d42a96117c527fef4e4d3f5800a1257216af2fb249f4664.log -> /var/log/pods/openshift-etcd_etcd-quorum-guard-59664ffdd8-dwrvk_04dd2823-7bbd-450f-9c70-8e5fd3472e80/guard/1.log
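
The same container-ID-to-pod mapping can also be read from CRI-O directly (a sketch; assumes crictl is available on the node and the container is still known to CRI-O):

crictl inspect 1b2a759d965ccb9678cbae7b0dafde9537667d062555291284a1f5d6a2312fe3 | grep -E '"io.kubernetes.pod.(name|namespace)"'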

Comment 16 Alisha 2021-08-30 06:35:13 UTC
A new cluster was set up with the latest build (using the same config).
Checked the health of the cluster after two and a half days.
The same issue reported above is seen.

# oc version
Client Version: 4.9.0-0.nightly-ppc64le-2021-08-26-032145
Server Version: 4.9.0-0.nightly-ppc64le-2021-08-26-032145
Kubernetes Version: v1.22.0-rc.0+33d0ffa

[root@rdr-cicd-4dc3-mon01-bastion-0 ~]# du -ha /run | sort -n -r | head -n 10
450M    /run
448K    /run/cloud-init
443M    /run/log/journal
443M    /run/log
427M    /run/log/journal/e294339a289840009764dff6139c67e9
320K    /run/NetworkManager
192K    /run/systemd/generator
192K    /run/NetworkManager/devices
128K    /run/rhsm
128K    /run/named

Checked the file inside /run/log/journal/e294339a289840009764dff6139c67e9

   journalctl --file  system

The error seen in the above file is:

Aug 30 01:00:23 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: checked 2 paths in 0.001672 secs
Aug 30 01:00:23 rdr-cicd-4dc3-mon01-bastion-0.redhat.com sshd[195192]: error: kex_exchange_identification: Connection closed by remote host
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: tick (1.001697 secs)
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: open '/sys/devices/vio/30000008/host2/rport-2:0-0/target2:0:0/2:0:0:1/state'
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: sdj: path state = running
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: 8:144 : tur checker starting up
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: 8:144 : tur checker finished, state up
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: sdj: tur state = up
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: mpathb: update_multipath_strings
Aug 30 01:00:24 rdr-cicd-4dc3-mon01-bastion-0.redhat.com multipathd[22218]: libdevmapper: ioctl/libdm-iface.c(1898): dm table mpathb  [ noopencount flush ]   [16384] (*1)
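
To see which processes dominate that runtime journal (a sketch; <file> is a placeholder for the actual journal file name, which is truncated above):

journalctl --file "/run/log/journal/e294339a289840009764dff6139c67e9/<file>.journal" | awk '{print $5}' | sort | uniq -c | sort -rn | head   # rough count of entries per syslog identifier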

Comment 17 Alisha 2021-08-30 07:39:04 UTC
Created attachment 1818991 [details]
Content of .journal file

The attached file has the output of:

journalctl --file  system

mentioned in comment 16.

Comment 18 Manoj Kumar 2021-08-30 13:08:10 UTC
This is being reported with 4.8.5 as well, i.e. it can potentially be hit by customers who upgrade to the most recent release.

Comment 19 Manoj Kumar 2021-08-30 16:48:09 UTC
@prashanth found that this issue was introduced by
https://github.com/cri-o/cri-o/pull/5136 

And it is fixed/reverted by:
https://github.com/cri-o/cri-o/pull/5245
https://github.com/cri-o/cri-o/pull/5262

Comment 20 W. Trevor King 2021-08-31 20:59:25 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z.  The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way.  Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug.  When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label.  The expectation is that the assignee answers these questions.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
* example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
* example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact?  Is it serious enough to warrant blocking edges?
* example: Up to 2 minute disruption in edge routing
* example: Up to 90 seconds of API downtime
* example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* example: Issue resolves itself after five minutes
* example: Admin uses oc to fix things
* example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* example: No, it has always been like this we just never noticed
* example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Comment 21 W. Trevor King 2021-08-31 21:06:51 UTC
Going straight from NEW -> VERIFIED, like we had here after comment 19, skips some steps. Moving back to MODIFIED so ART's Elliot sweeper can associate this bug with an errata.

Comment 22 W. Trevor King 2021-08-31 21:07:21 UTC
Also setting a priority, while I'm tidying bug state.

Comment 23 Dan Li 2021-08-31 21:08:54 UTC
Also adding Needinfo for Manoj and Prashanth regarding Comment 20, as they are closer to this bug.

Comment 24 Manoj Kumar 2021-08-31 21:21:03 UTC
Reposting Trevor's summary.

Who is impacted?  If we have to block upgrade edges based on this issue, which edges would need blocking?
* Updates from earlier releases into 4.7.(z >= 24) or 4.8.(5 <= z <= 9)

What is the impact?  Is it serious enough to warrant blocking edges?
* /run could fill up because of the pid files created under /run/crio/exec-pid-dir for exec probes. How soon it fills up depends on how many probes are running and how frequently. /run filling up leads to the node running out of memory and going NotReady.

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?
* Reboot affected nodes as needed to keep the cluster healthy until you can update to a fixed release (a quick check for affected nodes is sketched after this list).

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?
* Yes:
* 4.6.43 is unaffected; the next 4.6.z to ship will be, unless we fix things.
* 4.7.24 - 4.7.28 include the regression.
* 4.8.5 - 4.8.9 include the regression.
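
A quick way to tell whether a given node is approaching the failure point described above (a sketch; uses the directory identified in comment 13):

df -h /run && df -hi /run              # block and inode usage of the /run tmpfs
ls /run/crio/exec-pid-dir | wc -l      # number of leaked exec pid files so far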

Comment 25 W. Trevor King 2021-08-31 22:58:14 UTC
Based on the impact statement in comment 24, we've landed [1], blocking some edges into 4.7.24, 4.7.28, 4.8.5, and 4.8.9 to protect supported clusters from the CRI-O /run leak.  The edges blocked protect supported releases from transitioning into the impacted releases (e.g. no 4.7.23 -> 4.8.9).  We continue to recommend updates between impacted releases (e.g. 4.7.24 -> 4.8.9), because moving from one impacted release to another impacted release does not make things worse (in fact, the node reboots may help make things temporarily better).  We do not manage edges that only exist in candidate channels, so those mostly stayed in place as well.

[1]: https://github.com/openshift/cincinnati-graph-data/pull/1022

Comment 28 Mark Hamzy 2021-09-02 18:59:47 UTC
api.libvirt-ppc64le-0-0 - 4.6

[core@libvirt-ppc64le-0-0-7-4frf7-master-0 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.6
[core@libvirt-ppc64le-0-0-7-4frf7-master-0 ~]$ sudo find / -name exec-pid-dir 2>/dev/null
[core@libvirt-ppc64le-0-0-7-4frf7-master-0 ~]$

api.libvirt-ppc64le-2-1 - 4.7

[core@libvirt-ppc64le-2-1-b-67wm2-master-0 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.7
[core@libvirt-ppc64le-2-1-b-67wm2-master-0 ~]$ sudo find / -name exec-pid-dir 2>/dev/null
[core@libvirt-ppc64le-2-1-b-67wm2-master-0 ~]$

api.libvirt-ppc64le-0-3 - 4.8

[core@libvirt-ppc64le-0-3-0-ql4z8-master-0 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.8
[core@libvirt-ppc64le-0-3-0-ql4z8-master-0 ~]$ sudo find / -name exec-pid-dir 2>/dev/null
/run/crio/exec-pid-dir
[core@libvirt-ppc64le-0-3-0-ql4z8-master-0 ~]$ sudo ls -la /run/crio/exec-pid-dir
total 0
drwxr-x---. 2 root root   40 Sep  2 18:19 .
drwxr-xr-x. 4 root root 4780 Sep  2 18:49 ..

[core@libvirt-ppc64le-0-3-0-ql4z8-master-1 ~]$ sudo ls -la /run/crio/exec-pid-dir
total 0
drwxr-x---. 2 root root   40 Sep  2 18:19 .
drwxr-xr-x. 4 root root 5060 Sep  2 18:51 ..

[core@libvirt-ppc64le-0-3-0-ql4z8-master-2 ~]$ sudo ls -la /run/crio/exec-pid-dir
total 0
drwxr-x---. 2 root root   40 Sep  2 18:19 .
drwxr-xr-x. 4 root root 4460 Sep  2 18:51 ..

api.libvirt-ppc64le-1-0 - 4.9

[core@libvirt-ppc64le-1-0-6-4fh7c-master-0 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux CoreOS release 4.9
[core@libvirt-ppc64le-1-0-6-4fh7c-master-0 ~]$ sudo find / -name exec-pid-dir 2>/dev/null
/run/crio/exec-pid-dir
[core@libvirt-ppc64le-1-0-6-4fh7c-master-0 ~]$ sudo ls -la /run/crio/exec-pid-dir
total 0
drwxr-x---. 2 root root   40 Sep  2 18:19 .
drwxr-xr-x. 4 root root 3820 Sep  2 18:41 ..

[core@libvirt-ppc64le-1-0-6-4fh7c-master-1 ~]$ sudo ls -la /run/crio/exec-pid-dir
total 0
drwxr-x---. 2 root root   40 Sep  2 18:18 .
drwxr-xr-x. 4 root root 3080 Sep  2 18:41 ..

[core@libvirt-ppc64le-1-0-6-4fh7c-master-2 ~]$ sudo ls -la /run/crio/exec-pid-dir
total 0
drwxr-x---. 2 root root   40 Sep  2 18:18 .
drwxr-xr-x. 4 root root 2760 Sep  2 18:42 ..
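
The same spot check can be run across every node without SSH access (a sketch; assumes cluster-admin and that oc debug can schedule a pod on each node):

for node in $(oc get nodes -o name); do
  echo "== $node"
  oc debug "$node" -- chroot /host sh -c 'ls /run/crio/exec-pid-dir 2>/dev/null | wc -l'
done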

Comment 30 Scott Dodson 2021-09-08 17:49:19 UTC
A workaround for anyone who cannot upgrade is documented in https://access.redhat.com/solutions/6304881

Comment 31 sven.thoms 2021-09-14 07:07:16 UTC
We updated from 4.7.2 to 4.7.26 using the candidate channel and then (right away) from
4.7.26 to 4.8.7 using the candidate channel. We never had any issues upgrading, except that one worker node needed to be restarted manually to proceed with the machine set upgrade.

Next, we went from 4.8.7 to 4.8.9. Currently, we are on 4.8.10. 

What is the status of this bug in 4.8.10?

Comment 32 sven.thoms 2021-09-14 10:00:45 UTC
Also, even though we had 4.7.26 running for two weeks and 4.8.9 for two days, we do not see the problem on any of our (AWS) nodes.

e.g.

sh-4.4# chroot /host
sh-4.4#  ls -l /run/crio/exec-pid-dir
total 0
sh-4.4# exit

Seems good. So for me, it is really hard to ascertain how universal this problem is.

Comment 33 W. Trevor King 2021-09-14 19:07:38 UTC
(In reply to sven.thoms from comment #31)
> What is the status of this bug in 4.8.10?

4.8-specific discussion should go in the 4.8.z bug (which shipped in 4.8.10 [1]).

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1999645#c13

Comment 35 dnastaci 2021-09-22 17:04:03 UTC
For whatever it is worth, we are seeing the problem on OCP 4.6.43 running on AWS.


ls -la /run/crio/exec-pid-dir | wc -l
7824683

df -ki /run/crio
Filesystem      Inodes   IUsed  IFree IUse% Mounted on
tmpfs          8142826 7856422 286404   97% /run

crio --version
crio version 1.19.3-9.rhaos4.6.gitc8a7d88.el8
Version:    1.19.3-9.rhaos4.6.gitc8a7d88.el8
GoVersion:  go1.15.14
Compiler:   gc
Platform:   linux/amd64
Linkmode:   dynamic
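
Note that the df -ki output above shows the tmpfs running out of inodes (97% IUse%) rather than only space; both limits are worth checking (a sketch):

df -k /run    # block usage of the /run tmpfs
df -ki /run   # inode usage of the /run tmpfs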

Comment 36 Peter Hunt 2021-09-22 18:30:52 UTC
Uh oh, it seems we were incorrect about the impact: 4.6.43 does indeed have this problem. However, I have double-checked that 4.6.44 has the fix.

Comment 37 W. Trevor King 2021-09-22 18:58:21 UTC
We've extended the graph-data blocks from comment 25 to include * -> 4.6.43 [1] based on comment 35 and comment 36.  Thanks for the report :)

[1]: https://github.com/openshift/cincinnati-graph-data/pull/1085

Comment 40 errata-xmlrpc 2021-10-18 17:48:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759

