Bug 1913263 - [4.6] Unable to schedule a pod due to Insufficient ephemeral-storage
Summary: [4.6] Unable to schedule a pod due to Insufficient ephemeral-storage
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
: 4.6.z
Assignee: Jan Chaloupka
QA Contact: RamaKasturi
Depends On: 1886294
Blocks: 1913275
Reported: 2021-01-06 11:39 UTC by Jan Chaloupka
Modified: 2021-01-18 18:00 UTC
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: the ephemeral-storage resource computation was not feature gated.
Consequence: ephemeral-storage requests were taken into account even when the LocalStorageCapacityIsolation feature was disabled, causing pods to fail to be scheduled with "Insufficient ephemeral-storage".
Fix: gate the ephemeral-storage resource computation during scheduling behind the feature gate.
Result: ephemeral-storage requests are no longer taken into account when the feature is disabled.
Clone Of: 1886294
: 1913275
Last Closed: 2021-01-18 18:00:27 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:0037 0 None None None 2021-01-18 18:00:39 UTC

Comment 1 Jan Chaloupka 2021-01-06 12:10:53 UTC
Fixed by https://github.com/openshift/kubernetes/pull/435 (update from Kubernetes 1.19.0 to 1.19.4 in 4.6). Related bug: https://bugzilla.redhat.com/show_bug.cgi?id=1900630 (in CLOSED ERRATA state).

Moving to MODIFIED as the reported (and fixed) issue can be verified in 4.6.
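For context, the effect of the fix can be sketched as follows. This is an illustrative Python sketch, not the actual kube-scheduler Go code; the function name and data shapes are invented for the example. The point is that when the LocalStorageCapacityIsolation gate is disabled, ephemeral-storage must be skipped entirely during the node fit check, otherwise a node that does not advertise ephemeral-storage capacity rejects every pod that requests it.

```python
# Illustrative sketch (not the real kube-scheduler implementation): the fit
# check must skip ephemeral-storage when LocalStorageCapacityIsolation is off.
def insufficient_resources(node_free, pod_requests, local_storage_isolation_enabled):
    """Return the resources the node cannot satisfy for this pod."""
    insufficient = []
    for resource, requested in pod_requests.items():
        if resource == "ephemeral-storage" and not local_storage_isolation_enabled:
            # Feature gate disabled: ignore ephemeral-storage entirely.
            continue
        if requested > node_free.get(resource, 0):
            insufficient.append(resource)
    return insufficient

# Node that reports no ephemeral-storage capacity (hypothetical numbers).
node_free = {"cpu": 2000, "memory": 8 * 1024**3}
pod_requests = {"cpu": 500, "ephemeral-storage": 4096 * 10**6}

# Before the fix (gate effectively always on): "Insufficient ephemeral-storage".
assert insufficient_resources(node_free, pod_requests, True) == ["ephemeral-storage"]
# After the fix, with the gate disabled, the pod fits.
assert insufficient_resources(node_free, pod_requests, False) == []
```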

Comment 3 RamaKasturi 2021-01-11 12:47:52 UTC
Verified with the build below and I see that it works as expected.

[knarra@knarra openshift-client-linux-4.6.0-0.nightly-2021-01-10-033123]$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2021-01-10-033123   True        False         38m     Cluster version is 4.6.0-0.nightly-2021-01-10-033123

Below are the steps used to verify the bug:
1) Install 4.6 cluster
2) Disable the feature gate (LocalStorageCapacityIsolation=false) and confirm the kube-scheduler picked it up:

[knarra@knarra openshift-client-linux-4.6.0-0.nightly-2021-01-10-033123]$ ./oc get pod openshift-kube-scheduler-ip-10-0-144-39.us-east-2.compute.internal -o yaml -n openshift-kube-scheduler | grep feature-gate
    - --feature-gates=APIPriorityAndFairness=true,LegacyNodeRoleBehavior=false,NodeDisruptionExclusion=true,RotateKubeletServerCertificate=true,SCTPSupport=true,ServiceNodeExclusion=true,SupportPodPidsLimit=true,LocalStorageCapacityIsolation=false
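The check above can also be done mechanically by parsing the --feature-gates flag into a map; a small sketch (the flag string is copied from the output above):

```python
# Parse a kube-scheduler --feature-gates flag into a dict and confirm the gate
# under test is disabled.
flag = ("--feature-gates=APIPriorityAndFairness=true,LegacyNodeRoleBehavior=false,"
        "NodeDisruptionExclusion=true,RotateKubeletServerCertificate=true,"
        "SCTPSupport=true,ServiceNodeExclusion=true,SupportPodPidsLimit=true,"
        "LocalStorageCapacityIsolation=false")

# Everything after the first "=" is a comma-separated list of Name=bool pairs.
gates = dict(pair.split("=") for pair in flag.split("=", 1)[1].split(","))
assert gates["LocalStorageCapacityIsolation"] == "false"
```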

3) Create a pod using the YAML below:
[knarra@knarra openshift-client-linux-4.6.0-0.nightly-2021-01-10-033123]$ cat /tmp/ephermal.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  schedulerName: default-scheduler
  containers:
  - name: nginx
    image: quay.io/openshifttest/nginx@sha256:3936fb3946790d711a68c58be93628e43cbca72439079e16d154b5db216b58da
    ports:
    - containerPort: 80
    resources:
      limits:
        ephemeral-storage: 4096M
      requests:
        ephemeral-storage: 4096M
  initContainers:
  - name: init-myservice
    image: quay.io/openshifttest/busybox@sha256:afe605d272837ce1732f390966166c2afff5391208ddd57de10942748694049d
    command: ['sh', '-c', "echo waiting for myservice; sleep 7;"]
    resources:
      requests:
        cpu: 500m
        ephemeral-storage: 2M
        memory: 1024M
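Note that the pod mixes init-container and app-container requests. For scheduling, Kubernetes computes the effective request per resource as the maximum of (a) the largest single init-container request and (b) the sum of the app-container requests, since init containers run sequentially before the app containers. A minimal sketch with the values from the YAML above (the function name is illustrative):

```python
# Effective per-resource scheduling request for a pod:
# max(largest init-container request, sum of app-container requests).
def effective_request(resource, init_containers, containers):
    init_max = max((c.get(resource, 0) for c in init_containers), default=0)
    app_sum = sum(c.get(resource, 0) for c in containers)
    return max(init_max, app_sum)

M = 10**6  # the "M" suffix in the YAML means 10^6 bytes
containers = [{"ephemeral-storage": 4096 * M}]
init_containers = [{"cpu": 500, "ephemeral-storage": 2 * M, "memory": 1024 * M}]

# App container dominates ephemeral-storage; init container dominates memory.
assert effective_request("ephemeral-storage", init_containers, containers) == 4096 * M
assert effective_request("memory", init_containers, containers) == 1024 * M
```

With LocalStorageCapacityIsolation=false, the 4096M effective ephemeral-storage request is simply ignored by the scheduler, which is why the pod lands on a node below.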

[knarra@knarra openshift-client-linux-4.6.0-0.nightly-2021-01-10-033123]$ ./oc get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          55s   ip-10-0-189-49.us-east-2.compute.internal   <none>           <none>

Based on the above, moving the bug to VERIFIED state.

Comment 7 errata-xmlrpc 2021-01-18 18:00:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6.12 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

