Bug 2049488 - Editing kube-scheduler static files to set feature gate LocalStorageCapacityIsolation=False does not work
Keywords:
Status: CLOSED DUPLICATE of bug 2048756
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Ryan Phillips
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-02 10:40 UTC by RamaKasturi
Modified: 2022-02-03 14:21 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-02-03 14:21:48 UTC
Target Upstream Version:
Embargoed:



Description RamaKasturi 2022-02-02 10:40:16 UTC
Description of problem:
The kube-scheduler pod does not come up after editing the kube-scheduler static pod manifest to set the feature gate LocalStorageCapacityIsolation=False.

Version-Release number of selected component (if applicable):
[knarra@knarra ~]$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-fc.4   True        False         167m    Cluster version is 4.10.0-fc.4
[knarra@knarra ~]$ 


How reproducible:
Always

Steps to Reproduce:
1. Install latest 4.10 cluster
2. Scale down CVO, KSO
3. Run the command "oc debug node/master"
4. cat /etc/kubernetes/manifests/kube-scheduler.yaml
5. Edit the file and add the feature gate as shown below:
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"6","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-logs-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-6"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36ce0d508dffc6b4b0b6405a00626cf5b113b2d47374fb0ae96be64440c33bb6","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 and :10251 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10251 or sport = 10259 )')\" ]; do\n  echo -n \".\"\n  sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36ce0d508dffc6b4b0b6405a00626cf5b113b2d47374fb0ae96be64440c33bb6","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=APIPriorityAndFairness=true,CSIMigrationAWS=false,CSIMigrationAzureDisk=false,CSIMigrationAzureFile=false,CSIMigrationGCE=false,CSIMigrationOpenStack=false,CSIMigrationvSphere=false,DownwardAPIHugePages=true,PodSecurity=true,RotateKubeletServerCertificate=true,LocalStorageCapacityIsolation=False","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnErro
r","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:965ddb22dc2f3ce6fd48efbc8708f11fbca804a5e6be95dd2c46b229c02d7f28","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:965ddb22dc2f3ce6fd48efbc8708f11fbca804a5e6be95dd2c46b229c02d7f28","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig  --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 
-v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
6. Save the file
7. Exit the debug shell
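Step 5 above amounts to rewriting the --feature-gates argument inside the static pod manifest. A minimal sketch of that edit using sed on a local stand-in file (on the node the real file is /etc/kubernetes/manifests/kube-scheduler.yaml, reached via "oc debug node/..." and "chroot /host"; the file name and starting contents here are assumptions for illustration):

```shell
# Stand-in for the real manifest; on the node this would be
# /etc/kubernetes/manifests/kube-scheduler.yaml.
manifest=kube-scheduler.yaml
echo '--feature-gates=PodSecurity=true,LocalStorageCapacityIsolation=true' > "$manifest"

# Flip the gate in place. The kubelet watches the manifests directory and
# restarts the static pod when the file changes.
sed -i 's/LocalStorageCapacityIsolation=true/LocalStorageCapacityIsolation=false/' "$manifest"
cat "$manifest"
```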

Actual results:
The kube-scheduler pod on the edited master does not start at all.
[knarra@knarra ~]$ oc get pods -n openshift-kube-scheduler
NAME                                                          READY   STATUS      RESTARTS   AGE
installer-3-knarra-ibmfc4-wvccs-master-0                      0/1     Completed   0          3h8m
installer-4-knarra-ibmfc4-wvccs-master-0                      0/1     Completed   0          3h7m
installer-5-knarra-ibmfc4-wvccs-master-0                      0/1     Completed   0          3h6m
installer-5-knarra-ibmfc4-wvccs-master-2                      0/1     Completed   0          3h5m
installer-6-knarra-ibmfc4-wvccs-master-0                      0/1     Completed   0          3h1m
installer-6-knarra-ibmfc4-wvccs-master-1                      0/1     Completed   0          3h3m
installer-6-knarra-ibmfc4-wvccs-master-2                      0/1     Completed   0          3h4m
openshift-kube-scheduler-guard-knarra-ibmfc4-wvccs-master-0   0/1     Running     0          3h8m
openshift-kube-scheduler-guard-knarra-ibmfc4-wvccs-master-1   1/1     Running     0          3h2m
openshift-kube-scheduler-guard-knarra-ibmfc4-wvccs-master-2   1/1     Running     0          3h4m
openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-1         3/3     Running     0          3h3m
openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-2         3/3     Running     0          3h4m
revision-pruner-6-knarra-ibmfc4-wvccs-master-0                0/1     Completed   0          3h
revision-pruner-6-knarra-ibmfc4-wvccs-master-1                0/1     Completed   0          3h
revision-pruner-6-knarra-ibmfc4-wvccs-master-2                0/1     Completed   0          3h


Expected results:
The kube-scheduler pod should start and reach the Running state.

Additional info:

Comment 1 RamaKasturi 2022-02-02 10:44:29 UTC
Must-gather, kubelet and crio logs can be found at the link below.

http://virt-openshift-05.lab.eng.nay.redhat.com/knarra/2049488/

Comment 2 Maciej Szulik 2022-02-02 11:55:05 UTC
There is nothing in the logs that points to this directly, but on careful examination there is a typo in the feature gate:

LocalStorageCapacityIsolation=False

should be

LocalStorageCapacityIsolation=false

Next time, after making such a modification, it is best to check the logs of the failing container.
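For reference, the --feature-gates value is a comma-separated list of Name=bool pairs. A minimal illustrative parser for that format is sketched below; it is not the actual Kubernetes implementation, which parses the value with Go's strconv.ParseBool and therefore also accepts capitalized variants such as "False".

```python
def parse_feature_gates(arg: str) -> dict:
    """Parse a comma-separated Name=bool feature-gate string into a dict.

    Illustrative stand-in only: it accepts any capitalization of true/false,
    so both "False" and "false" map to False here.
    """
    gates = {}
    for pair in arg.split(","):
        name, sep, value = pair.partition("=")
        if not sep or value.lower() not in ("true", "false"):
            raise ValueError(f"invalid feature gate entry: {pair!r}")
        gates[name.strip()] = value.lower() == "true"
    return gates

flags = "PodSecurity=true,LocalStorageCapacityIsolation=False"
print(parse_feature_gates(flags))
# {'PodSecurity': True, 'LocalStorageCapacityIsolation': False}
```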

Comment 3 RamaKasturi 2022-02-02 12:25:31 UTC
@maciej I do not think the flag is the issue. I modified the flag as you suggested but still hit a similar issue where one of the kube-scheduler pods does not come up. I discussed this with Jan, and the link at [1] contains our conversation. Re-opening the bug.

{"kind":"Pod","apiVersion":"v1","metadata":{"name":"openshift-kube-scheduler","namespace":"openshift-kube-scheduler","creationTimestamp":null,"labels":{"app":"openshift-kube-scheduler","revision":"6","scheduler":"true"},"annotations":{"kubectl.kubernetes.io/default-logs-container":"kube-scheduler","target.workload.openshift.io/management":"{\"effect\": \"PreferredDuringScheduling\"}"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-pod-6"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-scheduler-certs"}}],"initContainers":[{"name":"wait-for-host-port","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36ce0d508dffc6b4b0b6405a00626cf5b113b2d47374fb0ae96be64440c33bb6","command":["/usr/bin/timeout","30","/bin/bash","-c"],"args":["echo -n \"Waiting for port :10259 and :10251 to be released.\"\nwhile [ -n \"$(ss -Htan '( sport = 10251 or sport = 10259 )')\" ]; do\n  echo -n \".\"\n  sleep 
1\ndone\n"],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"containers":[{"name":"kube-scheduler","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36ce0d508dffc6b4b0b6405a00626cf5b113b2d47374fb0ae96be64440c33bb6","command":["hyperkube","kube-scheduler"],"args":["--config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml","--cert-dir=/var/run/kubernetes","--authentication-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--authorization-kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig","--feature-gates=APIPriorityAndFairness=true,CSIMigrationAWS=false,CSIMigrationAzureDisk=false,CSIMigrationAzureFile=false,CSIMigrationGCE=false,CSIMigrationOpenStack=false,CSIMigrationvSphere=false,DownwardAPIHugePages=true,PodSecurity=true,RotateKubeletServerCertificate=true,LocalStorageCapacityIsolation=false","-v=2","--tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt","--tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key","--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256","--tls-min-version=VersionTLS12"],"ports":[{"containerPort":10259}],"resources":{"requests":{"cpu":"15m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"livenessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"readinessProbe":{"httpGet":{"path":"healthz","port":10259,"scheme":"HTTPS"},"initialDelaySeconds":45},"terminationMessagePolicy":"FallbackToLogsOnErro
r","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-cert-syncer","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:965ddb22dc2f3ce6fd48efbc8708f11fbca804a5e6be95dd2c46b229c02d7f28","command":["cluster-kube-scheduler-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-scheduler-recovery-controller","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:965ddb22dc2f3ce6fd48efbc8708f11fbca804a5e6be95dd2c46b229c02d7f28","command":["/bin/bash","-euxo","pipefail","-c"],"args":["timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \"$(ss -Htanop \\( sport = 11443 \\))\" ]; do sleep 1; done'\n\nexec cluster-kube-scheduler-operator cert-recovery-controller --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-scheduler-cert-syncer-kubeconfig/kubeconfig  --namespace=${POD_NAMESPACE} --listen=0.0.0.0:11443 
-v=2\n"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"5m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}

[1] https://coreos.slack.com/archives/GK58XC2G2/p1643792772428049

Comment 4 RamaKasturi 2022-02-02 12:26:19 UTC
openshift-kube-scheduler-guard-knarra-ibmfc4-wvccs-master-0   0/1     Running     0          4h54m
openshift-kube-scheduler-guard-knarra-ibmfc4-wvccs-master-1   1/1     Running     0          4h48m
openshift-kube-scheduler-guard-knarra-ibmfc4-wvccs-master-2   1/1     Running     0          4h50m
openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-1         3/3     Running     0          4h49m
openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-2         3/3     Running     0          4h50m

Comment 5 Jan Chaloupka 2022-02-02 12:51:57 UTC
From crio logs from master-0:
```
Feb 02 08:49:23.612303 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:23.612253573Z" level=info msg="Stopped container 89cca9c5c8068df6f86d1f8bbcb8dfe173857923db85d9af0dc258c0c40394d1: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler" id=1bb1727b-5773-4353-872b-128409db942b name=/runtime.v1.RuntimeService/StopContainer
Feb 02 08:49:23.660568 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:23.660526262Z" level=info msg="Stopped container 6baa6bd97fe6ef5163e9cf50c83c54b0104fbf2a08fa1a33ea0dedd412103eb4: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler-cert-syncer" id=59def706-3927-4140-ae94-1b11986eab53 name=/runtime.v1.RuntimeService/StopContainer
Feb 02 08:49:23.689159 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:23.689112123Z" level=info msg="Stopped container 9ee4e16af45905237f34e53b4c51f76448c257542d8ef61c2cc5fe46c5a9949f: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler-recovery-controller" id=361f7902-ed30-43ea-ab12-f333d9951e98 name=/runtime.v1.RuntimeService/StopContainer
Feb 02 08:49:24.618383 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:24.618343246Z" level=info msg="Stopped container 6baa6bd97fe6ef5163e9cf50c83c54b0104fbf2a08fa1a33ea0dedd412103eb4: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler-cert-syncer" id=c90bbc07-33b5-4633-9a27-98de7343cb27 name=/runtime.v1.RuntimeService/StopContainer
Feb 02 08:49:24.618822 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:24.618791306Z" level=info msg="Stopped container 9ee4e16af45905237f34e53b4c51f76448c257542d8ef61c2cc5fe46c5a9949f: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler-recovery-controller" id=4d277282-5118-4f47-b831-434c77ee4b26 name=/runtime.v1.RuntimeService/StopContainer
Feb 02 08:49:24.619002 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:24.618977184Z" level=info msg="Stopped container 89cca9c5c8068df6f86d1f8bbcb8dfe173857923db85d9af0dc258c0c40394d1: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler" id=3097f474-001e-409c-8db5-334f475bb238 name=/runtime.v1.RuntimeService/StopContainer
Feb 02 08:49:30.763920 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:30.763883563Z" level=info msg="Removed container 8b5c633dbfbeec142728312186ce78a1f2fa37fa3a666a3dd5626a86825e9502: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/wait-for-host-port" id=f3869fcb-f3f7-45cd-ba87-9cfd9aaac59c name=/runtime.v1.RuntimeService/RemoveContainer
Feb 02 08:49:30.787020 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:30.786982162Z" level=info msg="Removed container 9ee4e16af45905237f34e53b4c51f76448c257542d8ef61c2cc5fe46c5a9949f: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler-recovery-controller" id=1439c538-1ef1-4c78-9bdd-783c760d86a4 name=/runtime.v1.RuntimeService/RemoveContainer
Feb 02 08:49:30.811304 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:30.811263854Z" level=info msg="Removed container 6baa6bd97fe6ef5163e9cf50c83c54b0104fbf2a08fa1a33ea0dedd412103eb4: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler-cert-syncer" id=059722cc-ed30-4d29-b54f-beb2e72c7bdd name=/runtime.v1.RuntimeService/RemoveContainer
Feb 02 08:49:30.832500 knarra-ibmfc4-wvccs-master-0 crio[1371]: time="2022-02-02 08:49:30.832461608Z" level=info msg="Removed container 89cca9c5c8068df6f86d1f8bbcb8dfe173857923db85d9af0dc258c0c40394d1: openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0/kube-scheduler" id=1d493c64-7aee-4dfb-8670-68b523f027c8 name=/runtime.v1.RuntimeService/RemoveContainer
```

From kubelet logs from master-0:
```
Feb 02 08:49:23.479611 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.479585    1402 kubelet.go:2096] "SyncLoop REMOVE" source="file" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:23.479923 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.479894    1402 kuberuntime_container.go:720] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" podUID=baafed64d7a702145442297aaf1eff14 containerName="kube-scheduler" containerID="cri-o://89cca9c5c8068df6f86d1f8bbcb8dfe173857923db85d9af0dc258c0c40394d1" gracePeriod=30
Feb 02 08:49:23.480111 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.480044    1402 kubelet.go:2086] "SyncLoop ADD" source="file" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:23.480111 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.480076    1402 kuberuntime_container.go:720] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" podUID=baafed64d7a702145442297aaf1eff14 containerName="kube-scheduler-recovery-controller" containerID="cri-o://9ee4e16af45905237f34e53b4c51f76448c257542d8ef61c2cc5fe46c5a9949f" gracePeriod=30
Feb 02 08:49:23.480179 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.480163    1402 kuberuntime_container.go:720] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" podUID=baafed64d7a702145442297aaf1eff14 containerName="kube-scheduler-cert-syncer" containerID="cri-o://6baa6bd97fe6ef5163e9cf50c83c54b0104fbf2a08fa1a33ea0dedd412103eb4" gracePeriod=30
Feb 02 08:49:23.488956 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.488039    1402 kubelet.go:2096] "SyncLoop REMOVE" source="file" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:23.488956 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.488100    1402 kubelet.go:2086] "SyncLoop ADD" source="file" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:23.493357 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.493332    1402 kubelet.go:2096] "SyncLoop REMOVE" source="file" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:23.787307 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:23.787279    1402 logs.go:319] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0_baafed64d7a702145442297aaf1eff14/kube-scheduler-cert-syncer/0.log"
Feb 02 08:49:24.277713 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.277687    1402 logs.go:319] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0_baafed64d7a702145442297aaf1eff14/kube-scheduler-cert-syncer/0.log"
Feb 02 08:49:24.288030 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.287998    1402 status_manager.go:614] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" oldPodUID=baafed64d7a702145442297aaf1eff14 podUID=7fa3eab1c379518020396fe71165dff8
Feb 02 08:49:24.301556 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.301530    1402 status_manager.go:614] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" oldPodUID=baafed64d7a702145442297aaf1eff14 podUID=7fa3eab1c379518020396fe71165dff8
Feb 02 08:49:24.615422 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.615360    1402 kuberuntime_container.go:720] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" podUID=baafed64d7a702145442297aaf1eff14 containerName="kube-scheduler" containerID="cri-o://89cca9c5c8068df6f86d1f8bbcb8dfe173857923db85d9af0dc258c0c40394d1" gracePeriod=1
Feb 02 08:49:24.615740 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.615373    1402 kuberuntime_container.go:720] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" podUID=baafed64d7a702145442297aaf1eff14 containerName="kube-scheduler-cert-syncer" containerID="cri-o://6baa6bd97fe6ef5163e9cf50c83c54b0104fbf2a08fa1a33ea0dedd412103eb4" gracePeriod=1
Feb 02 08:49:24.616630 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.616603    1402 kuberuntime_container.go:720] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" podUID=baafed64d7a702145442297aaf1eff14 containerName="kube-scheduler-recovery-controller" containerID="cri-o://9ee4e16af45905237f34e53b4c51f76448c257542d8ef61c2cc5fe46c5a9949f" gracePeriod=1
Feb 02 08:49:24.619076 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.619058    1402 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0" podUID=
Feb 02 08:49:24.653604 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.653579    1402 kubelet.go:2102] "SyncLoop DELETE" source="api" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:24.666739 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:24.666712    1402 kubelet.go:2096] "SyncLoop REMOVE" source="api" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:26.464470 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:26.464444    1402 kubelet.go:2086] "SyncLoop ADD" source="file" pods=[openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0]
Feb 02 08:49:26.603676 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:26.603604    1402 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fa3eab1c379518020396fe71165dff8-resource-dir\") pod \"openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0\" (UID: \"7fa3eab1c379518020396fe71165dff8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0"
Feb 02 08:49:26.603676 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:26.603665    1402 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fa3eab1c379518020396fe71165dff8-cert-dir\") pod \"openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0\" (UID: \"7fa3eab1c379518020396fe71165dff8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0"
Feb 02 08:49:26.704126 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:26.704090    1402 reconciler.go:253] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fa3eab1c379518020396fe71165dff8-resource-dir\") pod \"openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0\" (UID: \"7fa3eab1c379518020396fe71165dff8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0"
Feb 02 08:49:26.704287 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:26.704178    1402 reconciler.go:253] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fa3eab1c379518020396fe71165dff8-cert-dir\") pod \"openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0\" (UID: \"7fa3eab1c379518020396fe71165dff8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0"
Feb 02 08:49:26.704287 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:26.704197    1402 operation_generator.go:755] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7fa3eab1c379518020396fe71165dff8-resource-dir\") pod \"openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0\" (UID: \"7fa3eab1c379518020396fe71165dff8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0"
Feb 02 08:49:26.704287 knarra-ibmfc4-wvccs-master-0 hyperkube[1402]: I0202 08:49:26.704250    1402 operation_generator.go:755] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7fa3eab1c379518020396fe71165dff8-cert-dir\") pod \"openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0\" (UID: \"7fa3eab1c379518020396fe71165dff8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-knarra-ibmfc4-wvccs-master-0"
```

Despite the kube-scheduler static pod manifest being present under the /etc/kubernetes/manifests directory, the pod's containers are not running.

Comment 6 Ryan Phillips 2022-02-03 14:21:48 UTC

*** This bug has been marked as a duplicate of bug 2048756 ***

