Performed the steps below to verify the issue; all checks against the test steps passed.
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-03-29-152521   True        False         3h38m   Cluster version is 4.11.0-0.nightly-2022-03-29-152521
Get the ClusterIP services and pods.
$ oc get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)    AGE
kubernetes         ClusterIP      172.30.0.1      <none>                                 443/TCP    4h5m
openshift          ExternalName   <none>          kubernetes.default.svc.cluster.local   <none>     4h
ruby-hello-world   ClusterIP      172.30.240.32   <none>                                 8080/TCP   31s
$ oc get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE                                             NOMINATED NODE   READINESS GATES
ruby-hello-world-1-build   1/1     Running   0          43s   10.128.2.12   jmekkatt-fel-7676m-worker-northcentralus-5kpvm   <none>           <none>
Get the master nodes
$ oc get nodes | grep master
jmekkatt-fel-7676m-master-0   Ready   master   4h3m   v1.23.3+54654d2
jmekkatt-fel-7676m-master-1   Ready   master   4h2m   v1.23.3+54654d2
jmekkatt-fel-7676m-master-2   Ready   master   4h3m   v1.23.3+54654d2
Remove the node-to-pod networking route entries from one of the masters.
$ oc debug node/jmekkatt-fel-7676m-master-0
Starting pod/jmekkatt-fel-7676m-master-0-debug ...
sh-4.4# chroot /host
sh-4.4# route -n
Kernel IP routing table
Destination       Gateway    Genmask           Flags   Metric   Ref   Use   Iface
0.0.0.0           10.0.0.1   0.0.0.0           UG      100      0     0     eth0
10.0.0.0          0.0.0.0    255.255.128.0     U       100      0     0     eth0
10.128.0.0        0.0.0.0    255.252.0.0       U       0        0     0     tun0
168.63.129.16     10.0.0.1   255.255.255.255   UGH     100      0     0     eth0
169.254.169.254   10.0.0.1   255.255.255.255   UGH     100      0     0     eth0
172.30.0.0        0.0.0.0    255.255.0.0       U       0        0     0     tun0
sh-4.4# route del -net 10.128.0.0 gw 0.0.0.0 netmask 255.252.0.0 tun0
sh-4.4# route del -net 172.30.0.0 gw 0.0.0.0 netmask 255.255.0.0 tun0
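For reference, the two entries deleted above are the pod network (10.128.0.0/14) and service network (172.30.0.0/16) routes over tun0. A small sketch of how such entries can be picked out of `route -n`-style output with awk, using a few rows from the table above hardcoded as sample data:

```shell
# Sample rows from the `route -n` output captured above (hardcoded for illustration)
table='10.128.0.0 0.0.0.0 255.252.0.0 U 0 0 0 tun0
172.30.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tun0
0.0.0.0 10.0.0.1 0.0.0.0 UG 100 0 0 eth0'

# Print Destination/Genmask for every route whose interface (field 8) is tun0
echo "$table" | awk '$8 == "tun0" { print $1 "/" $3 }'
```

This prints only the two tun0 routes (`10.128.0.0/255.252.0.0` and `172.30.0.0/255.255.0.0`), which is one way to confirm which entries the `route del` commands removed.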
See the definition of the Job object:
$ cat job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 1
  completions: 1
  activeDeadlineSeconds: 1800
  backoffLimit: 6
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: OnFailure
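The Job's container computes π to 2000 decimal places with Perl's `bignum` (`bpi(2000)`). A rough stdlib-only Python equivalent of that workload, using `decimal` with Machin's formula (this is just an illustrative sketch, not what the `perl` image runs):

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) via the Taylor series sum of (-1)^k / ((2k+1) * x^(2k+1))."""
    total = Decimal(0)
    k = 0
    while True:
        term = Decimal(1) / ((2 * k + 1) * x ** (2 * k + 1))
        if term < Decimal(10) ** -(digits + 5):
            break
        total += term if k % 2 == 0 else -term
        k += 1
    return total

def compute_pi(digits: int) -> str:
    """pi via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10  # extra guard digits for intermediate results
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return str(pi)[: digits + 2]  # "3." plus `digits` decimal places

print(compute_pi(50))
```

Raising the argument to `compute_pi` toward 2000 gives the same kind of long-running, CPU-bound, run-to-completion task the Job above models.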
Create the Job and check the status of the Job and its pods - both complete successfully.
$ oc create -f job.yaml
job.batch/pi created
$ oc get jobs
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           31s        43s
$ oc get pods
NAME                                READY   STATUS      RESTARTS   AGE
pi-7fnb7                            0/1     Completed   0          49s
ruby-hello-world-1-build            0/1     Completed   0          4m8s
ruby-hello-world-6856ff6f59-prff2   1/1     Running     0          2m55s
Moved the bug to VERIFIED as the sanity tests work as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2022:5069