Bug 2069310 - extend rest mappings with 'job' definition
Summary: extend rest mappings with 'job' definition
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.8
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.9.z
Assignee: Abu Kashem
QA Contact: jmekkatt
URL:
Whiteboard: EmergencyRequest
Depends On: 2069311 2073153 2075043
Blocks: 2063953
 
Reported: 2022-03-28 17:15 UTC by Abu Kashem
Modified: 2022-05-03 07:35 UTC
CC List: 16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2063953
Clones: 2069311
Environment:
Last Closed: 2022-05-03 07:35:34 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift kubernetes pull 1233 0 None open Bug 2069310: UPSTREAM: <carry>: use hardcoded rest mapper from library-go 2022-04-05 12:57:48 UTC
Github openshift library-go pull 1339 0 None open [release-4.9]: Bug 2069310: Extend rest mappings with job definition 2022-04-05 12:14:12 UTC
Red Hat Product Errata RHBA-2022:1605 0 None None None 2022-05-03 07:35:52 UTC

Comment 1 Christopher Brown 2022-04-12 08:28:48 UTC
Is this the correct 4.8 backport bug? The target release shows as 4.9.z.

Comment 2 Abu Kashem 2022-04-15 00:13:18 UTC
kewang,
https://github.com/openshift/kubernetes/pull/1233 needs cherry-pick approval to merge and then be verified. Once it is verified on 4.9, we will open a new PR for 4.8. The customer is blocked and waiting on the 4.8 fix.
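
The actual change lives in the linked openshift/kubernetes and library-go PRs. As a rough illustration of what "extending the REST mappings with a 'job' definition" means, here is a minimal Go sketch using apimachinery's DefaultRESTMapper to register batch/v1 Job as a hardcoded, discovery-free mapping; this is an assumption about the shape of the fix, not the library-go code itself.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Build a mapper that knows about batch/v1 without any live discovery calls.
	batchV1 := schema.GroupVersion{Group: "batch", Version: "v1"}
	mapper := meta.NewDefaultRESTMapper([]schema.GroupVersion{batchV1})

	// Register the Job kind as a namespace-scoped resource (pluralized to "jobs").
	mapper.Add(batchV1.WithKind("Job"), meta.RESTScopeNamespace)

	// Resolve the mapping the way a consumer of the mapper would.
	mapping, err := mapper.RESTMapping(schema.GroupKind{Group: "batch", Kind: "Job"}, "v1")
	if err != nil {
		panic(err)
	}
	fmt.Println(mapping.Resource) // batch/v1, Resource=jobs
}

With a hardcoded entry like this, a Job mapping can be resolved even when the API server cannot reach the pod or service networks, which is what the verification in comment 5 exercises.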

Comment 5 jmekkatt 2022-04-18 14:40:26 UTC
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2022-04-18-075504   True        False         2m27s   Cluster version is 4.9.0-0.nightly-2022-04-18-075504
 
Created the Ruby app and checked the Service and pod IP assignments.
$ oc get svc
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP                            PORT(S)   AGE
kubernetes   ClusterIP      172.30.0.1   <none>                                 443/TCP   24m
openshift    ExternalName   <none>       kubernetes.default.svc.cluster.local   <none>    16m
 
$ oc get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE                                                        NOMINATED NODE   READINESS GATES
ruby-hello-world-1-build   1/1     Running   0          17s   10.129.2.15   xxxxx-hmk-6zhcf-worker-b-txdgz.c.openshift-qe.internal   <none>           <none>
 
Got the nodes, chose one of the master nodes, and removed its node-to-pod networking route entries.
$ oc get nodes | grep master
xxxxx-hmk-6zhcf-master-0.c.openshift-qe.internal         Ready    master   22m   v1.22.8+c02bd9d
xxxxx-hmk-6zhcf-master-1.c.openshift-qe.internal         Ready    master   22m   v1.22.8+c02bd9d
xxxxx-hmk-6zhcf-master-2.c.openshift-qe.internal         Ready    master   22m   v1.22.8+c02bd9d
 
$ oc debug node/xxxxx-hmk-6zhcf-master-0.c.openshift-qe.internal
Starting pod/xxxxx-hmk-6zhcf-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.0.4
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    100    0        0 ens4
10.0.0.1        0.0.0.0         255.255.255.255 UH    100    0        0 ens4
10.128.0.0      0.0.0.0         255.252.0.0     U     0      0        0 tun0
172.30.0.0      0.0.0.0         255.255.0.0     U     0      0        0 tun0
sh-4.4# route del -net 10.128.0.0 gw 0.0.0.0 netmask 255.252.0.0 tun0
sh-4.4# route del -net 172.30.0.0 gw 0.0.0.0 netmask 255.255.0.0 tun0
sh-4.4# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    100    0        0 ens4
10.0.0.1        0.0.0.0         255.255.255.255 UH    100    0        0 ens4
 
Created an inline Job object and checked the job status.
$ cat job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 1    
  completions: 1    
  activeDeadlineSeconds: 1800
  backoffLimit: 6  
  template:        
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: OnFailure    
$ oc create -f job.yaml
job.batch/pi created
 
$ oc get jobs
NAME   COMPLETIONS   DURATION   AGE
pi     1/1           33s        40s
 
Deleted the existing pod and checked the status.
 
$ oc delete pod ruby-hello-world-7d96bd5c7f-rw9xf
pod "ruby-hello-world-7d96bd5c7f-rw9xf" deleted
 
$ oc get pods
NAME                                READY   STATUS      RESTARTS   AGE
pi--1-5s449                         0/1     Completed   0          102s
ruby-hello-world-1-build            0/1     Completed   0          3m18s
ruby-hello-world-7d96bd5c7f-phvt6   1/1     Running     0          32s
 
Both jobs and pods are Completed/Running without any issue; hence, marking the BZ verified.

Comment 10 errata-xmlrpc 2022-05-03 07:35:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.9.31 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1605

