Bug 1844998
| Summary: | Option '--save-config' does not work with command `oc create job` | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | zhou ying <yinzhou> |
| Component: | oc | Assignee: | Maciej Szulik <maszulik> |
| Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 4.5 | CC: | aos-bugs, jokerman, knarra, mfojtik |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Cause: The `oc create job` command was missing the wiring for the `--save-config` flag. Consequence: The `--save-config` option had no effect. Fix: Wire up the `--save-config` flag logic. Result: The `--save-config` option works as expected. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-10-27 16:05:58 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
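For context on what the missing wiring was supposed to do: `--save-config` stores the configuration used to create an object as JSON under the `kubectl.kubernetes.io/last-applied-configuration` annotation (the same annotation `kubectl apply` consumes). The real implementation lives in kubectl's Go codebase; the following is only a minimal Python sketch of the semantics, with a hypothetical `save_config` helper:

```python
import json

# Annotation key used by kubectl/oc for --save-config and `kubectl apply`.
LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

def save_config(obj: dict) -> dict:
    """Sketch of the --save-config behavior: snapshot the object's own
    configuration as compact JSON and store it under the
    last-applied-configuration annotation."""
    # Serialize the object as submitted, before adding the annotation itself.
    config = json.dumps(obj, separators=(",", ":")) + "\n"
    annotations = obj.setdefault("metadata", {}).setdefault("annotations", {})
    annotations[LAST_APPLIED] = config
    return obj

# Minimal Job-like object, mirroring `oc create job my-job --image=busybox`.
job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "my-job"},
}
saved = save_config(job)
print(LAST_APPLIED in saved["metadata"]["annotations"])  # → True
```

Before the fix, `oc create job` never called this code path, so the annotation was simply absent from the created object.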
Looks like this was never wired into any of the commands; I'm going to deprecate the flag upstream and remove it in the long run.

Actually, it's a bug, since some create commands do support it, so I've opened https://github.com/kubernetes/kubernetes/pull/91901 and we'll get this with the next k8s bump.

Waiting for the next k8s bump.

Checked with Client Version: 4.6.0-202007221854.p0-5f270d5; the issue still reproduces:
[zhouying@dhcp-140-138 ~]$ oc create job my-job --image=busybox --save-config=true
job.batch/my-job created
[zhouying@dhcp-140-138 ~]$ oc get job.batch/my-job -o yaml
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: "2020-07-23T03:39:28Z"
labels:
controller-uid: 2664c6b5-d6bc-4fb7-9f59-d6c1517aabc2
job-name: my-job
managedFields:
- apiVersion: batch/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:backoffLimit: {}
f:completions: {}
f:parallelism: {}
f:template:
f:spec:
f:containers:
k:{"name":"my-job"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: oc
operation: Update
time: "2020-07-23T03:39:28Z"
- apiVersion: batch/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:completionTime: {}
f:conditions:
.: {}
k:{"type":"Complete"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
f:startTime: {}
f:succeeded: {}
manager: kube-controller-manager
operation: Update
time: "2020-07-23T03:39:33Z"
name: my-job
namespace: zhouy
resourceVersion: "85813"
selfLink: /apis/batch/v1/namespaces/zhouy/jobs/my-job
uid: 2664c6b5-d6bc-4fb7-9f59-d6c1517aabc2
Right, the oc bits merged with the linked PR; the remaining piece is an updated k8s version in oc, so this is waiting for that.

K8s was bumped in https://github.com/openshift/oc/pull/491

Verified the bug in the oc version below, and I see that the annotation is stored when using the --save-config=true option:
[ramakasturinarra@dhcp35-60 ~]$ oc version -o yaml
clientVersion:
buildDate: "2020-08-21T02:37:08Z"
compiler: gc
gitCommit: ea0d54068621ec0f95973068729f739f3dacfef7
gitTreeState: clean
gitVersion: 4.6.0-202008210209.p0-ea0d540
goVersion: go1.14.4
major: ""
minor: ""
platform: linux/amd64
openshiftVersion: 4.6.0-0.nightly-2020-08-23-214712
serverVersion:
buildDate: "2020-08-20T16:46:57Z"
compiler: gc
gitCommit: 3e083ac29409923906267ebcc5f8e0aa13072c72
gitTreeState: dirty
gitVersion: v1.19.0-rc.2+3e083ac-dirty
goVersion: go1.14.4
major: "1"
minor: 19+
platform: linux/amd64
[ramakasturinarra@dhcp35-60 ~]$ oc create job my-job --image=busybox --save-config=true
job.batch/my-job created
[ramakasturinarra@dhcp35-60 ~]$ oc get job.batch/my-job -o yaml
apiVersion: batch/v1
kind: Job
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"kind":"Job","apiVersion":"batch/v1","metadata":{"name":"my-job","creationTimestamp":null},"spec":{"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":[{"name":"my-job","image":"busybox","resources":{}}],"restartPolicy":"Never"}}},"status":{}}
creationTimestamp: "2020-08-24T07:31:25Z"
labels:
controller-uid: 16cac355-712d-433d-942b-3f31ef371f48
job-name: my-job
managedFields:
- apiVersion: batch/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:backoffLimit: {}
f:completions: {}
f:parallelism: {}
f:template:
f:spec:
f:containers:
k:{"name":"my-job"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kubectl-create
operation: Update
time: "2020-08-24T07:31:25Z"
- apiVersion: batch/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:completionTime: {}
f:conditions:
.: {}
k:{"type":"Complete"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
f:startTime: {}
f:succeeded: {}
manager: kube-controller-manager
operation: Update
time: "2020-08-24T07:31:30Z"
name: my-job
namespace: default
resourceVersion: "117430"
selfLink: /apis/batch/v1/namespaces/default/jobs/my-job
uid: 16cac355-712d-433d-942b-3f31ef371f48
spec:
backoffLimit: 6
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: 16cac355-712d-433d-942b-3f31ef371f48
template:
metadata:
creationTimestamp: null
labels:
controller-uid: 16cac355-712d-433d-942b-3f31ef371f48
job-name: my-job
spec:
containers:
- image: busybox
imagePullPolicy: Always
name: my-job
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
completionTime: "2020-08-24T07:31:30Z"
conditions:
- lastProbeTime: "2020-08-24T07:31:30Z"
lastTransitionTime: "2020-08-24T07:31:30Z"
status: "True"
type: Complete
startTime: "2020-08-24T07:31:25Z"
succeeded: 1
Moving the bug to verified based on the above.
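The verification above inspects the YAML by eye. For the record, the stored annotation is plain JSON, so the same check can be done programmatically. This sketch uses the exact annotation value captured in the verified output above and confirms it round-trips to the created Job:

```python
import json

# last-applied-configuration JSON as captured in the verified output above.
annotation = (
    '{"kind":"Job","apiVersion":"batch/v1","metadata":{"name":"my-job",'
    '"creationTimestamp":null},"spec":{"template":{"metadata":'
    '{"creationTimestamp":null},"spec":{"containers":[{"name":"my-job",'
    '"image":"busybox","resources":{}}],"restartPolicy":"Never"}}},"status":{}}'
)

saved = json.loads(annotation)
# The saved configuration should describe the object that was created.
assert saved["kind"] == "Job"
assert saved["metadata"]["name"] == "my-job"
assert saved["spec"]["template"]["spec"]["containers"][0]["image"] == "busybox"
print("last-applied-configuration matches the created Job")
```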
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196 |
Description of problem:
The command `oc create job my-job --image=busybox --save-config=true` does not save the configuration of the current object in its annotation.

Version-Release number of selected component (if applicable):
[root@dhcp-140-138 ~]# oc version
Client Version: 4.5.0-202006061517-711c56a
Server Version: 4.5.0-0.nightly-2020-06-07-080121
Kubernetes Version: v1.18.3+a637491

How reproducible:
Always

Steps to Reproduce:
1. Run `oc create job my-job --image=busybox --save-config=true` to create the job.

Actual results:
1. The configuration of the current object is not saved in the annotation:
[root@dhcp-140-138 ~]# oc get job.batch/my-job -o yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2020-06-08T08:31:34Z"
  labels:
    controller-uid: 74f06bd8-d8c3-48ab-9bab-1157c7e1c619
    job-name: my-job
  managedFields:
  .....
  name: my-job
  namespace: bugv
  resourceVersion: "171138"
  selfLink: /apis/batch/v1/namespaces/bugv/jobs/my-job
  uid: 74f06bd8-d8c3-48ab-9bab-1157c7e1c619
spec:
  ....

Expected results:
1. The configuration of the current object should be saved in its annotation.

Additional info:
`oc create cronjob .. --save-config=true` also has the same issue.