Bug 1844998 - Option '--save-config' does not work with command `oc create job`
Summary: Option '--save-config' does not work with command `oc create job`
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: oc
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.6.0
Assignee: Maciej Szulik
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-08 09:05 UTC by zhou ying
Modified: 2020-10-27 16:06 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The oc create job command was missing the wiring responsible for the --save-config flag. Consequence: The --save-config option did not do what it was meant to do. Fix: Wire up the --save-config flag logic. Result: The --save-config option works as expected. (A sketch of the wiring appears after the header fields below.)
Clone Of:
Environment:
Last Closed: 2020-10-27 16:05:58 UTC
Target Upstream Version:
Embargoed:
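
For context on the Doc Text above: a minimal sketch of the wiring the fix needs, assuming the usual helpers from k8s.io/kubectl (the function and package names below are illustrative, not the actual diff in the linked PRs). A create subcommand has to register the flag, read it while completing its options, and call util.CreateOrUpdateAnnotation on the object before it is sent to the server; that call is what records the kubectl.kubernetes.io/last-applied-configuration annotation.

package create // illustrative package name

import (
	"github.com/spf13/cobra"
	"k8s.io/apimachinery/pkg/runtime"
	cmdutil "k8s.io/kubectl/pkg/cmd/util"
	"k8s.io/kubectl/pkg/scheme"
	"k8s.io/kubectl/pkg/util"
)

// 1. Register the flag when building the cobra command (adds --save-config).
func addSaveConfigFlag(cmd *cobra.Command) {
	cmdutil.AddApplyAnnotationFlags(cmd)
}

// 2. Read the flag value while completing the command's options.
func readSaveConfigFlag(cmd *cobra.Command) bool {
	return cmdutil.GetFlagBool(cmd, cmdutil.ApplyAnnotationsFlag) // "save-config"
}

// 3. Before the object is created, record its configuration in the
// kubectl.kubernetes.io/last-applied-configuration annotation.
func annotateIfRequested(saveConfig bool, obj runtime.Object) error {
	return util.CreateOrUpdateAnnotation(saveConfig, obj, scheme.DefaultJSONEncoder())
}

In oc this only takes effect once the vendored kubectl code containing the wiring is picked up by a k8s bump, which is what the comments below are waiting for.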




Links
GitHub openshift/oc pull 460 (closed): Bug 1844998: Fix --save-config for kubectl create commands, where it was missing (last updated 2021-01-25 12:48:05 UTC)
Red Hat Product Errata RHBA-2020:4196 (last updated 2020-10-27 16:06:27 UTC)

Description zhou ying 2020-06-08 09:05:51 UTC
Description of problem:
Running `oc create job my-job --image=busybox --save-config=true` does not save the configuration of the current object in its annotation.

Version-Release number of selected component (if applicable):
[root@dhcp-140-138 ~]# oc version 
Client Version: 4.5.0-202006061517-711c56a
Server Version: 4.5.0-0.nightly-2020-06-07-080121
Kubernetes Version: v1.18.3+a637491

How reproducible:
Always

Steps to Reproduce:
1. Run `oc create job my-job --image=busybox --save-config=true` to create a job.

Actual results:
1. The configuration of the current object is not saved in the annotation:

[root@dhcp-140-138 ~]# oc get job.batch/my-job -o yaml 
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2020-06-08T08:31:34Z"
  labels:
    controller-uid: 74f06bd8-d8c3-48ab-9bab-1157c7e1c619
    job-name: my-job
  managedFields:
     .....
  name: my-job
  namespace: bugv
  resourceVersion: "171138"
  selfLink: /apis/batch/v1/namespaces/bugv/jobs/my-job
  uid: 74f06bd8-d8c3-48ab-9bab-1157c7e1c619
spec:
     ....

Expected results:
1. The configuration of the current object should be saved in its annotation.

Additional info:
`oc create cronjob .. --save-config=true` has the same issue.

Comment 1 Maciej Szulik 2020-06-08 09:23:42 UTC
Looks like this was never wired in any of the commands; I'm going to deprecate that flag upstream and remove it in the long run.

Comment 2 Maciej Szulik 2020-06-08 12:06:08 UTC
Actually, it's a bug, since some create commands do support it, so I've opened https://github.com/kubernetes/kubernetes/pull/91901
and we'll get this with the next k8s bump.

Comment 3 Maciej Szulik 2020-06-18 10:18:02 UTC
This will be picked up with the next k8s bump.

Comment 4 Maciej Szulik 2020-07-09 11:05:57 UTC
Waiting for next k8s bump.

Comment 7 zhou ying 2020-07-23 03:50:29 UTC
Checked with Client Version: 4.6.0-202007221854.p0-5f270d5; the issue can still be reproduced:

[zhouying@dhcp-140-138 ~]$ oc create job my-job --image=busybox --save-config=true
job.batch/my-job created
[zhouying@dhcp-140-138 ~]$ oc get job.batch/my-job -o yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2020-07-23T03:39:28Z"
  labels:
    controller-uid: 2664c6b5-d6bc-4fb7-9f59-d6c1517aabc2
    job-name: my-job
  managedFields:
  - apiVersion: batch/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:backoffLimit: {}
        f:completions: {}
        f:parallelism: {}
        f:template:
          f:spec:
            f:containers:
              k:{"name":"my-job"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: oc
    operation: Update
    time: "2020-07-23T03:39:28Z"
  - apiVersion: batch/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTime: {}
        f:conditions:
          .: {}
          k:{"type":"Complete"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:startTime: {}
        f:succeeded: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-07-23T03:39:33Z"
  name: my-job
  namespace: zhouy
  resourceVersion: "85813"
  selfLink: /apis/batch/v1/namespaces/zhouy/jobs/my-job
  uid: 2664c6b5-d6bc-4fb7-9f59-d6c1517aabc2

Comment 8 Maciej Szulik 2020-07-23 11:45:01 UTC
Right, the oc bits merged with the linked PR; the remaining piece is an updated k8s version in oc. So this is waiting for that.

Comment 9 Maciej Szulik 2020-08-21 12:05:40 UTC
K8s was bumped in https://github.com/openshift/oc/pull/491

Comment 11 RamaKasturi 2020-08-24 07:33:29 UTC
Verified the bug with the oc version below, and I see that the annotation is stored when using the --save-config=true option.
[ramakasturinarra@dhcp35-60 ~]$ oc version -o yaml
clientVersion:
  buildDate: "2020-08-21T02:37:08Z"
  compiler: gc
  gitCommit: ea0d54068621ec0f95973068729f739f3dacfef7
  gitTreeState: clean
  gitVersion: 4.6.0-202008210209.p0-ea0d540
  goVersion: go1.14.4
  major: ""
  minor: ""
  platform: linux/amd64
openshiftVersion: 4.6.0-0.nightly-2020-08-23-214712
serverVersion:
  buildDate: "2020-08-20T16:46:57Z"
  compiler: gc
  gitCommit: 3e083ac29409923906267ebcc5f8e0aa13072c72
  gitTreeState: dirty
  gitVersion: v1.19.0-rc.2+3e083ac-dirty
  goVersion: go1.14.4
  major: "1"
  minor: 19+
  platform: linux/amd64


[ramakasturinarra@dhcp35-60 ~]$ oc create job my-job --image=busybox --save-config=true
job.batch/my-job created
[ramakasturinarra@dhcp35-60 ~]$ oc get job.batch/my-job -o yaml 
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"kind":"Job","apiVersion":"batch/v1","metadata":{"name":"my-job","creationTimestamp":null},"spec":{"template":{"metadata":{"creationTimestamp":null},"spec":{"containers":[{"name":"my-job","image":"busybox","resources":{}}],"restartPolicy":"Never"}}},"status":{}}
  creationTimestamp: "2020-08-24T07:31:25Z"
  labels:
    controller-uid: 16cac355-712d-433d-942b-3f31ef371f48
    job-name: my-job
  managedFields:
  - apiVersion: batch/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:backoffLimit: {}
        f:completions: {}
        f:parallelism: {}
        f:template:
          f:spec:
            f:containers:
              k:{"name":"my-job"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2020-08-24T07:31:25Z"
  - apiVersion: batch/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:completionTime: {}
        f:conditions:
          .: {}
          k:{"type":"Complete"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:startTime: {}
        f:succeeded: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-08-24T07:31:30Z"
  name: my-job
  namespace: default
  resourceVersion: "117430"
  selfLink: /apis/batch/v1/namespaces/default/jobs/my-job
  uid: 16cac355-712d-433d-942b-3f31ef371f48
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: 16cac355-712d-433d-942b-3f31ef371f48
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 16cac355-712d-433d-942b-3f31ef371f48
        job-name: my-job
    spec:
      containers:
      - image: busybox
        imagePullPolicy: Always
        name: my-job
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  completionTime: "2020-08-24T07:31:30Z"
  conditions:
  - lastProbeTime: "2020-08-24T07:31:30Z"
    lastTransitionTime: "2020-08-24T07:31:30Z"
    status: "True"
    type: Complete
  startTime: "2020-08-24T07:31:25Z"
  succeeded: 1

Moving the bug to verified based on the above.

Comment 13 errata-xmlrpc 2020-10-27 16:05:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

