Bug 1608128 - the "apiVersion: extensions/v1beta1" of the olm deployment should be replaced with "apiVersion: apps/v1"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.11.0
Assignee: Evan Cordell
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-25 03:16 UTC by Jian Zhang
Modified: 2018-10-11 07:22 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-11 07:22:06 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:2652 0 None None None 2018-10-11 07:22:29 UTC

Description Jian Zhang 2018-07-25 03:16:52 UTC
Description of problem:
apps/v1 is GA, so we recommend updating the apiVersion from extensions/v1beta1 to apps/v1 in the OLM deployment.
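
For reference, a minimal sketch of what the corrected manifest header would look like (illustrative only; the names below are taken from the catalog-operator deployment shown later in this report). Note that apps/v1 also requires an explicit spec.selector that matches the pod template labels:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: catalog-operator
    namespace: operator-lifecycle-manager
    labels:
      app: catalog-operator
  spec:
    replicas: 1
    selector:            # required (and immutable) in apps/v1
      matchLabels:
        app: catalog-operator
    template:
      metadata:
        labels:
          app: catalog-operator
      spec:
        serviceAccountName: olm-operator-serviceaccount
        containers:
        - name: catalog-operator
          image: quay.io/coreos/catalog@sha256:20886d49205aa8d8fd53f1c85fad6a501775226da25ef14f51258b7066e91064
          command: ["/bin/catalog", "-namespace", "operator-lifecycle-manager", "-debug"]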

Version-Release number of selected component (if applicable):
OCP 3.11
ansible 2.6 and openshift-ansible master branch.
oc v3.11.0-0.9.0

How reproducible:
always

Steps to Reproduce:
1. Build an OCP 3.11 cluster with OLM enabled.
openshift_ansible_vars:
  openshift_enable_olm: true

2. Check the "apiVersion" field of the OLM deployment, for example with the command shown below.
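
For example (a sketch, assuming the default operator-lifecycle-manager namespace used by the installer):

  oc get deployment -n operator-lifecycle-manager -o yaml | grep apiVersion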

Actual results:
[root@qe-jiazha-311master-etcd-1 ~]# oc get deployment -o yaml | grep apiVersion
apiVersion: v1
- apiVersion: extensions/v1beta1
                apiVersion: v1
- apiVersion: extensions/v1beta1


Expected results:
Should be "apiVersion: apps/v1", not "apiVersion: extensions/v1beta1".

Additional info:
The related openshift-ansible PR and comments are here: https://github.com/openshift/openshift-ansible/pull/9163#discussion-diff-204823752R3

Comment 1 Scott Dodson 2018-08-09 14:13:25 UTC
https://github.com/openshift/openshift-ansible/pull/9503

not sure if this is correct or not

Comment 2 Evan Cordell 2018-08-10 15:05:20 UTC
This is fixed and merged into master.

There's another pending PR to remove files that use the old extensions group, but they're not included in the install anyway, so it shouldn't affect this report.

https://github.com/openshift/openshift-ansible/pull/9527

Comment 3 Jian Zhang 2018-08-15 05:32:58 UTC
I used the latest master branch to install the OLM component, but the apiVersion is still "extensions/v1beta1". Verification failed.

[root@qe-jiazha-round3master-etcd-1 ~]# oc get pods
NAME                                READY     STATUS    RESTARTS   AGE
alm-operator-798c765f5c-f5rqd       1/1       Running   0          2h
catalog-operator-548958ff7f-ps8j2   1/1       Running   0          2h
[root@qe-jiazha-round3master-etcd-1 ~]# oc get deployment -o yaml | grep apiVersion
apiVersion: v1
- apiVersion: extensions/v1beta1
                apiVersion: v1
- apiVersion: extensions/v1beta1
[root@qe-jiazha-round3master-etcd-1 ~]# oc get pods -o yaml | grep apiVersion
apiVersion: v1
- apiVersion: v1
    - apiVersion: apps/v1
            apiVersion: v1
- apiVersion: v1
    - apiVersion: apps/v1


[jzhang@localhost openshift-ansible]$ git log
commit 734bd6e878b61b01556e07284d839ae92a104159
Merge: 5032be3 c5616bf
Author: OpenShift Merge Robot <openshift-merge-robot.github.com>
Date:   Tue Aug 14 18:05:57 2018 -0700

    Merge pull request #9582 from dav1x/chg_vsphere_sc_def_name
    
    change default sc name

Comment 4 Evan Cordell 2018-08-16 12:08:00 UTC
The files are apps/v1 in master (and have been for 9 days): https://github.com/openshift/openshift-ansible/blob/master/roles/olm/files/12-alm-operator.deployment.yaml#L3
https://github.com/openshift/openshift-ansible/blob/master/roles/olm/files/13-catalog-operator.deployment.yaml#L3

Can I see the full output from `oc get deployment -o yaml`? I can't reproduce this.

Comment 5 Jian Zhang 2018-08-20 05:32:43 UTC
Sure, below is the full output. I used the master branch of openshift-ansible to install it.

[root@qe-jiazha-round3master-etcd-1 ~]# oc version
oc v3.11.0-0.17.0
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://qe-jiazha-round3master-etcd-1:8443
openshift v3.11.0-0.17.0
kubernetes v1.11.0+d4cacc0

[root@qe-jiazha-round3master-etcd-1 ~]# oc get deployment catalog-operator -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-08-20T01:42:14Z
  generation: 1
  labels:
    app: catalog-operator
  name: catalog-operator
  namespace: operator-lifecycle-manager
  resourceVersion: "4305"
  selfLink: /apis/extensions/v1beta1/namespaces/operator-lifecycle-manager/deployments/catalog-operator
  uid: 437f16c3-a41a-11e8-b134-42010af0001b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: catalog-operator
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: catalog-operator
    spec:
      containers:
      - command:
        - /bin/catalog
        - -namespace
        - operator-lifecycle-manager
        - -debug
        image: quay.io/coreos/catalog@sha256:20886d49205aa8d8fd53f1c85fad6a501775226da25ef14f51258b7066e91064
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: catalog-operator
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: coreos-pull-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: olm-operator-serviceaccount
      serviceAccountName: olm-operator-serviceaccount
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-08-20T01:42:33Z
    lastUpdateTime: 2018-08-20T01:42:33Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-08-20T01:42:14Z
    lastUpdateTime: 2018-08-20T01:42:33Z
    message: ReplicaSet "catalog-operator-548958ff7f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

[root@qe-jiazha-round3master-etcd-1 ~]# oc get deployment alm-operator -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-08-20T01:42:11Z
  generation: 1
  labels:
    app: alm-operator
  name: alm-operator
  namespace: operator-lifecycle-manager
  resourceVersion: "4302"
  selfLink: /apis/extensions/v1beta1/namespaces/operator-lifecycle-manager/deployments/alm-operator
  uid: 421a3510-a41a-11e8-b134-42010af0001b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: alm-operator
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alm-operator
    spec:
      containers:
      - command:
        - /bin/olm
        env:
        - name: OPERATOR_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: OPERATOR_NAME
          value: alm-operator
        image: quay.io/coreos/olm@sha256:44b445850b3e612c062424c3727bb85048ec8e71407b39985786d29aa20f5c79
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: alm-operator
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: coreos-pull-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: olm-operator-serviceaccount
      serviceAccountName: olm-operator-serviceaccount
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-08-20T01:42:32Z
    lastUpdateTime: 2018-08-20T01:42:32Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-08-20T01:42:12Z
    lastUpdateTime: 2018-08-20T01:42:32Z
    message: ReplicaSet "alm-operator-798c765f5c" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

PS: below is the full output of the pods.
[root@qe-jiazha-round3master-etcd-1 ~]# oc get pods -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: anyuid
    creationTimestamp: 2018-08-20T01:42:12Z
    generateName: alm-operator-798c765f5c-
    labels:
      app: alm-operator
      pod-template-hash: "3547321917"
    name: alm-operator-798c765f5c-8h9t2
    namespace: operator-lifecycle-manager
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: alm-operator-798c765f5c
      uid: 42258764-a41a-11e8-b134-42010af0001b
    resourceVersion: "4300"
    selfLink: /api/v1/namespaces/operator-lifecycle-manager/pods/alm-operator-798c765f5c-8h9t2
    uid: 4234dc2e-a41a-11e8-b134-42010af0001b
  spec:
    containers:
    - command:
      - /bin/olm
      env:
      - name: OPERATOR_NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
      - name: OPERATOR_NAME
        value: alm-operator
      image: quay.io/coreos/olm@sha256:44b445850b3e612c062424c3727bb85048ec8e71407b39985786d29aa20f5c79
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 3
        httpGet:
          path: /healthz
          port: 8080
          scheme: HTTP
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      name: alm-operator
      ports:
      - containerPort: 8080
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /healthz
          port: 8080
          scheme: HTTP
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources: {}
      securityContext:
        capabilities:
          drop:
          - MKNOD
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: olm-operator-serviceaccount-token-bvltt
        readOnly: true
    dnsPolicy: ClusterFirst
    imagePullSecrets:
    - name: coreos-pull-secret
    nodeName: qe-jiazha-round3node-registry-router-1
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      seLinuxOptions:
        level: s0:c17,c9
    serviceAccount: olm-operator-serviceaccount
    serviceAccountName: olm-operator-serviceaccount
    terminationGracePeriodSeconds: 30
    volumes:
    - name: olm-operator-serviceaccount-token-bvltt
      secret:
        defaultMode: 420
        secretName: olm-operator-serviceaccount-token-bvltt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2018-08-20T01:42:12Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2018-08-20T01:42:32Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2018-08-20T01:42:12Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://0bd130b915d41a120c0417662c327c53395967be0ceb286060b0aedbe7a4743e
      image: quay.io/coreos/olm@sha256:44b445850b3e612c062424c3727bb85048ec8e71407b39985786d29aa20f5c79
      imageID: docker-pullable://quay.io/coreos/olm@sha256:44b445850b3e612c062424c3727bb85048ec8e71407b39985786d29aa20f5c79
      lastState: {}
      name: alm-operator
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2018-08-20T01:42:25Z
    hostIP: 10.240.0.28
    phase: Running
    podIP: 10.129.0.7
    qosClass: BestEffort
    startTime: 2018-08-20T01:42:12Z
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      openshift.io/scc: anyuid
    creationTimestamp: 2018-08-20T01:42:14Z
    generateName: catalog-operator-548958ff7f-
    labels:
      app: catalog-operator
      pod-template-hash: "1045149939"
    name: catalog-operator-548958ff7f-d2lqn
    namespace: operator-lifecycle-manager
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: catalog-operator-548958ff7f
      uid: 43803ca5-a41a-11e8-b134-42010af0001b
    resourceVersion: "4303"
    selfLink: /api/v1/namespaces/operator-lifecycle-manager/pods/catalog-operator-548958ff7f-d2lqn
    uid: 43897b75-a41a-11e8-b134-42010af0001b
  spec:
    containers:
    - command:
      - /bin/catalog
      - -namespace
      - operator-lifecycle-manager
      - -debug
      image: quay.io/coreos/catalog@sha256:20886d49205aa8d8fd53f1c85fad6a501775226da25ef14f51258b7066e91064
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 3
        httpGet:
          path: /healthz
          port: 8080
          scheme: HTTP
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      name: catalog-operator
      ports:
      - containerPort: 8080
        protocol: TCP
      readinessProbe:
        failureThreshold: 3
        httpGet:
          path: /healthz
          port: 8080
          scheme: HTTP
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources: {}
      securityContext:
        capabilities:
          drop:
          - MKNOD
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: olm-operator-serviceaccount-token-bvltt
        readOnly: true
    dnsPolicy: ClusterFirst
    imagePullSecrets:
    - name: coreos-pull-secret
    nodeName: qe-jiazha-round3node-registry-router-1
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext:
      seLinuxOptions:
        level: s0:c17,c9
    serviceAccount: olm-operator-serviceaccount
    serviceAccountName: olm-operator-serviceaccount
    terminationGracePeriodSeconds: 30
    volumes:
    - name: olm-operator-serviceaccount-token-bvltt
      secret:
        defaultMode: 420
        secretName: olm-operator-serviceaccount-token-bvltt
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: 2018-08-20T01:42:14Z
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: 2018-08-20T01:42:33Z
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: 2018-08-20T01:42:14Z
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://7558fae6fab4850308faca12f041865bbd1455d07dc37a4d1a045fd5a6383012
      image: quay.io/coreos/catalog@sha256:20886d49205aa8d8fd53f1c85fad6a501775226da25ef14f51258b7066e91064
      imageID: docker-pullable://quay.io/coreos/catalog@sha256:20886d49205aa8d8fd53f1c85fad6a501775226da25ef14f51258b7066e91064
      lastState: {}
      name: catalog-operator
      ready: true
      restartCount: 0
      state:
        running:
          startedAt: 2018-08-20T01:42:26Z
    hostIP: 10.240.0.28
    phase: Running
    podIP: 10.129.0.8
    qosClass: BestEffort
    startTime: 2018-08-20T01:42:14Z
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Comment 6 Evan Cordell 2018-08-21 13:16:47 UTC
I'm not sure how to resolve this. The files that I linked to are clearly apps/v1, and they're applied to the cluster with:

https://github.com/openshift/openshift-ansible/blob/master/roles/olm/tasks/install.yaml#L92-L108

My only thought is that the `oc_obj` command in openshift-ansible sets the type to extensions/v1beta1 because I have kind: Deployment set. 

Is this not an issue for any other component using `oc_obj`?
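
One way to narrow this down (a sketch, using the object and namespace names from the output above): read the same deployment through both API groups and compare what the server reports; the apiVersion in the response reflects the endpoint that was queried, not the manifest that created the object.

  oc get deployment.v1beta1.extensions alm-operator -n operator-lifecycle-manager -o jsonpath='{.apiVersion}{"\n"}'
  oc get deployment.v1.apps alm-operator -n operator-lifecycle-manager -o jsonpath='{.apiVersion}{"\n"}'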

Comment 7 Jian Zhang 2018-08-22 02:38:05 UTC
Thanks, Evan. I believe this is another `oc` default display issue: the deployment is created as apps/v1, but a plain `oc get deployment` reads it through the extensions/v1beta1 endpoint, so the reported apiVersion reflects the endpoint queried rather than the manifest that was applied. I will sync with the master team about this issue. For this bug, LGTM.

[root@qe-share-311-master-etcd-1 ~]# oc get deployment.v1.apps -o yaml -n operator-lifecycle-manager
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: 2018-08-21T02:56:42Z
    generation: 1
    labels:
      app: alm-operator
    name: alm-operator
    namespace: operator-lifecycle-manager
    resourceVersion: "90255"
    selfLink: /apis/apps/v1/namespaces/operator-lifecycle-manager/deployments/alm-operator
    uid: d5512e8d-a4ed-11e8-880d-0050569f5ef1
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
...

Comment 9 errata-xmlrpc 2018-10-11 07:22:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652

