Bug 1678537 - defaultCertificateSecret is gone from the clusteringress after changing it to another value and then back to the default
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Routing
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Miciah Dashiel Butler Masters
QA Contact: Hongan Li
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-02-19 02:50 UTC by Hongan Li
Modified: 2019-06-04 10:44 UTC (History)

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:44:14 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:44:20 UTC

Description Hongan Li 2019-02-19 02:50:18 UTC
Description of problem:
defaultCertificateSecret is gone from the clusteringress after changing it to another value and then back to the default.

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-02-17-024922

How reproducible:
always

Steps to Reproduce:
1. check the default clusteringress.
$ oc get clusteringress default -o yaml -n openshift-ingress-operator

2. create a secret "mynew" in the openshift-ingress namespace.
$ oc create secret tls mynew --cert=server.cert --key=server.key -n openshift-ingress

3. edit the clusteringress, change defaultCertificateSecret to "mynew", and verify that it works.
4. edit the clusteringress and change defaultCertificateSecret back to the default ("null").
5. check the clusteringress again.
$ oc get clusteringress default -o yaml -n openshift-ingress-operator

Actual results:
In step 1 the output shows "defaultCertificateSecret: null",
but in step 5 the defaultCertificateSecret field is gone entirely.

Expected results:
defaultCertificateSecret should still be present in the clusteringress after it is changed back to the default value.

Additional info:
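The distinction at issue here is between a field that is serialized with an explicit null value and a field that is omitted from the output entirely. A minimal Python sketch of that distinction (illustrative only; the operator itself is not written in Python):

```python
import json

# Illustrative only: an explicit null value versus an absent field
# in serialized output, which is the difference this report describes.
spec_with_null = {"defaultCertificateSecret": None}
spec_without_field = {}

print(json.dumps(spec_with_null))      # {"defaultCertificateSecret": null}
print(json.dumps(spec_without_field))  # {}
```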

Comment 1 Miciah Dashiel Butler Masters 2019-03-12 04:53:25 UTC
I could not reproduce the reported issue:

    % oc -n openshift-ingress-operator get ingresscontrollers/default  -o yaml
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      creationTimestamp: 2019-03-12T04:18:33Z
      finalizers:
      - ingress.openshift.io/ingress-controller
      generation: 3
      name: default
      namespace: openshift-ingress-operator
      resourceVersion: "22766"
      selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
      uid: e62ead31-447d-11e9-9dbb-0a80bd621726
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""
    status:
      availableReplicas: 2
      domain: apps.mmasters.devcluster.openshift.com
      endpointPublishingStrategy:
        type: LoadBalancerService
      selector: app=router,router=router-default
    % oc create secret tls mynew --cert=/home/mmasters/src/httpclient/test/server.cert --key=/home/mmasters/src/httpclient/test/server.key -n openshift-ingress
    secret/mynew created
    % oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"defaultCertificate":{"name":"mynew"}}}'
    ingresscontroller.operator.openshift.io/default patched
    % oc -n openshift-ingress-operator get ingresscontrollers/default  -o yaml
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      creationTimestamp: 2019-03-12T04:18:33Z
      finalizers:
      - ingress.openshift.io/ingress-controller
      generation: 4
      name: default
      namespace: openshift-ingress-operator
      resourceVersion: "23948"
      selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
      uid: e62ead31-447d-11e9-9dbb-0a80bd621726
    spec:
      defaultCertificate:
        name: mynew
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""
    status:
      availableReplicas: 2
      domain: apps.mmasters.devcluster.openshift.com
      endpointPublishingStrategy:
        type: LoadBalancerService
      selector: app=router,router=router-default
    % oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"defaultCertificate":null}}'
    ingresscontroller.operator.openshift.io/default patched
    % oc -n openshift-ingress-operator get ingresscontrollers/default  -o yaml
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      creationTimestamp: 2019-03-12T04:18:33Z
      finalizers:
      - ingress.openshift.io/ingress-controller
      generation: 5
      name: default
      namespace: openshift-ingress-operator
      resourceVersion: "24120"
      selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
      uid: e62ead31-447d-11e9-9dbb-0a80bd621726
    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker: ""
    status:
      availableReplicas: 2
      domain: apps.mmasters.devcluster.openshift.com
      endpointPublishingStrategy:
        type: LoadBalancerService
      selector: app=router,router=router-default
    % 

Note that the API has changed somewhat since this report was opened.  In particular, the "ClusterIngress" resource was renamed to "IngressController", its .spec.defaultCertificateSecret field was renamed to .spec.defaultCertificate, and the field's type was changed from a string pointer to a local object reference pointer.
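To make the rename concrete, a sketch of the old and new field shapes (the field names are from this comment; the secret name "mynew" is just the example used in this bug):

```yaml
# Old API (ClusterIngress): a plain string field
spec:
  defaultCertificateSecret: mynew

# New API (IngressController): a local object reference
spec:
  defaultCertificate:
    name: mynew
```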

However, even taking those API changes into account, the reported behavior is odd: I see no reason why you would ever see an explicit "null" value in the YAML output for that field.
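One plausible explanation for the field disappearing after the last patch above: `oc patch --type=merge` uses JSON merge patch semantics (RFC 7386), in which an explicit null value deletes the corresponding key rather than storing a null. A minimal Python sketch of those semantics (an illustration of the RFC, not the Kubernetes apiserver's actual implementation):

```python
def json_merge_patch(target, patch):
    """Minimal RFC 7386 JSON merge patch: a null value deletes the key."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null removes the field entirely
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

spec = {"spec": {"defaultCertificate": {"name": "mynew"}, "nodePlacement": {}}}
patched = json_merge_patch(spec, {"spec": {"defaultCertificate": None}})
print(patched)  # {'spec': {'nodePlacement': {}}}
```

This is consistent with the transcript: after patching with `{"spec":{"defaultCertificate":null}}`, the field is absent from the stored object rather than present with a null value.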

Can you try to reproduce the problem again? If you still see it, can you include the full output of the `oc get` commands?

Comment 2 Hongan Li 2019-03-12 05:49:15 UTC
Thank you for your update, Miciah.

The latest nightly build that QE could install successfully is 4.0.0-0.nightly-2019-03-06-074438, which still uses the "ClusterIngress" resource, so I will re-check with a later build and let you know the result.

$ oc -n openshift-ingress-operator get ingresscontrollers
error: the server doesn't have a resource type "ingresscontrollers"

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-03-06-074438   True        False         22h     Cluster version is 4.0.0-0.nightly-2019-03-06-074438

Comment 4 Hongan Li 2019-03-14 05:47:14 UTC
Verified with 4.0.0-0.nightly-2019-03-13-233958; the issue is fixed.

Comment 6 errata-xmlrpc 2019-06-04 10:44:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

