Bug 1670280 - Improper CVO status for openshift-cluster-samples-operator
Summary: Improper CVO status for openshift-cluster-samples-operator
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: ImageStreams
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.1.0
Assignee: Gabe Montero
QA Contact: XiuJuan Wang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-29 06:52 UTC by XiuJuan Wang
Modified: 2019-06-04 10:42 UTC
CC List: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:42:19 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:42:25 UTC

Description XiuJuan Wang 2019-01-29 06:52:41 UTC
Description of problem:
When the samples operator is set to Removed, the Progressing condition of the openshift-cluster-samples-operator CVO status should not show an error.

Version-Release number of selected component (if applicable):
$  oc get  clusterversion 
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.nightly-2019-01-29-025207   True        False         1h        Cluster version is 4.0.0-0.nightly-2019-01-29-025207

How reproducible:
always

Steps to Reproduce:
1. Create the samples-registry-credentials secret under openshift-cluster-samples-operator.
2. Set the samples operator to Removed, then set it back to Managed and set installtype to rhel.
3. After all imagestream imports succeed, set the samples operator to Removed again.
4. Check the openshift-cluster-samples-operator CVO status (example commands are sketched below).
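
A minimal sketch of steps 2-4, assuming the samples operator is driven by a configs.samples.operator.openshift.io resource named cluster with spec.managementState and spec.installType fields (the resource and field names are assumptions and may differ in this build):

$ # step 2 (hypothetical resource/field names): Removed, then back to Managed with the rhel install type
$ oc patch configs.samples.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Removed"}}'
$ oc patch configs.samples.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed","installType":"rhel"}}'
$ # step 3: after imagestream imports succeed, remove again
$ oc patch configs.samples.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Removed"}}'
$ # step 4: inspect the ClusterOperator conditions
$ oc describe clusteroperator openshift-cluster-samples-operator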

Actual results:
Step 4: Failing is True without a detailed reason, and Progressing is True with an error message.

$ oc describe clusteroperator  openshift-cluster-samples-operator 
Name:         openshift-cluster-samples-operator
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-01-29T04:55:08Z
  Generation:          1
  Resource Version:    77377
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/openshift-cluster-samples-operator
  UID:                 0d56903e-2382-11e9-8599-0232fe121a86
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-01-29T06:37:52Z
    Status:                False
    Type:                  Available
    Last Transition Time:  2019-01-29T06:37:52Z
    Message:               Samples installation in error at v4.0.0-0.149.0.0-2ee54c9ca: image pull credentials needed
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-01-29T06:37:52Z
    Message:               Samples installation in error at v4.0.0-0.149.0.0-2ee54c9ca: 
    Status:                True
    Type:                  Failing
  Extension:               <nil>
  Version:                 
Events:                    <none>


Expected results:
The CVO status should reflect the Removed state (rather than errors) when the samples operator is set to Removed.

Additional info:
For comparison, when the image registry operator is set to Removed, the CVO status is correct (a sketch of the equivalent patch follows the YAML below).

$ oc get clusteroperator cluster-image-registry-operator  -o yaml 
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-01-29T04:51:35Z
  generation: 1
  name: cluster-image-registry-operator
  resourceVersion: "63793"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/cluster-image-registry-operator
  uid: 8e3136c9-2381-11e9-97e9-0a8b3de1edd6
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-01-29T06:18:31Z
    message: Deployment is being deleted
    status: "False"
    type: Available
  - lastTransitionTime: 2019-01-29T06:18:30Z
    message: registry is being removed
    status: "True"
    type: Progressing
  - lastTransitionTime: 2019-01-29T04:51:37Z
    status: "False"
    type: Failing
  extension: null
  version: v4.0.0-0.148.0.0-dirty
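
For reference, a sketch (not necessarily the exact commands used here) of switching the image registry operator to Removed for this comparison, assuming it is configured through the configs.imageregistry.operator.openshift.io resource named cluster:

$ # hypothetical; the config resource name is an assumption
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Removed"}}'
$ oc get clusteroperator cluster-image-registry-operator -o yaml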

Comment 1 Gabe Montero 2019-02-06 16:58:21 UTC
OK, with the latest samples operator defaulting to rhel, I modified the repro steps to:

1. Create the samples-registry-credentials secret under openshift-cluster-samples-operator.
2. Set the samples operator to Removed, then set it back to Managed and change the samples registry to registry.redhat.io (see the sketch below).
3. After all imagestream imports succeed, set the samples operator to Removed again.
4. Check the openshift-cluster-samples-operator CVO status.
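
A sketch of step 2 as modified here, assuming the samples config exposes a spec.samplesRegistry field alongside spec.managementState (field names are assumptions):

$ oc patch configs.samples.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Removed"}}'
$ oc patch configs.samples.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed","samplesRegistry":"registry.redhat.io"}}'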


Now, the CVO status showing removed means that Available is False. There will be no other messages about the samples operator being in the Removed state.

That finer-grained detail has been deemed unnecessary for the CVO status.

So Available being False, as it appeared in XiuJuan's description, is the intended indication.

That said, I also still see the prior messages for Progressing and Failing that XiuJuan saw, and I agree those should be cleaned up.

I'll start working on a change for that.

Comment 2 Gabe Montero 2019-02-06 17:23:03 UTC
OK, verified the fix locally. PR https://github.com/openshift/cluster-samples-operator/pull/93 is up.

Comment 3 Gabe Montero 2019-02-07 14:04:04 UTC
PR has merged

Comment 4 XiuJuan Wang 2019-02-14 09:51:16 UTC
1. Create the samples-registry-credentials secret under openshift-cluster-samples-operator (since the operator now watches the core pull secret under kube-system, this step can be ignored).
2. Set managementState to Removed, then set it back to Managed and change the samples registry to registry.redhat.io.
3. After all imagestream imports succeed, set the samples operator to Removed again.
4. Check the openshift-cluster-samples-operator CVO status (a spot-check of the conditions is sketched below):
Available -> False
Progressing -> False, without error
Failing -> False, without error
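
One way to spot-check the conditions, assuming the standard ClusterOperator status layout (the jsonpath shown is an illustration, not necessarily the exact command used for verification):

$ oc get clusteroperator openshift-cluster-samples-operator -o jsonpath='{range .status.conditions[*]}{.type}={.status} {.message}{"\n"}{end}'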

$ oc get clusterversion 
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.nightly-2019-02-13-204401   True        False         112m      Cluster version is 4.0.0-0.nightly-2019-02-13-204401

Comment 7 errata-xmlrpc 2019-06-04 10:42:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

