Description of problem:
The opsrc's `registryNamespace` can be changed to the empty string "" successfully; the subsequent delete of the opsrc then blocks.

Version-Release number of selected component (if applicable):
cv: 4.0.0-0.nightly-2019-03-25-180911

How reproducible:
always

Steps to Reproduce:
1. Create an opsrc 'test' successfully with a `registryNamespace` of "redhat-operators":
#oc create -f test.yaml
###############
apiVersion: "operators.coreos.com/v1"
kind: "OperatorSource"
metadata:
  name: "testkey"
  namespace: "openshift-marketplace"
spec:
  type: appregistry
  endpoint: "https://quay.io/cnr"
  registryNamespace: "redhat-operators"
###############
2. Edit the `registryNamespace` to the empty string "".
3. Delete the opsrc "test":
#oc delete opsrc test

Actual results:
1. After step 2, the opsrc 'test' still reports success:
#oc describe opsrc test
Name:         testkey
Namespace:    openshift-marketplace
Labels:       opsrc-provider=testkey
Annotations:  <none>
API Version:  operators.coreos.com/v1
Kind:         OperatorSource
Metadata:
  Creation Timestamp:  2019-03-28T07:24:45Z
  Finalizers:
    finalizer.operatorsources.operators.coreos.com
  Generation:        1
  Resource Version:  179305
  Self Link:         /apis/operators.coreos.com/v1/namespaces/openshift-marketplace/operatorsources/testkey
  UID:               8fa533b7-512a-11e9-9292-02286da6d306
Spec:
  Authorization Token:
    Secret Name:       mysecret
  Display Name:        testkey
  Endpoint:            https://quay.io/cnr
  Publisher:           testkey
  Registry Namespace:
  Type:                appregistry
Status:
  Current Phase:
    Last Transition Time:  2019-03-28T07:25:38Z
    Last Update Time:      2019-03-28T07:25:38Z
    Phase:
      Message:  The object has been successfully reconciled
      Name:     Succeeded
  Packages:  amq-streams
Events:      <none>

2. After step 3, the delete operation blocks without any error message.

Expected results:
1. The `registryNamespace` should not be allowed to be "" (empty string).

Additional info:
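For step 2, a non-interactive way to blank the field (instead of `oc edit`) would be a merge patch; this command is an illustrative sketch, not taken from the original report:

```
oc patch opsrc test -n openshift-marketplace --type merge -p '{"spec":{"registryNamespace":""}}'
```

The expected behaviour (rejecting an empty `registryNamespace` at admission time) could be enforced with OpenAPI validation on the OperatorSource CRD. A minimal sketch, assuming the apiextensions.k8s.io/v1beta1 CRD schema format of that era; the actual CRD may differ:

```
# Hypothetical validation excerpt for the OperatorSource CRD (illustrative only)
spec:
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
          - registryNamespace
          properties:
            registryNamespace:
              type: string
              minLength: 1  # rejects the empty string on create and update
```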
Investigation:
I checked the API audit log and saw the following error:

```
ip-10-0-153-11.ec2.internal {"kind":"Event","apiVersion":"audit.k8s.io/v1beta1","metadata":{"creationTimestamp":"2019-03-28T19:25:30Z"},"level":"Metadata","timestamp":"2019-03-28T19:25:30Z","auditID":"d3995eb8-4448-4cbd-a6cd-202f0a414fad","stage":"ResponseComplete","requestURI":"/apis/operators.coreos.com/v1/namespaces/openshift-marketplace/operatorsources/test","verb":"update","user":{"username":"kube:admin","groups":["system:cluster-admins","system:authenticated"],"extra":{"scopes.authorization.openshift.io":["user:full"]}},"sourceIPs":["10.0.72.114"],"userAgent":"main/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"operatorsources","namespace":"openshift-marketplace","name":"test","uid":"c83f183f-518a-11e9-90fd-0a5197e23be2","apiGroup":"operators.coreos.com","apiVersion":"v1","resourceVersion":"157495"},"responseStatus":{"metadata":{},"status":"Failure","reason":"Invalid","code":422},"requestReceivedTimestamp":"2019-03-28T19:25:30.451800Z","stageTimestamp":"2019-03-28T19:25:30.453106Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"cluster-admins\" of ClusterRole \"cluster-admin\" to Group \"system:cluster-admins\""}}
```

This indicates that the update to the object keeps failing with a 422 Invalid response, which is why we see no change to its status: the operator is stuck in the reconcile loop and keeps retrying. After step 2, the client in the marketplace operator can no longer update the object, since `RegistryNamespace` is a required field. We also do not log the update error originating from the reconciliation logic. We should log the error here - https://github.com/operator-framework/operator-marketplace/blob/master/pkg/operatorsource/handler.go#L110

Workaround: `oc edit` the opsrc and set `registryNamespace` back to a valid value.

Corresponding Jira ticket - https://jira.coreos.com/browse/MKTPLC-329
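A minimal sketch of the missing error logging, assuming the controller-runtime client used by operators of that era; the helper name `updateWithLogging` and the logrus logger are illustrative, not the actual handler code:

```go
package operatorsource

import (
	"context"

	"github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateWithLogging is an illustrative sketch (not the actual marketplace
// code) of the logging the linked handler.go line is missing: when the update
// is rejected -- here with a 422 Invalid because RegistryNamespace is
// required -- the error should be surfaced in the operator log instead of
// being silently dropped while the reconcile loop retries.
func updateWithLogging(ctx context.Context, c client.Client, obj runtime.Object) error {
	if err := c.Update(ctx, obj); err != nil {
		logrus.Errorf("failed to update OperatorSource: %v", err)
		return err
	}
	return nil
}
```

With the error logged, the 422 Invalid seen in the audit log above would also appear in the marketplace operator's own logs, making the stuck delete much easier to diagnose.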
PR to fix the logging issue - https://github.com/operator-framework/operator-marketplace/pull/180
The latest nightly build (4.1.0-0.nightly-2019-05-08-220123) doesn't include the fix PR; will verify this bug once a build includes it.
The latest nightly build (4.1.0-0.nightly-2019-05-09-041615) doesn't include the fix PR; will verify this bug once a build includes it.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758