Description of problem:
Installation of an addon is stuck in the installing phase. The redhat-operators catalog source is stuck in TRANSIENT_FAILURE.

Version-Release number of selected component (if applicable):
registry.redhat.io/redhat/redhat-operator-index:v4.10

How reproducible:
2/2

Steps to Reproduce:
1. Install the provider:
   rosa create service --type ocs-provider-qe --name <cluster name> --machine-cidr 10.0.0.0/16 --size 20 --onboarding-validation-key <key> --subnet-ids <subnet ids> --notification-email-0 <mail> --notification-email-1 <mail> --notification-email-2 <mail> --region <region>
2. Wait for some time (checked after 4 hours).
3. Check the status of the addons:
   rosa list addons -c <cluster name>
4. Check the catalog source:
   oc get catalogsource -n openshift-marketplace redhat-operators -o yaml

Actual results:
The ocs-provider-qe addon is still installing. The catalog source is in TRANSIENT_FAILURE.

Expected results:
The addon should be installed.

Additional info:
$ oc get catalogsource -n openshift-marketplace redhat-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  annotations:
    operatorframework.io/managed-by: marketplace-operator
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
  creationTimestamp: "2022-06-22T09:33:42Z"
  generation: 1
  managedFields:
  - apiVersion: operators.coreos.com/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:operatorframework.io/managed-by: {}
          f:target.workload.openshift.io/management: {}
      f:spec:
        .: {}
        f:displayName: {}
        f:grpcPodConfig:
          .: {}
          f:nodeSelector:
            .: {}
            f:kubernetes.io/os: {}
            f:node-role.kubernetes.io/master: {}
          f:priorityClassName: {}
          f:tolerations: {}
        f:icon:
          .: {}
          f:base64data: {}
          f:mediatype: {}
        f:image: {}
        f:priority: {}
        f:publisher: {}
        f:sourceType: {}
        f:updateStrategy:
          .: {}
          f:registryPoll:
            .: {}
            f:interval: {}
    manager: marketplace-operator
    operation: Update
    time: "2022-06-22T09:33:42Z"
  - apiVersion: operators.coreos.com/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:connectionState:
          .: {}
          f:address: {}
          f:lastConnect: {}
          f:lastObservedState: {}
        f:latestImageRegistryPoll: {}
        f:registryService:
          .: {}
          f:createdAt: {}
          f:port: {}
          f:protocol: {}
          f:serviceName: {}
          f:serviceNamespace: {}
    manager: catalog
    operation: Update
    subresource: status
    time: "2022-06-22T09:44:12Z"
  name: redhat-operators
  namespace: openshift-marketplace
  resourceVersion: "126807"
  uid: 55572d42-0055-49ad-aae3-c9a288359170
spec:
  displayName: Red Hat Operators
  grpcPodConfig:
    nodeSelector:
      kubernetes.io/os: linux
      node-role.kubernetes.io/master: ""
    priorityClassName: system-cluster-critical
    tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 120
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 120
  icon:
    base64data: ""
    mediatype: ""
  image: registry.redhat.io/redhat/redhat-operator-index:v4.10
  priority: -100
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s
status:
  connectionState:
    address: redhat-operators.openshift-marketplace.svc:50051
    lastConnect: "2022-06-22T13:22:07Z"
    lastObservedState: TRANSIENT_FAILURE
  latestImageRegistryPoll: "2022-06-22T10:10:17Z"
  registryService:
    createdAt: "2022-06-22T09:33:42Z"
    port: "50051"
    protocol: grpc
    serviceName: redhat-operators
    serviceNamespace: openshift-marketplace
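For context, lastObservedState: TRANSIENT_FAILURE means the catalog operator's gRPC client cannot hold a stable connection to the redhat-operators registry pod, which usually points at that pod crash-looping or failing to pull registry.redhat.io/redhat/redhat-operator-index:v4.10. The follow-up checks below were not captured in this report; they are only a minimal sketch of how one would typically narrow this down, and the olm.catalogSource pod label is assumed:

$ # List the registry pod backing this catalog source (label assumed)
$ oc get pods -n openshift-marketplace -l olm.catalogSource=redhat-operators
$ # Look for image pull or scheduling problems on that pod
$ oc describe pod -n openshift-marketplace -l olm.catalogSource=redhat-operators
$ # Inspect the registry container logs for gRPC/serve errors
$ oc logs -n openshift-marketplace -l olm.catalogSource=redhat-operators
$ # Recent namespace events, newest last
$ oc get events -n openshift-marketplace --sort-by=.lastTimestamp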
Not seen in recent runs. --> VERIFIED with version: deployer 2.0.10
Closing this bug as fixed in v2.0.10 and tested by QE.