Bug 1669992 - [marketplace] The default catalogsource & cm can’t be created by marketplace
Summary: [marketplace] The default catalogsource & cm can’t be created by marketplace
Keywords:
Status: CLOSED DUPLICATE of bug 1666225
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Aravindh Puthiyaparambil
QA Contact: Fan Jia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-28 10:10 UTC by Fan Jia
Modified: 2019-03-12 14:25 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-30 05:40:21 UTC
Target Upstream Version:
Embargoed:



Description Fan Jia 2019-01-28 10:10:36 UTC
Description of problem:
The default catalogsource & cm are not created by the marketplace operator.

Version-Release number of selected component (if applicable):
clusterversion:4.0.0-0.2
marketplace image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b39fe61df5582a5de0de4917dd61546be75d197962fbb46063ae407de9444d5f
olm image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:279937a595bea87b70e5618783b0df6022aac31a9fba7bb104bb88e1a3f6bf79
olm commit:
io.openshift.build.commit.id=1e295784b30a7d54eb2db82b99a9c6307133ebbf

How reproducible: always

Steps to Reproduce:
1. Install the cluster

Actual results:
1. No default catalogsource & cm are created, and the marketplace operator log contains error messages:

#oc logs marketplace-operator-5878c7986c-f8kv4
E0128 09:07:48.516729       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=43, ErrCode=NO_ERROR, debug=""
E0128 09:07:48.516738       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=43, ErrCode=NO_ERROR, debug=""

Expected results:
1. The default catalogsource & cm are created, and there are no error messages in the marketplace operator's pod:
#oc get catalogsource -n openshift-marketplace
NAME                  NAME                  TYPE       PUBLISHER   AGE
certified-operators   Certified Operators   internal   Red Hat     6h
community-operators   Community Operators   internal   Red Hat     6h
redhat-operators      Red Hat Operators     internal   Red Hat     6h
#oc get cm -n openshift-marketplace
NAME                  DATA      AGE
certified-operators   3         6h
community-operators   3         6h
redhat-operators      3         6h

Additional info:

Comment 1 Aravindh Puthiyaparambil 2019-01-28 15:20:38 UTC
Jia, this is happening because the CVO is processing other higher-priority operators and has not gotten to handling Marketplace yet. There is not much we can do to fix this on the Marketplace side.
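(A quick way to check whether the CVO has reconciled Marketplace yet, assuming the operator registers a clusteroperator named "marketplace":
#oc get clusterversion
#oc get clusteroperator marketplace
If the marketplace clusteroperator is missing or not yet Available, the CVO simply has not gotten to it.)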

Comment 2 Aravindh Puthiyaparambil 2019-01-28 17:56:48 UTC
> E0128 09:07:48.516729       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=43, ErrCode=NO_ERROR, debug=""
> E0128 09:07:48.516738       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=43, ErrCode=NO_ERROR, debug=""

This indicates that something basic in the cluster is not working. For Marketplace to work, basic networking, DNS, the apiserver, etc. need to be functioning.
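A minimal set of checks for that, assuming a standard 4.x cluster (exact resource names may vary):
#oc get clusteroperators                  (network, dns, kube-apiserver, etc. should report Available)
#oc get --raw /healthz                    (quick apiserver health probe)
#oc get pods -n openshift-marketplace     (confirm the marketplace operator pod is running)
If any of these fail, the GOAWAY/watch errors above are a symptom of the broader cluster problem rather than a Marketplace bug.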

Comment 3 Jian Zhang 2019-01-29 02:14:00 UTC
The same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1666225#c11

Comment 4 Aravindh Puthiyaparambil 2019-01-29 14:37:57 UTC
Jian Zhang, Fan Jia, like I mentioned in my previous comment, the root cause of this issue is not the Marketplace operator; some basic functionality in the cluster is not working. There is nothing we can do on the Marketplace side regarding this.

Comment 5 Jian Zhang 2019-01-30 05:40:21 UTC
Aravindh,

Many thanks for your quick response. Yeah, I saw a similar fix PR in Kubernetes: https://github.com/kubernetes/kubernetes/pull/73277.
We have been hitting this issue more often recently. I already @'d Clayton on bug 1666225, but there has been no response so far. Do you know who is responsible for this kind of issue?
I'd like to label this bug as a duplicate of bug 1666225 for now.

*** This bug has been marked as a duplicate of bug 1666225 ***

Comment 6 Aravindh Puthiyaparambil 2019-01-30 15:13:30 UTC
I am not sure which team is responsible for this sort of bug. I suggest posting on #4-dev-triage.

