+++ This bug was initially created as a clone of Bug #1719454 +++

Description of problem:

If a cluster operator such as the marketplace operator has a RelatedObject like the following:

    {
        Group:     "operators.coreos.com",
        Resource:  "OperatorSource",
        Namespace: "openshift-marketplace",
    }

running the must-gather tool does not collect all the CRs of that kind. This issue extends to all non-core kinds.

Version-Release number of selected component (if applicable):
OpenShift 4.1

How reproducible:
Always

Steps to Reproduce:
1. Add the object in the description to the RelatedObjects field in the marketplace operator's ClusterOperator CR
2. Run "openshift-must-gather inspect clusteroperator/marketplace"

Actual results:
No resources of that kind are collected.

Expected results:
All the resources of that kind in the namespace should be collected.
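For reference, the RelatedObject entry above is shaped like the ObjectReference entries in a ClusterOperator's status.relatedObjects. The sketch below is a minimal, self-contained illustration of the Kind-vs-resource mismatch at the heart of this bug: the struct is reproduced here rather than imported from openshift/api, the listPath helper and its naive lowercase-plus-"s" pluralization are hypothetical (the real tool uses API discovery), and the v1 API version is assumed.

```go
package main

import (
	"fmt"
	"strings"
)

// ObjectReference mirrors the shape of the relatedObjects entries in a
// ClusterOperator's status (fields copied from the description above).
type ObjectReference struct {
	Group     string
	Resource  string
	Namespace string
}

// listPath is a hypothetical helper: it builds the API path a gatherer
// could use to list all objects of the referenced kind in the namespace.
// Naive lowercasing + "s" only illustrates that "OperatorSource" (a Kind)
// must become "operatorsources" (a resource) before it is listable;
// the API version "v1" is assumed for illustration.
func listPath(ref ObjectReference) string {
	resource := strings.ToLower(ref.Resource)
	if !strings.HasSuffix(resource, "s") {
		resource += "s" // naive pluralization, an assumption
	}
	return fmt.Sprintf("/apis/%s/v1/namespaces/%s/%s",
		ref.Group, ref.Namespace, resource)
}

func main() {
	ref := ObjectReference{
		Group:     "operators.coreos.com",
		Resource:  "OperatorSource",
		Namespace: "openshift-marketplace",
	}
	fmt.Println(listPath(ref))
}
```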
Blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1726583.

When I try to verify, I hit the following error:

[root@dhcp-140-138 ~]# ./openshift-must-gather inspect clusteroperator/marketplace
2019/07/10 18:13:22 Gathering config.openshift.io resource data...
2019/07/10 18:13:27 Gathering kubeapiserver.operator.openshift.io resource data...
2019/07/10 18:13:27 Gathering cluster operator resource data...
2019/07/10 18:13:27 Gathering related object reference information for ClusterOperator "marketplace"...
2019/07/10 18:13:27 Found related object "namespaces/openshift-marketplace" for ClusterOperator "marketplace"...
2019/07/10 18:13:27 Found related object "OperatorSource.operators.coreos.com" for ClusterOperator "marketplace"...
2019/07/10 18:13:27 Found related object "CatalogSourceConfig.operators.coreos.com" for ClusterOperator "marketplace"...
2019/07/10 18:13:27 Found related object "CatalogSource.operators.coreos.com" for ClusterOperator "marketplace"...
2019/07/10 18:13:27 Gathering data for ns/openshift-marketplace...
2019/07/10 18:13:27 Collecting resources for namespace "openshift-marketplace"...
2019/07/10 18:13:28 Gathering pod data for namespace "openshift-marketplace"...
2019/07/10 18:13:28 Gathering data for pod "certified-operators-7679cf57f6-tmbs8"
2019/07/10 18:13:28 Skipping container data collection for pod "certified-operators-7679cf57f6-tmbs8": Pod not running
2019/07/10 18:13:28 Gathering data for pod "community-operators-8558b56f9c-vj86w"
2019/07/10 18:13:28 Skipping container data collection for pod "community-operators-8558b56f9c-vj86w": Pod not running
2019/07/10 18:13:28 Gathering data for pod "marketplace-operator-5cc7b564c4-btfmz"
2019/07/10 18:13:29 Unable to gather previous container logs: previous terminated container "marketplace-operator" in pod "marketplace-operator-5cc7b564c4-btfmz" not found
E0710 18:13:30.781050   17764 portforward.go:331] an error occurred forwarding 37587 -> 60000: error forwarding port 60000 to pod fb877ec23f454dd56bbd67ba1613cad91f0d921475891f509b5d16f00b39fc68, uid : exit status 1: 2019/07/10 10:13:30 socat[44043] E connect(5, AF=2 127.0.0.1:60000, 16): Connection refused
E0710 18:13:32.360889   17764 portforward.go:331] an error occurred forwarding 37587 -> 60000: error forwarding port 60000 to pod fb877ec23f454dd56bbd67ba1613cad91f0d921475891f509b5d16f00b39fc68, uid : exit status 1: 2019/07/10 10:13:32 socat[44053] E connect(5, AF=2 127.0.0.1:60000, 16): Connection refused
E0710 18:13:33.954772   17764 portforward.go:331] an error occurred forwarding 37587 -> 60000: error forwarding port 60000 to pod fb877ec23f454dd56bbd67ba1613cad91f0d921475891f509b5d16f00b39fc68, uid : exit status 1: 2019/07/10 10:13:33 socat[44152] E connect(5, AF=2 127.0.0.1:60000, 16): Connection refused
2019/07/10 18:13:33 Gathering data for pod "redhat-operators-65f58cc567-bdqpf"
2019/07/10 18:13:33 Skipping container data collection for pod "redhat-operators-65f58cc567-bdqpf": Pod not running
Error: one or more errors ocurred while gathering pod-specific data for namespace: openshift-marketplace

one or more errors ocurred while gathering container data for pod marketplace-operator-5cc7b564c4-btfmz: [unable to gather container /healthz: Get https://localhost:37587/: EOF, unable to gather container /version: Get https://localhost:37587/: EOF, unable to gather container /metrics: Get https://localhost:37587/metrics: EOF]
I don't see why the "errors" you are seeing are blocking you from testing that the non-core CRs are being collected. Please see https://bugzilla.redhat.com/show_bug.cgi?id=1717439#c5
OK, please ignore the error.

Env:
Payload: 4.2.0-0.nightly-2019-07-09-222901

Check the openshift-marketplace:

[root@dhcp-140-138 must-gather.local.8012847555193219714]# tree namespaces/openshift-marketplace/
namespaces/openshift-marketplace/
├── apps
│   ├── daemonsets.yaml
│   ├── deployments.yaml
│   ├── replicasets.yaml
│   └── statefulsets.yaml
├── apps.openshift.io
│   └── deploymentconfigs.yaml
├── autoscaling
│   └── horizontalpodautoscalers.yaml
├── batch
│   ├── cronjobs.yaml
│   └── jobs.yaml
├── build.openshift.io
│   ├── buildconfigs.yaml
│   └── builds.yaml
├── core
│   ├── configmaps.yaml
│   ├── events.yaml
│   ├── pods.yaml
│   ├── replicationcontrollers.yaml
│   ├── secrets.yaml
│   └── services.yaml
├── image.openshift.io
│   └── imagestreams.yaml
├── openshift-marketplace.yaml
├── operators.coreos.com
│   ├── catalogsources
│   │   ├── certified-operators.yaml
│   │   ├── community-operators.yaml
│   │   └── redhat-operators.yaml
│   └── operatorsources
│       ├── certified-operators.yaml
│       ├── community-operators.yaml
│       └── redhat-operators.yaml
├── pods
│   ├── certified-operators-7679cf57f6-tmbs8
│   │   └── certified-operators-7679cf57f6-tmbs8.yaml
│   ├── community-operators-6977d545-wfcgd
│   │   └── community-operators-6977d545-wfcgd.yaml
│   ├── marketplace-operator-5cc7b564c4-btfmz
│   │   ├── marketplace-operator
│   │   │   └── marketplace-operator
│   │   │       ├── healthz
│   │   │       └── logs
│   │   │           ├── current.log
│   │   │           └── previous.log
│   │   └── marketplace-operator-5cc7b564c4-btfmz.yaml
│   └── redhat-operators-65f58cc567-bdqpf
│       └── redhat-operators-65f58cc567-bdqpf.yaml
└── route.openshift.io
    └── routes.yaml

20 directories, 31 files
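The tree above shows the non-core operators.coreos.com CRs being collected under their API-group directory. A small shell sketch for spot-checking a dump like this; the count_crs helper name is illustrative and not part of must-gather, and the layout assumed is the one shown in the tree output:

```shell
#!/bin/sh
# count_crs: count collected CR manifests under one API-group directory
# of a must-gather namespace dump (layout as in the tree output above).
#   $1 = namespace dump root (e.g. namespaces/openshift-marketplace)
#   $2 = API group directory (e.g. operators.coreos.com)
count_crs() {
  find "$1/$2" -name '*.yaml' 2>/dev/null | wc -l
}
```

For the dump above, "count_crs namespaces/openshift-marketplace operators.coreos.com" would report 6 (three catalogsources plus three operatorsources), confirming the non-core kinds were gathered.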
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922