Description of problem:

oc get clusteroperators -o wide works fine - it shows the clusteroperator status. However, if you set a watch, the wide output is ignored:

# oc get clusteroperators -o wide -w
NAME                                  AGE
cluster-autoscaler                    28m
cluster-storage-operator              26m
console                               26m
dns                                   41m
image-registry                        26m
ingress                               26m
kube-apiserver                        38m
kube-controller-manager               34m
kube-scheduler                        36m
machine-api                           29m
machine-config                        29m
marketplace-operator                  26m
monitoring                            26m
network                               42m
node-tuning                           26m
openshift-apiserver                   28m
openshift-authentication              30m
openshift-cloud-credential-operator   29m
openshift-controller-manager          27m
openshift-samples                     26m
operator-lifecycle-manager            28m

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-04-114357

How reproducible:
Always
Upping the log levels while replicating this problem, I see instances of:

I0402 14:29:59.469792   32068 get.go:707] Unable to convert *unstructured.Unstructured to config.openshift.io/__internal: no kind "ClusterOperator" is registered for version "config.openshift.io/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"

which points to a failed conversion to the internal version here: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/get/get.go#L703. Looking further into this.
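(Illustration only, not from the original report.) The scheme named in that message, k8s.io/kubernetes/pkg/api/legacyscheme, only has the built-in Kubernetes kinds registered, so any attempt to resolve or convert the OpenShift ClusterOperator kind through it fails. A minimal Go sketch that reproduces the same error, assuming only k8s.io/apimachinery is on the module path; the empty scheme below stands in for legacyscheme:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// A scheme with no OpenShift types registered, standing in for
	// legacyscheme, which only knows the built-in Kubernetes kinds.
	scheme := runtime.NewScheme()

	// GroupVersionKind taken from the log message above.
	gvk := schema.GroupVersionKind{
		Group:   "config.openshift.io",
		Version: "v1",
		Kind:    "ClusterOperator",
	}

	// Asking the scheme for this kind (a prerequisite for converting an
	// *unstructured.Unstructured to an internal version) fails with the
	// same "no kind ... is registered" error seen in the log above.
	if _, err := scheme.New(gvk); err != nil {
		fmt.Println(err)
	}
}

Running this prints the same 'no kind "ClusterOperator" is registered for version "config.openshift.io/v1"' error (modulo the scheme name), which matches the fallback to the default NAME/AGE columns seen in the watch output.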
Also note that this looks similar to, and is possibly a duplicate of, https://bugzilla.redhat.com/show_bug.cgi?id=1690263
clusterversion has the issue too.
Also reported in upstream: https://github.com/kubernetes/kubernetes/issues/66538
*** Bug 1690263 has been marked as a duplicate of this bug. ***
A fix for this has been merged into upstream 1.15: https://github.com/kubernetes/kubernetes/pull/76161
Discussed this during the blocker bug call and with Clayton: we'll try to backport this when we bump to k8s 1.14, not sooner. Thus I'm moving the target to 4.2 for now.
This will be fixed in https://github.com/openshift/kubernetes/pull/74 and https://github.com/openshift/oc/pull/47
Merged in https://github.com/openshift/oc/pull/63, moving to QA.
Checked in oc extracted from:
oc adm release extract --command=oc registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-08-20-213632

./oc version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v4.2.0-alpha.0-15-g3a38de47", GitCommit:"3a38de47543c64c8b48bcf20e050ae9cadf5f82e", GitTreeState:"clean", BuildDate:"2019-08-20T03:08:41Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0+1984a1a", GitCommit:"1984a1a", GitTreeState:"clean", BuildDate:"2019-08-20T15:21:51Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}
OpenShift Version: 4.2.0-0.nightly-2019-08-20-213632

./oc get clusteroperators -w
NAME               AGE
authentication     80m
cloud-credential   84m
...

./oc get clusteroperators -o wide -w
NAME               AGE
authentication     81m
cloud-credential   85m

# GitCommit:"3a38de47543c64c8b48bcf20e050ae9cadf5f82e" includes the fix, but the above still has the issue:
git log --date=local 3a38de4 --pretty="%h %an %cd - %s" | grep "#63"
64b971674 OpenShift Merge Robot Tue Aug 20 04:02:45 2019 - Merge pull request #63 from soltysh/bump_k8s
My bad, that previous bump didn't have them. I've opened:
- https://github.com/openshift/kubernetes-apimachinery/pull/1
- https://github.com/openshift/kubernetes/pull/78
and will bump oc once these merge.
The changes were picked up in https://github.com/openshift/oc/pull/73, moving to MODIFIED.
Checked with oc extracted from:
oc adm release extract --command=oc registry.svc.ci.openshift.org/ocp/release:4.2.0-0.nightly-2019-08-30-015546

[root@dhcp-140-138 oc-client]# ./oc version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"openshift-clients-4.2.0-201908281419", GitCommit:"372c08a6a95e4ecc7c61dd16052fc12f6bca376c", GitTreeState:"clean", BuildDate:"2019-08-28T19:10:30Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0+a91f2ac", GitCommit:"a91f2ac", GitTreeState:"clean", BuildDate:"2019-08-28T23:03:31Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
OpenShift Version: 4.2.0-0.nightly-2019-08-29-170426

I still could reproduce the issue.

[root@dhcp-140-138 oc-client]# ./oc get co -o wide -w
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h18m
cloud-credential                           4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h29m
cluster-autoscaler                         4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h23m
console                                    4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h21m
dns                                        4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h29m
image-registry                             4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h23m
ingress                                    4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h23m
insights                                   4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h29m
kube-apiserver                             4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h27m
kube-controller-manager                    4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h26m
kube-scheduler                             4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h26m
machine-api                                4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h29m
machine-config                             4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h29m
marketplace                                4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h23m
monitoring                                 4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h22m
network                                    4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h28m
node-tuning                                4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h25m
openshift-apiserver                        4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h25m
openshift-controller-manager               4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h26m
openshift-samples                          4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h18m
operator-lifecycle-manager                 4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h28m
operator-lifecycle-manager-catalog         4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h29m
operator-lifecycle-manager-packageserver   4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h25m
service-ca                                 4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h29m
service-catalog-apiserver                  4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h26m
service-catalog-controller-manager         4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h25m
storage                                    4.2.0-0.nightly-2019-08-29-170426   True        False         False      3h24m
NAME         AGE
monitoring   3h25m
monitoring   3h25m
monitoring   3h25m
Confirming comment 14. Originally, the initial display after issuing get with -o wide -w would not show the details. In 4.2.0-0.nightly-2019-08-29-170426 the initial display does show the details, but updates triggered by the watch do not contain the details.

Another example:

# oc get clusterversion -w -o wide
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-08-15-232721   True        True          6m6s    Working towards 4.2.0-0.nightly-2019-08-29-170426: 26% complete, waiting on machine-api
NAME      AGE
version   14d
version   14d
version   14d
version   14d
version   14d
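(An aside to illustrate the mechanism; the sketch below is mine and not part of the verification above, and APISERVER/TOKEN are placeholders, e.g. taken from oc whoami --show-server and oc whoami -t.) For human-readable output such as -o wide, the client relies on server-side printing: it asks the apiserver to render each object as a meta.k8s.io Table. On a watch, every event needs the same Table rendering; when an event comes back without it, the client can only print the default NAME/AGE columns, which is the behaviour shown above. A small Go sketch that issues such a watch request directly, so you can inspect what the server actually sends per event:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// APISERVER and TOKEN are placeholders supplied via the environment.
	apiServer := os.Getenv("APISERVER")
	token := os.Getenv("TOKEN")

	req, err := http.NewRequest("GET",
		apiServer+"/apis/config.openshift.io/v1/clusteroperators?watch=true", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	// Ask for server-side printing (Table rendering), as kubectl/oc of that
	// era did for human-readable output. Whether each watch event in the
	// response carries Table rows (with the wide columns) or falls back to
	// plain objects is the difference between fixed and broken behaviour.
	req.Header.Set("Accept",
		"application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json")

	// InsecureSkipVerify only to keep the sketch short; not for real tooling.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	fmt.Println("HTTP", resp.Status)
	io.Copy(os.Stdout, resp.Body) // stream of watch events, one JSON object per line
}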
This fix requires a significant change on the server that is not going to be delivered in 4.2; moving to 4.3.
This will be fixed when the k8s 1.16 bump lands via https://github.com/openshift/oc/pull/102
Confirmed with the latest version; the issue has been fixed:

oc version
Client Version: v4.3.0
Server Version: 4.3.0-0.ci-2019-10-09-222432
Kubernetes Version: v1.16.0-beta.2+a696b23

[root@dhcp-140-138 ~]# oc get co -o wide -w
NAME                                       VERSION                        AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.0-0.ci-2019-10-09-222432   True        False         False      5h42m
cloud-credential                           4.3.0-0.ci-2019-10-09-222432   True        False         False      5h57m
cluster-autoscaler                         4.3.0-0.ci-2019-10-09-222432   True        False         False      5h48m
console                                    4.3.0-0.ci-2019-10-09-222432   True        False         False      5h44m
dns                                        4.3.0-0.ci-2019-10-09-222432   True        False         False      5h57m
image-registry                             4.3.0-0.ci-2019-10-09-222432   True        False         False      5h49m
ingress                                    4.3.0-0.ci-2019-10-09-222432   True        False         False      5h49m
insights                                   4.3.0-0.ci-2019-10-09-222432   True        False         False      5h57m
kube-apiserver                             4.3.0-0.ci-2019-10-09-222432   True        False         False      5h55m
kube-controller-manager                    4.3.0-0.ci-2019-10-09-222432   True        False         False      5h56m
kube-scheduler                             4.3.0-0.ci-2019-10-09-222432   True        False         False      5h55m
machine-api                                4.3.0-0.ci-2019-10-09-222432   True        False         False      5h56m
machine-config                             4.3.0-0.ci-2019-10-09-222432   True        False         False      5h56m
marketplace                                4.3.0-0.ci-2019-10-09-222432   True        False         False      5h48m
monitoring                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h47m
network                                    4.3.0-0.ci-2019-10-09-222432   True        False         False      5h56m
node-tuning                                4.3.0-0.ci-2019-10-09-222432   True        False         False      5h53m
openshift-apiserver                        4.3.0-0.ci-2019-10-09-222432   True        False         False      5h53m
openshift-controller-manager               4.3.0-0.ci-2019-10-09-222432   True        False         False      5h57m
openshift-samples                          4.3.0-0.ci-2019-10-09-222432   True        False         False      5h47m
operator-lifecycle-manager                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h56m
operator-lifecycle-manager-catalog         4.3.0-0.ci-2019-10-09-222432   True        False         False      5h56m
operator-lifecycle-manager-packageserver   4.3.0-0.ci-2019-10-09-222432   True        False         False      5h55m
service-ca                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h57m
service-catalog-apiserver                  4.3.0-0.ci-2019-10-09-222432   True        False         False      5h53m
service-catalog-controller-manager         4.3.0-0.ci-2019-10-09-222432   True        False         False      5h53m
storage                                    4.3.0-0.ci-2019-10-09-222432   True        False         False      5h49m
monitoring                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h48m
monitoring                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h48m
monitoring                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h48m
monitoring                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h49m
monitoring                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h49m
monitoring                                 4.3.0-0.ci-2019-10-09-222432   True        False         False      5h49m
machine-api                                4.3.0-0.ci-2019-10-09-222432   True        False         False      5h59m
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0062