https://github.com/openshift/installer/pull/2286 and https://github.com/openshift/cluster-network-operator/pull/304 should fix this bug for AWS clusters. These PRs add the node subdomain, which the API server uses when fetching logs. Is there a standard naming convention for nodes in vSphere and bare-metal clusters?
https://github.com/openshift/cluster-kube-apiserver-operator/pull/559 is being considered to address this bug for all provider types.
https://github.com/openshift/cluster-kube-apiserver-operator/pull/559 has merged and should fix this bug.
From https://bugzilla.redhat.com/show_bug.cgi?id=1748271 it appears that even after PR 559, accessing container logs can still be an issue for bare-metal deployments. Machine IP addresses must be assigned from the install-config machineCIDR. You can verify the machineCIDR with:

$ oc get cm/cluster-config-v1 -n kube-system -o yaml

If your machines are not assigned addresses from machineCIDR, then you must manually add the machine IP addresses or networks to the noProxy field of proxy "cluster".
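The machineCIDR membership check described above can be sketched offline. This is a minimal illustration, not part of the fix: assuming you have copied each node's InternalIP out of `oc get nodes -o wide`, Python's standard ipaddress module can flag any address that falls outside the machineCIDR. The sample IPs below are illustrative.

```python
import ipaddress

def addrs_outside_cidr(node_ips, machine_cidr):
    """Return the node InternalIPs that fall outside machineCIDR."""
    net = ipaddress.ip_network(machine_cidr)
    return [ip for ip in node_ips if ipaddress.ip_address(ip) not in net]

# InternalIPs as reported by `oc get nodes -o wide` (illustrative values).
nodes = ["10.0.140.169", "10.0.142.49", "192.168.1.10"]
print(addrs_outside_cidr(nodes, "10.0.0.0/16"))  # -> ['192.168.1.10']
```

Any address printed by this check would need to be added to noProxy manually.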
To verify:

1. Check that the node InternalIP network (10.0.0.0/16 by default) is set in the install-config:

$ oc get cm/cluster-config-v1 -n kube-system -o yaml | grep machineCIDR
      machineCIDR: 10.0.0.0/16

2. Check that the InternalIP of each node is assigned from machineCIDR:

$ oc get nodes -o wide
NAME                                         STATUS   ROLES    AGE    VERSION             INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                   KERNEL-VERSION               CONTAINER-RUNTIME
ip-10-0-140-169.us-west-2.compute.internal   Ready    worker   100m   v1.14.0+99050efaa   10.0.140.169   <none>        Red Hat Enterprise Linux CoreOS 42.80.20190903.1 (Ootpa)   4.18.0-80.7.2.el8_0.x86_64   cri-o://1.14.10-0.8.dev.rhaos4.2.gitaf00350.el8
ip-10-0-142-49.us-west-2.compute.internal    Ready    master   108m   v1.14.0+99050efaa   10.0.142.49    <none>        Red Hat Enterprise Linux CoreOS 42.80.20190903.1 (Ootpa)   4.18.0-80.7.2.el8_0.x86_64   cri-o://1.14.10-0.8.dev.rhaos4.2.gitaf00350.el8
ip-10-0-146-109.us-west-2.compute.internal   Ready    worker   100m   v1.14.0+99050efaa   10.0.146.109   <none>        Red Hat Enterprise Linux CoreOS 42.80.20190903.1 (Ootpa)   4.18.0-80.7.2.el8_0.x86_64   cri-o://1.14.10-0.8.dev.rhaos4.2.gitaf00350.el8
ip-10-0-155-227.us-west-2.compute.internal   Ready    master   108m   v1.14.0+99050efaa   10.0.155.227   <none>        Red Hat Enterprise Linux CoreOS 42.80.20190903.1 (Ootpa)   4.18.0-80.7.2.el8_0.x86_64   cri-o://1.14.10-0.8.dev.rhaos4.2.gitaf00350.el8
ip-10-0-161-226.us-west-2.compute.internal   Ready    master   108m   v1.14.0+99050efaa   10.0.161.226   <none>        Red Hat Enterprise Linux CoreOS 42.80.20190903.1 (Ootpa)   4.18.0-80.7.2.el8_0.x86_64   cri-o://1.14.10-0.8.dev.rhaos4.2.gitaf00350.el8
ip-10-0-171-76.us-west-2.compute.internal    Ready    worker   95m    v1.14.0+99050efaa   10.0.171.76    <none>        Red Hat Enterprise Linux CoreOS 42.80.20190903.1 (Ootpa)   4.18.0-80.7.2.el8_0.x86_64   cri-o://1.14.10-0.8.dev.rhaos4.2.gitaf00350.el8

3. Verify that status.noProxy contains the machineCIDR:

$ oc get proxy/cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: "2019-09-03T20:32:34Z"
  generation: 1
  name: cluster
  resourceVersion: "432"
  selfLink: /apis/config.openshift.io/v1/proxies/cluster
  uid: f6149e65-ce89-11e9-b324-024e6a40328c
spec:
  httpProxy: http://jcallen:6cpbEH6uCepwEhNr2iB05ixP@52.73.102.120:3129
  httpsProxy: http://jcallen:6cpbEH6uCepwEhNr2iB05ixP@52.73.102.120:3129
  trustedCA:
    name: user-ca-bundle
status:
  httpProxy: http://jcallen:6cpbEH6uCepwEhNr2iB05ixP@52.73.102.120:3129
  httpsProxy: http://jcallen:6cpbEH6uCepwEhNr2iB05ixP@52.73.102.120:3129
  noProxy: .cluster.local,.svc,.us-west-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.latest-proxy.devcluster.openshift.com,api.latest-proxy.devcluster.openshift.com,etcd-0.latest-proxy.devcluster.openshift.com,etcd-1.latest-proxy.devcluster.openshift.com,etcd-2.latest-proxy.devcluster.openshift.com,localhost

If it doesn't, then follow one of the steps in https://github.com/openshift/openshift-docs/pull/16392#issuecomment-527558862
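Step 3 above can also be checked mechanically. A quick sketch, again illustrative only: parse the comma-separated noProxy string from the Proxy status and test whether any CIDR entry in it contains the machineCIDR, skipping hostname and domain-suffix entries.

```python
import ipaddress

def noproxy_covers_cidr(no_proxy, machine_cidr):
    """True if any CIDR entry in the noProxy string contains machineCIDR."""
    target = ipaddress.ip_network(machine_cidr)
    for entry in no_proxy.split(","):
        try:
            net = ipaddress.ip_network(entry.strip())
        except ValueError:
            continue  # skip hostnames/suffixes such as .cluster.local
        if net.version == target.version and target.subnet_of(net):
            return True
    return False

# Abbreviated noProxy value modeled on the status output above.
no_proxy = (".cluster.local,.svc,.us-west-2.compute.internal,10.0.0.0/16,"
            "10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost")
print(noproxy_covers_cidr(no_proxy, "10.0.0.0/16"))  # -> True
```

If this reports False for your cluster's machineCIDR, the proxy configuration needs to be updated as described in the linked comment.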
Followed the steps in the last comment, and the issue is not reproduced.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-09-08-232045   True        False         152m    Cluster version is 4.2.0-0.nightly-2019-09-08-232045
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922