Red Hat Bugzilla – Attachment 1541146 Details for Bug 1685704
Need a separate internal trust chain and apiserver name for internal clients on the host network, namely kubelet
Description: Listings
Filename: file_1685704.txt
MIME Type: text/plain
Creator: Justin Pierce
Created: 2019-03-05 20:47:14 UTC
Size: 18.35 KB
>
>[ec2-user@int-3.bastion us-east-2 ~]$ oc get clusteroperators
>NAME VERSION AVAILABLE PROGRESSING FAILING SINCE
>cluster-autoscaler True False False 25h
>console True False False 25h
>dns 4.0.0-0.alpha-2019-03-04-160136 True False False 26h
>image-registry 4.0.0-87-gbf6c0c9-dirty True False False 25h
>ingress v0.0.1 True False False 25h
>kube-apiserver 0.0.0_version_cluster-kube-apiserver-operator True True True 25h
>kube-controller-manager 0.0.0_version_cluster-kube-controller-manager-operator True False False 25h
>kube-scheduler 0.0.0_version_cluster-kube-scheduler-operator True False False 25h
>machine-api 4.0.0-0.alpha-2019-03-04-160136 True False False 26h
>machine-config 4.0.0-alpha.0-5-gf87d8b8d-dirty True False False 155m
>marketplace-operator 0.0.1 True False False 26h
>monitoring 4.0.0-0.alpha-2019-03-04-160136 True False False 4h13m
>network True False False 12h
>node-tuning 4.0.0-0.alpha-2019-03-04-160136 True False False 25h
>openshift-apiserver 0.0.0_version_cluster-openshift-apiserver-operator True False False 6h16m
>openshift-authentication True False False 25h
>openshift-cloud-credential-operator True False False 26h
>openshift-controller-manager True False False 25h
>openshift-samples 4.0.0-alpha1-709a49010 True False False 26h
>operator-lifecycle-manager 0.8.1-638425c True False False 26h
>service-ca True False False 25h
>storage 4.0.0-0.alpha-2019-03-04-160136 True False False 26h
>
>
>
>[ec2-user@int-3.bastion us-east-2 ~]$ oc describe clusteroperator kube-apiserver
>Name: kube-apiserver
>Namespace:
>Labels: <none>
>Annotations: <none>
>API Version: config.openshift.io/v1
>Kind: ClusterOperator
>Metadata:
> Creation Timestamp: 2019-03-04T18:53:41Z
> Generation: 1
> Resource Version: 1083674
> Self Link: /apis/config.openshift.io/v1/clusteroperators/kube-apiserver
> UID: d44a3772-3eae-11e9-a9e6-0a51deeb77aa
>Spec:
>Status:
> Conditions:
> Last Transition Time: 2019-03-05T20:10:46Z
> Message: NodeInstallerFailing: 0 nodes are failing on revision 60:
>NodeInstallerFailing:
>StaticPodsFailing: nodes/ip-10-0-163-238.us-east-2.compute.internal pods/kube-apiserver-ip-10-0-163-238.us-east-2.compute.internal container="kube-apiserver-61" is not ready
>StaticPodsFailing: nodes/ip-10-0-163-238.us-east-2.compute.internal pods/kube-apiserver-ip-10-0-163-238.us-east-2.compute.internal container="kube-apiserver-61" is waiting: "CrashLoopBackOff" - "Back-off 5m0s restarting failed container=kube-apiserver-61 pod=kube-apiserver-ip-10-0-163-238.us-east-2.compute.internal_openshift-kube-apiserver(73808af914fc3da546d2f139ee664857)"
> Reason: MultipleConditionsMatching
> Status: True
> Type: Failing
> Last Transition Time: 2019-03-05T20:04:37Z
> Message: Progressing: 3 nodes are at revision 57
> Reason: Progressing
> Status: True
> Type: Progressing
> Last Transition Time: 2019-03-04T18:53:41Z
> Message: Available: 3 nodes are active; 3 nodes are at revision 57
> Reason: AsExpected
> Status: True
> Type: Available
> Last Transition Time: 2019-03-04T18:53:41Z
> Message: UnsupportedConfigOverridesUpgradeable: setting: [admission.enabledPlugins.0 admission.enabledPlugins.1 admission.pluginConfig.autoscaling.openshift.io/ClusterResourceOverride.configuration.apiVersion admission.pluginConfig.autoscaling.openshift.io/ClusterResourceOverride.configuration.cpuRequestToLimitPercent admission.pluginConfig.autoscaling.openshift.io/ClusterResourceOverride.configuration.kind admission.pluginConfig.autoscaling.openshift.io/ClusterResourceOverride.configuration.limitCPUToMemoryPercent admission.pluginConfig.autoscaling.openshift.io/ClusterResourceOverride.configuration.memoryRequestToLimitPercent admission.pluginConfig.autoscaling.openshift.io/RunOnceDuration.configuration.activeDeadlineSecondsLimit admission.pluginConfig.autoscaling.openshift.io/RunOnceDuration.configuration.apiVersion admission.pluginConfig.autoscaling.openshift.io/RunOnceDuration.configuration.kind apiVersion kind]
> Reason: UnsupportedConfigOverridesUpgradeable
> Status: False
> Type: Upgradeable
> Extension: <nil>
> Related Objects:
> Group: operator.openshift.io
> Name: cluster
> Resource: kubeapiservers
> Group:
> Name: openshift-config
> Resource: namespaces
> Group:
> Name: openshift-config-managed
> Resource: namespaces
> Group:
> Name: openshift-kube-apiserver-operator
> Resource: namespaces
> Group:
> Name: openshift-kube-apiserver
> Resource: namespaces
> Versions:
> Name: kube-apiserver
> Version: 0.0.0_version_hypershift
> Name: operator
> Version: 0.0.0_version_cluster-kube-apiserver-operator
>Events: <none>
>
>
>[ec2-user@int-3.bastion us-east-2 ~]$ oc project openshift-kube-apiserver
>Now using project "openshift-kube-apiserver" on server "https://api.int-3.online-starter.openshift.com:6443".
>[ec2-user@int-3.bastion us-east-2 ~]$ oc get pods
>NAME READY STATUS RESTARTS AGE
>installer-49-ip-10-0-137-75.us-east-2.compute.internal 0/1 Completed 0 4h21m
>installer-49-ip-10-0-147-12.us-east-2.compute.internal 0/1 Completed 0 4h22m
>installer-49-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 4h23m
>installer-51-ip-10-0-137-75.us-east-2.compute.internal 0/1 Completed 0 4h15m
>installer-51-ip-10-0-147-12.us-east-2.compute.internal 0/1 Completed 0 4h15m
>installer-51-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 4h16m
>installer-53-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 159m
>installer-54-ip-10-0-137-75.us-east-2.compute.internal 0/1 Completed 0 156m
>installer-54-ip-10-0-147-12.us-east-2.compute.internal 0/1 Completed 0 157m
>installer-54-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 158m
>installer-55-ip-10-0-137-75.us-east-2.compute.internal 0/1 Completed 0 141m
>installer-55-ip-10-0-147-12.us-east-2.compute.internal 0/1 Completed 0 142m
>installer-55-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 143m
>installer-56-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 137m
>installer-57-ip-10-0-137-75.us-east-2.compute.internal 0/1 Completed 0 134m
>installer-57-ip-10-0-147-12.us-east-2.compute.internal 0/1 Completed 0 135m
>installer-57-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 136m
>installer-58-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 34m
>installer-59-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 23m
>installer-60-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 17m
>installer-61-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 16m
>kube-apiserver-ip-10-0-137-75.us-east-2.compute.internal 1/1 Running 2 134m
>kube-apiserver-ip-10-0-147-12.us-east-2.compute.internal 1/1 Running 2 135m
>kube-apiserver-ip-10-0-163-238.us-east-2.compute.internal 0/1 CrashLoopBackOff 8 16m
>revision-pruner-49-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 4h23m
>revision-pruner-51-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 4h16m
>revision-pruner-53-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 159m
>revision-pruner-54-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 158m
>revision-pruner-55-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 143m
>revision-pruner-56-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 137m
>revision-pruner-57-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 136m
>revision-pruner-58-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 34m
>revision-pruner-59-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 23m
>revision-pruner-60-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 17m
>revision-pruner-61-ip-10-0-163-238.us-east-2.compute.internal 0/1 Completed 0 16m
>
>
>[ec2-user@int-3.bastion us-east-2 ~]$ oc logs kube-apiserver-ip-10-0-163-238.us-east-2.compute.internal
>+ mkdir -p /var/log/kube-apiserver
>+ chmod 0700 /var/log/kube-apiserver
>+ exec hypershift openshift-kube-apiserver --config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml
>Flag --insecure-port has been deprecated, This flag will be removed in a future version.
>I0305 20:38:49.818576 1 server.go:61] `kube-apiserver [--admission-control-config-file=/tmp/kubeapiserver-admission-config.yaml284964261 --allow-privileged=true --anonymous-auth=false --audit-log-format=json --audit-log-maxage=0 --audit-log-maxbackup=10 --audit-log-maxsize=100 --audit-log-path=/var/log/kube-apiserver/audit.log --audit-policy-file=openshift.local.audit/policy.yaml --audit-webhook-config-file= --authorization-mode=RBAC --authorization-mode=Node --bind-address=0.0.0.0 --client-ca-file=/etc/kubernetes/static-pod-resources/configmaps/client-ca/ca-bundle.crt --cors-allowed-origins=//127\.0\.0\.1(:|$) --cors-allowed-origins=//localhost(:|$) --enable-admission-plugins=autoscaling.openshift.io/ClusterResourceOverride --enable-admission-plugins=autoscaling.openshift.io/RunOnceDuration --enable-aggregator-routing=true --enable-logs-handler=false --enable-swagger-ui=true --endpoint-reconciler-type=lease --etcd-cafile=/etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt --etcd-certfile=/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt --etcd-keyfile=/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key --etcd-prefix=openshift.io --etcd-servers=https://etcd-0.int-3.online-starter.openshift.com:2379 --etcd-servers=https://etcd-1.int-3.online-starter.openshift.com:2379 --etcd-servers=https://etcd-2.int-3.online-starter.openshift.com:2379 --event-ttl=3h --feature-gates=PersistentLocalVolumes=false --insecure-port=0 --kubelet-certificate-authority=/etc/kubernetes/static-pod-resources/configmaps/kubelet-serving-ca/ca-bundle.crt --kubelet-client-certificate=/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.crt --kubelet-client-key=/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.key --kubelet-https=true --kubelet-preferred-address-types=Hostname --kubelet-preferred-address-types=InternalIP --kubelet-preferred-address-types=ExternalIP --kubelet-read-only-port=0 --kubernetes-service-node-port=0 --max-mutating-requests-inflight=600 --max-requests-inflight=1200 --min-request-timeout=3600 --minimal-shutdown-duration=3s --proxy-client-cert-file=/etc/kubernetes/static-pod-resources/secrets/aggregator-client/tls.crt --proxy-client-key-file=/etc/kubernetes/static-pod-resources/secrets/aggregator-client/tls.key --requestheader-allowed-names=kube-apiserver-proxy --requestheader-allowed-names=system:kube-apiserver-proxy --requestheader-allowed-names=system:openshift-aggregator --requestheader-client-ca-file=/etc/kubernetes/static-pod-resources/configmaps/aggregator-client-ca/ca-bundle.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-cluster-ip-range=172.30.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --storage-media-type=application/vnd.kubernetes.protobuf --tls-cert-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA --tls-cipher-suites=TLS_RSA_WITH_AES_128_GCM_SHA256 --tls-cipher-suites=TLS_RSA_WITH_AES_256_GCM_SHA384 --tls-cipher-suites=TLS_RSA_WITH_AES_128_CBC_SHA --tls-cipher-suites=TLS_RSA_WITH_AES_256_CBC_SHA --tls-min-version=VersionTLS12 --tls-private-key-file=/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key --tls-sni-cert-key=/etc/kubernetes/static-pod-resources/secrets/user-serving-cert-000/tls.crt,/etc/kubernetes/static-pod-resources/secrets/user-serving-cert-000/tls.key:api.int-3.online-starter.openshift.com]`
>I0305 20:38:49.819295 1 server.go:692] external host was not specified, using 10.0.163.238
>I0305 20:38:49.819702 1 server.go:152] Version: v1.12.4+761b685
>F0305 20:38:49.820257 1 cmd.go:71] failed to load SNI cert and key: open /etc/kubernetes/static-pod-resources/secrets/user-serving-cert-000/tls.crt: no such file or directory
>
>
>[ec2-user@int-3.bastion us-east-2 ~]$ oc get secrets -n openshift-config
>NAME TYPE DATA AGE
>api-certs kubernetes.io/tls 2 39m
>builder-dockercfg-8qk88 kubernetes.io/dockercfg 1 26h
>....
>
>
>[ec2-user@int-3.bastion us-east-2 ~]$ oc get secrets -n openshift-kube-apiserver-operator
>NAME TYPE DATA AGE
>aggregator-client-signer SecretTypeTLS 2 26h
>builder-dockercfg-sf6rf kubernetes.io/dockercfg 1 26h
>builder-token-q9zl8 kubernetes.io/service-account-token 3 26h
>builder-token-x69xw kubernetes.io/service-account-token 3 26h
>default-dockercfg-qcxlw kubernetes.io/dockercfg 1 26h
>default-token-jkhp5 kubernetes.io/service-account-token 3 26h
>default-token-mq8x4 kubernetes.io/service-account-token 3 26h
>deployer-dockercfg-rmhqf kubernetes.io/dockercfg 1 26h
>deployer-token-hk56j kubernetes.io/service-account-token 3 26h
>deployer-token-wqjcv kubernetes.io/service-account-token 3 26h
>kube-apiserver-operator-dockercfg-lptpx kubernetes.io/dockercfg 1 26h
>kube-apiserver-operator-serving-cert kubernetes.io/tls 2 26h
>kube-apiserver-operator-token-g6gbw kubernetes.io/service-account-token 3 26h
>kube-apiserver-operator-token-tgcnd kubernetes.io/service-account-token 3 26h
>kube-control-plane-signer SecretTypeTLS 2 26h
>loadbalancer-serving-signer SecretTypeTLS 2 26h
>localhost-serving-signer SecretTypeTLS 2 26h
>service-network-serving-signer SecretTypeTLS 2 26h
>user-serving-cert-000 kubernetes.io/tls 2 39m
>
>
>
>[ec2-user@int-3.bastion us-east-2 ~]$ oc describe secrets -n openshift-config api-certs
>Name: api-certs
>Namespace: openshift-config
>
>Type: kubernetes.io/tls
>
>Data
>====
>tls.key: 1675 bytes
>tls.crt: 3681 bytes
>[ec2-user@int-3.bastion us-east-2 ~]$ oc describe secrets -n openshift-kube-apiserver-operator user-serving-cert-000
>Name: user-serving-cert-000
>Namespace: openshift-kube-apiserver-operator
>
>Type: kubernetes.io/tls
>
>Data
>====
>tls.key: 1675 bytes
>tls.crt: 3681 bytes
>
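
The fatal line in the oc logs output, read together with the secret listings that follow it, suggests the certificate itself is intact but was never laid down on the failing master: user-serving-cert-000 exists in openshift-kube-apiserver-operator with the same tls.crt/tls.key sizes as api-certs in openshift-config, yet the path named by --tls-sni-cert-key is missing on ip-10-0-163-238. A minimal sketch of how one might confirm both halves of that, assuming standard oc and openssl tooling; the commands below are illustrative and are not part of the attachment:

# Decode the user serving cert and check which host names its SAN list covers
# (expected to include api.int-3.online-starter.openshift.com)
$ oc -n openshift-kube-apiserver-operator get secret user-serving-cert-000 \
    -o jsonpath='{.data.tls\.crt}' | base64 -d \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

# Check whether the installer ever copied the secret into the static-pod
# resources on the crash-looping master (oc debug node/... is assumed to be
# available on this build; ssh to the node and run the ls directly if not)
$ oc debug node/ip-10-0-163-238.us-east-2.compute.internal -- chroot /host \
    ls -l /etc/kubernetes/static-pod-resources/secrets/user-serving-cert-000/

If the secret decodes cleanly but the directory on the node is absent, the failure is in revision/installer propagation to that node rather than in the certificate content.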