Red Hat Bugzilla – Attachment 1756196 Details for Bug 1927263
kubelet service takes around 43 secs to start container when started from stopped state
kubelet journal logs for 4.7.0-rc.0
kubelet_journal.log (text/plain), 280.50 KB, created by Praveen Kumar on 2021-02-10 12:16:20 UTC
Description: kubelet journal logs for 4.7.0-rc.0
Filename: kubelet_journal.log
MIME Type: text/plain
Creator: Praveen Kumar
Created: 2021-02-10 12:16:20 UTC
Size: 280.50 KB
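The ~43 s delay named in the bug summary can be estimated by subtracting journal timestamps for the two relevant entries. A minimal sketch, assuming GNU `date`; the two timestamps below are illustrative placeholders chosen to show the arithmetic, not values taken verbatim from the attachment:

```shell
# Hypothetical timestamps: kubelet start vs. first container start.
# These are placeholders for illustration, not lines from the log.
start="2021-02-10 10:57:39 UTC"
container_up="2021-02-10 10:58:22 UTC"

# GNU date converts each timestamp to epoch seconds; the difference
# is the startup delay in seconds.
delta=$(( $(date -ud "$container_up" +%s) - $(date -ud "$start" +%s) ))
echo "startup delay: ${delta}s"   # prints "startup delay: 43s"
```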
-- Logs begin at Tue 2021-02-09 16:55:49 UTC, end at Wed 2021-02-10 10:59:42 UTC. --
Feb 10 10:57:39 crc-q4g5s-master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609181 2738 flags.go:59] FLAG: --add-dir-header="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609319 2738 flags.go:59] FLAG: --address="0.0.0.0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609324 2738 flags.go:59] FLAG: --allowed-unsafe-sysctls="[]"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609328 2738 flags.go:59] FLAG: --alsologtostderr="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609332 2738 flags.go:59] FLAG: --anonymous-auth="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609335 2738 flags.go:59] FLAG: --application-metrics-count-limit="100"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609338 2738 flags.go:59] FLAG: --authentication-token-webhook="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609341 2738 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609345 2738 flags.go:59] FLAG: --authorization-mode="AlwaysAllow"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609348 2738 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609351 2738 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609354 2738 flags.go:59] FLAG: --azure-container-registry-config=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609356 2738 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609359 2738 flags.go:59] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609362 2738 flags.go:59] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609365 2738 flags.go:59] FLAG: --cgroup-driver="cgroupfs"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609368 2738 flags.go:59] FLAG: --cgroup-root=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609373 2738 flags.go:59] FLAG: --cgroups-per-qos="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609375 2738 flags.go:59] FLAG: --chaos-chance="0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609379 2738 flags.go:59] FLAG: --client-ca-file=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609382 2738 flags.go:59] FLAG: --cloud-config=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609384 2738 flags.go:59] FLAG: --cloud-provider=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609387 2738 flags.go:59] FLAG: --cluster-dns="[]"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609391 2738 flags.go:59] FLAG: --cluster-domain=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609393 2738 flags.go:59] FLAG: --cni-bin-dir="/opt/cni/bin"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609396 2738 flags.go:59] FLAG: --cni-cache-dir="/var/lib/cni/cache"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609399 2738 flags.go:59] FLAG: --cni-conf-dir="/etc/cni/net.d"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609402 2738 flags.go:59] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609404 2738 flags.go:59] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609408 2738 flags.go:59] FLAG: --container-log-max-files="5"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609411 2738 flags.go:59] FLAG: --container-log-max-size="10Mi"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609414 2738 flags.go:59] FLAG: --container-runtime="remote"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609416 2738 flags.go:59] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609436 2738 flags.go:59] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609440 2738 flags.go:59] FLAG: --containerd-namespace="k8s.io"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609443 2738 flags.go:59] FLAG: --contention-profiling="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609446 2738 flags.go:59] FLAG: --cpu-cfs-quota="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609448 2738 flags.go:59] FLAG: --cpu-cfs-quota-period="100ms"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609451 2738 flags.go:59] FLAG: --cpu-manager-policy="none"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609454 2738 flags.go:59] FLAG: --cpu-manager-reconcile-period="10s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609457 2738 flags.go:59] FLAG: --docker="unix:///var/run/docker.sock"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609460 2738 flags.go:59] FLAG: --docker-endpoint="unix:///var/run/docker.sock"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609463 2738 flags.go:59] FLAG: --docker-env-metadata-whitelist=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609465 2738 flags.go:59] FLAG: --docker-only="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609468 2738 flags.go:59] FLAG: --docker-root="/var/lib/docker"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609471 2738 flags.go:59] FLAG: --docker-tls="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609474 2738 flags.go:59] FLAG: --docker-tls-ca="ca.pem"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609476 2738 flags.go:59] FLAG: --docker-tls-cert="cert.pem"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609479 2738 flags.go:59] FLAG: --docker-tls-key="key.pem"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609483 2738 flags.go:59] FLAG: --dynamic-config-dir=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609487 2738 flags.go:59] FLAG: --enable-cadvisor-json-endpoints="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609489 2738 flags.go:59] FLAG: --enable-controller-attach-detach="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609492 2738 flags.go:59] FLAG: --enable-debugging-handlers="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609494 2738 flags.go:59] FLAG: --enable-load-reader="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609497 2738 flags.go:59] FLAG: --enable-server="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609500 2738 flags.go:59] FLAG: --enforce-node-allocatable="[pods]"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609503 2738 flags.go:59] FLAG: --event-burst="10"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609506 2738 flags.go:59] FLAG: --event-qps="5"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609509 2738 flags.go:59] FLAG: --event-storage-age-limit="default=0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609512 2738 flags.go:59] FLAG: --event-storage-event-limit="default=0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609514 2738 flags.go:59] FLAG: --eviction-hard="imagefs.available<15%,memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609523 2738 flags.go:59] FLAG: --eviction-max-pod-grace-period="0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609526 2738 flags.go:59] FLAG: --eviction-minimum-reclaim=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609530 2738 flags.go:59] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609533 2738 flags.go:59] FLAG: --eviction-soft=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609537 2738 flags.go:59] FLAG: --eviction-soft-grace-period=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609539 2738 flags.go:59] FLAG: --exit-on-lock-contention="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609542 2738 flags.go:59] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609545 2738 flags.go:59] FLAG: --experimental-bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609554 2738 flags.go:59] FLAG: --experimental-check-node-capabilities-before-mount="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609557 2738 flags.go:59] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609560 2738 flags.go:59] FLAG: --experimental-kernel-memcg-notification="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609563 2738 flags.go:59] FLAG: --experimental-logging-sanitization="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609565 2738 flags.go:59] FLAG: --experimental-mounter-path=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609568 2738 flags.go:59] FLAG: --fail-swap-on="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609570 2738 flags.go:59] FLAG: --feature-gates=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609574 2738 flags.go:59] FLAG: --file-check-frequency="20s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609578 2738 flags.go:59] FLAG: --global-housekeeping-interval="1m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609581 2738 flags.go:59] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609584 2738 flags.go:59] FLAG: --healthz-bind-address="127.0.0.1"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609587 2738 flags.go:59] FLAG: --healthz-port="10248"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609591 2738 flags.go:59] FLAG: --help="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609594 2738 flags.go:59] FLAG: --hostname-override=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609596 2738 flags.go:59] FLAG: --housekeeping-interval="10s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609599 2738 flags.go:59] FLAG: --http-check-frequency="20s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609602 2738 flags.go:59] FLAG: --image-credential-provider-bin-dir=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609604 2738 flags.go:59] FLAG: --image-credential-provider-config=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609607 2738 flags.go:59] FLAG: --image-gc-high-threshold="85"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609609 2738 flags.go:59] FLAG: --image-gc-low-threshold="80"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609612 2738 flags.go:59] FLAG: --image-pull-progress-deadline="1m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609615 2738 flags.go:59] FLAG: --image-service-endpoint=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609617 2738 flags.go:59] FLAG: --iptables-drop-bit="15"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609620 2738 flags.go:59] FLAG: --iptables-masquerade-bit="14"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609622 2738 flags.go:59] FLAG: --keep-terminated-pod-volumes="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609625 2738 flags.go:59] FLAG: --kernel-memcg-notification="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609627 2738 flags.go:59] FLAG: --kube-api-burst="10"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609630 2738 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609635 2738 flags.go:59] FLAG: --kube-api-qps="5"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609638 2738 flags.go:59] FLAG: --kube-reserved=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609640 2738 flags.go:59] FLAG: --kube-reserved-cgroup=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609643 2738 flags.go:59] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609646 2738 flags.go:59] FLAG: --kubelet-cgroups=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609648 2738 flags.go:59] FLAG: --lock-file=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609650 2738 flags.go:59] FLAG: --log-backtrace-at=":0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609654 2738 flags.go:59] FLAG: --log-cadvisor-usage="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609656 2738 flags.go:59] FLAG: --log-dir=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609659 2738 flags.go:59] FLAG: --log-file=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609661 2738 flags.go:59] FLAG: --log-file-max-size="1800"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609664 2738 flags.go:59] FLAG: --log-flush-frequency="5s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609669 2738 flags.go:59] FLAG: --logging-format="text"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609671 2738 flags.go:59] FLAG: --logtostderr="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609674 2738 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609677 2738 flags.go:59] FLAG: --make-iptables-util-chains="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609681 2738 flags.go:59] FLAG: --manifest-url=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609684 2738 flags.go:59] FLAG: --manifest-url-header=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609687 2738 flags.go:59] FLAG: --master-service-namespace="default"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609691 2738 flags.go:59] FLAG: --max-open-files="1000000"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609695 2738 flags.go:59] FLAG: --max-pods="110"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609698 2738 flags.go:59] FLAG: --maximum-dead-containers="-1"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609700 2738 flags.go:59] FLAG: --maximum-dead-containers-per-container="1"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609703 2738 flags.go:59] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609706 2738 flags.go:59] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609708 2738 flags.go:59] FLAG: --network-plugin=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609711 2738 flags.go:59] FLAG: --network-plugin-mtu="0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609713 2738 flags.go:59] FLAG: --node-ip="192.168.126.11"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609716 2738 flags.go:59] FLAG: --node-labels="node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609722 2738 flags.go:59] FLAG: --node-status-max-images="50"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609724 2738 flags.go:59] FLAG: --node-status-update-frequency="10s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609727 2738 flags.go:59] FLAG: --non-masquerade-cidr="10.0.0.0/8"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609731 2738 flags.go:59] FLAG: --one-output="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609734 2738 flags.go:59] FLAG: --oom-score-adj="-999"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609737 2738 flags.go:59] FLAG: --pod-cidr=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609739 2738 flags.go:59] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc07eb71007797f915b48376b127c2e01ee40fe723616b3fbae1cdc4f90f241f"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609744 2738 flags.go:59] FLAG: --pod-manifest-path=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609747 2738 flags.go:59] FLAG: --pod-max-pids="-1"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609749 2738 flags.go:59] FLAG: --pods-per-core="0"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609752 2738 flags.go:59] FLAG: --port="10250"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609755 2738 flags.go:59] FLAG: --protect-kernel-defaults="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609758 2738 flags.go:59] FLAG: --provider-id=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609760 2738 flags.go:59] FLAG: --qos-reserved=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609763 2738 flags.go:59] FLAG: --read-only-port="10255"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609765 2738 flags.go:59] FLAG: --really-crash-for-testing="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609768 2738 flags.go:59] FLAG: --redirect-container-streaming="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609771 2738 flags.go:59] FLAG: --register-node="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609773 2738 flags.go:59] FLAG: --register-schedulable="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609777 2738 flags.go:59] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609788 2738 flags.go:59] FLAG: --registry-burst="10"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609791 2738 flags.go:59] FLAG: --registry-qps="5"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609793 2738 flags.go:59] FLAG: --reserved-cpus=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609796 2738 flags.go:59] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609798 2738 flags.go:59] FLAG: --root-dir="/var/lib/kubelet"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609801 2738 flags.go:59] FLAG: --rotate-certificates="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609804 2738 flags.go:59] FLAG: --rotate-server-certificates="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609806 2738 flags.go:59] FLAG: --runonce="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609809 2738 flags.go:59] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609813 2738 flags.go:59] FLAG: --runtime-request-timeout="2m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609815 2738 flags.go:59] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609818 2738 flags.go:59] FLAG: --serialize-image-pulls="true"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609821 2738 flags.go:59] FLAG: --skip-headers="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609823 2738 flags.go:59] FLAG: --skip-log-headers="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609826 2738 flags.go:59] FLAG: --stderrthreshold="2"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609830 2738 flags.go:59] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609833 2738 flags.go:59] FLAG: --storage-driver-db="cadvisor"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609836 2738 flags.go:59] FLAG: --storage-driver-host="localhost:8086"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609839 2738 flags.go:59] FLAG: --storage-driver-password="root"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609841 2738 flags.go:59] FLAG: --storage-driver-secure="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609844 2738 flags.go:59] FLAG: --storage-driver-table="stats"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609847 2738 flags.go:59] FLAG: --storage-driver-user="root"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609849 2738 flags.go:59] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609852 2738 flags.go:59] FLAG: --sync-frequency="1m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609855 2738 flags.go:59] FLAG: --system-cgroups=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609857 2738 flags.go:59] FLAG: --system-reserved=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609860 2738 flags.go:59] FLAG: --system-reserved-cgroup=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609862 2738 flags.go:59] FLAG: --tls-cert-file=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609865 2738 flags.go:59] FLAG: --tls-cipher-suites="[]"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609869 2738 flags.go:59] FLAG: --tls-min-version=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609872 2738 flags.go:59] FLAG: --tls-private-key-file=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609875 2738 flags.go:59] FLAG: --topology-manager-policy="none"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609879 2738 flags.go:59] FLAG: --topology-manager-scope="container"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609881 2738 flags.go:59] FLAG: --v="2"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.609884 2738 flags.go:59] FLAG: --version="false"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.610063 2738 flags.go:59] FLAG: --vmodule=""
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.610067 2738 flags.go:59] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.610070 2738 flags.go:59] FLAG: --volume-stats-agg-period="1m0s"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.610133 2738 feature_gate.go:244] feature gates: &{map[]}
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.610164 2738 server.go:191] Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.611910 2738 feature_gate.go:236] Setting GA feature gate SCTPSupport=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.611916 2738 feature_gate.go:236] Setting GA feature gate SupportPodPidsLimit=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.611920 2738 feature_gate.go:244] feature gates: &{map[APIPriorityAndFairness:true LegacyNodeRoleBehavior:false NodeDisruptionExclusion:true RemoveSelfLink:false RotateKubeletServerCertificate:true SCTPSupport:true ServiceNodeExclusion:true SupportPodPidsLimit:true]}
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.611965 2738 feature_gate.go:236] Setting GA feature gate SCTPSupport=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.611971 2738 feature_gate.go:236] Setting GA feature gate SupportPodPidsLimit=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.611976 2738 feature_gate.go:244] feature gates: &{map[APIPriorityAndFairness:true LegacyNodeRoleBehavior:false NodeDisruptionExclusion:true RemoveSelfLink:false RotateKubeletServerCertificate:true SCTPSupport:true ServiceNodeExclusion:true SupportPodPidsLimit:true]}
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.620862 2738 mount_linux.go:202] Detected OS with systemd
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.621460 2738 server.go:416] Version: v1.20.0+ba45583
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.621592 2738 feature_gate.go:236] Setting GA feature gate SCTPSupport=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.621662 2738 feature_gate.go:236] Setting GA feature gate SupportPodPidsLimit=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.621715 2738 feature_gate.go:244] feature gates: &{map[APIPriorityAndFairness:true LegacyNodeRoleBehavior:false NodeDisruptionExclusion:true RemoveSelfLink:false RotateKubeletServerCertificate:true SCTPSupport:true ServiceNodeExclusion:true SupportPodPidsLimit:true]}
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.621839 2738 feature_gate.go:236] Setting GA feature gate SCTPSupport=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:39.621892 2738 feature_gate.go:236] Setting GA feature gate SupportPodPidsLimit=true. It will be removed in a future release.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.621943 2738 feature_gate.go:244] feature gates: &{map[APIPriorityAndFairness:true LegacyNodeRoleBehavior:false NodeDisruptionExclusion:true RemoveSelfLink:false RotateKubeletServerCertificate:true SCTPSupport:true ServiceNodeExclusion:true SupportPodPidsLimit:true]}
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.622082 2738 server.go:837] Client rotation is on, will bootstrap in background
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.635075 2738 bootstrap.go:84] Current kubeconfig file contents are still valid, no bootstrap necessary
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.635190 2738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.635596 2738 server.go:881] Starting client certificate rotation.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.635617 2738 certificate_manager.go:282] Certificate rotation is enabled.
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.635933 2738 certificate_manager.go:556] Certificate expiration is 2021-03-11 16:07:56 +0000 UTC, rotation deadline is 2021-03-06 20:45:19.857786879 +0000 UTC
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.635989 2738 certificate_manager.go:288] Waiting 585h47m40.221801541s for next certificate rotation
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.640613 2738 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.640878 2738 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/kubelet-ca.crt
Feb 10 10:57:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:39.641162 2738 manager.go:165] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.654098 2738 fs.go:127] Filesystem UUIDs: map[3565a070-884c-4194-ade3-3f5caa25f0ce:/dev/vda4 47bd1a98-afda-4c7e-b19a-297af5e7208f:/dev/vda3 F811-ED3D:/dev/vda2]
Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.654668 2738 fs.go:128] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/netns:{mountpoint:/run/netns major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:55 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:25 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/2a2510a4891f1dadbf39dd4b53ef532aad5d2ec13c4fedfa28b4db8ef0440f58/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/2a2510a4891f1dadbf39dd4b53ef532aad5d2ec13c4fedfa28b4db8ef0440f58/userdata/shm major:0 minor:47 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/c627e4d6cf41bddf8b5d15ed4f300f2186227f9b18487329d375caf99521a69f/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/c627e4d6cf41bddf8b5d15ed4f300f2186227f9b18487329d375caf99521a69f/userdata/shm major:0 minor:56 fsType:tmpfs blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/c3553935d2c05240eba62a63d0e7d046e4b25f0d150b33de1016e2b82e8a5ab3/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-57:{mountpoint:/var/lib/containers/storage/overlay/8b70686fd1616ffdac7e0227aa6eebea63461c38d8ac4f92aa169c433dbf699b/merged major:0 minor:57 fsType:overlay blockSize:0}]
Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.655695 2738 nvidia.go:61] NVIDIA setup failed: no NVIDIA devices found
Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.659266 2738 manager.go:213] Machine: {Timestamp:2021-02-10 10:57:44.658914352 +0000 UTC m=+5.157891019 NumCores:4 NumPhysicalCores:1 NumSockets:4 CpuFrequency:2111998 MemoryCapacity:9403826176 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:a80b6f3ffd2744839ae55cea47bd3fea SystemUUID:625f8576-bc8e-41cd-a481-b50ae44affb1 BootID:d2c2ee2d-8b4f-438d-95d7-627baa89098e Filesystems:[{Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:25 Capacity:4701913088 Type:vfs Inodes:1147928 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:32737570816 Type:vfs Inodes:15990208 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:381549568 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/netns DeviceMajor:0 DeviceMinor:24 Capacity:4701913088 Type:vfs Inodes:1147928 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/c627e4d6cf41bddf8b5d15ed4f300f2186227f9b18487329d375caf99521a69f/userdata/shm DeviceMajor:0 DeviceMinor:56 Capacity:65536000 Type:vfs Inodes:1147928 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:55 Capacity:940380160 Type:vfs Inodes:1147928 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:4701913088 Type:vfs Inodes:1147928 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:4701913088 Type:vfs Inodes:1147928 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:32737570816 Type:vfs Inodes:15990208 HasInodes:true} {Device:overlay_0-57 DeviceMajor:0 DeviceMinor:57 Capacity:32737570816 Type:vfs Inodes:15990208 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:42 Capacity:4701913088 Type:vfs Inodes:1147928 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/2a2510a4891f1dadbf39dd4b53ef532aad5d2ec13c4fedfa28b4db8ef0440f58/userdata/shm DeviceMajor:0 DeviceMinor:47 Capacity:65536000 Type:vfs Inodes:1147928 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:33285996544 Scheduler:mq-deadline}] NetworkDevices:[{Name:br0 MacAddress:36:e7:9c:28:72:49 Speed:0 Mtu:1400} {Name:cni-podman0 MacAddress:b2:14:03:b3:11:7e Speed:0 Mtu:1500} {Name:enp1s0 MacAddress:52:fd:fc:07:21:82 Speed:-1 Mtu:1500} {Name:eth10 MacAddress:6a:d6:a6:74:21:7b Speed:0 Mtu:1500} {Name:ovs-system MacAddress:aa:5a:8f:dd:cc:1b Speed:0 Mtu:1500} {Name:tun0 MacAddress:6e:52:94:e7:83:2e Speed:0 Mtu:1400} {Name:vxlan_sys_4789 MacAddress:d6:40:ee:f2:7d:22 Speed:-1 Mtu:65000}] Topology:[{Id:0 Memory:9391226880 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 1 2 3] Caches:[] SocketID:3}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 10 10:57:44
crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.659540 2738 manager_no_libpfm.go:28] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.659809 2738 manager.go:229] Version: {KernelVersion:4.18.0-240.10.1.el8_3.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 47.83.202102060438-0 (Ootpa) DockerVersion:Unknown DockerAPIVersion:Unknown CadvisorVersion: CadvisorRevision:} >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.660720 2738 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: [] >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.660757 2738 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName:/system.slice KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} >Feb 10 10:57:44 
crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.661114 2738 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.661147 2738 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.661155 2738 container_manager_linux.go:315] Creating device plugin manager: true >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.661459 2738 manager.go:133] Creating Device Plugin manager at /var/lib/kubelet/device-plugins/kubelet.sock >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:44.661703 2738 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock". >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662084 2738 remote_runtime.go:62] parsed scheme: "" >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662114 2738 remote_runtime.go:62] scheme "" not registered, fallback to default scheme >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662152 2738 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock <nil> 0 <nil>}] <nil> <nil>} >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662167 2738 clientconn.go:948] ClientConn switching balancer to "pick_first" >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:57:44.662209 2738 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock". 
>Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662225 2738 remote_image.go:50] parsed scheme: "" >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662231 2738 remote_image.go:50] scheme "" not registered, fallback to default scheme >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662241 2738 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock <nil> 0 <nil>}] <nil> <nil>} >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662247 2738 clientconn.go:948] ClientConn switching balancer to "pick_first" >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662276 2738 server.go:1117] Using root directory: /var/lib/kubelet >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662832 2738 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d93ac0, {CONNECTING <nil>} >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662872 2738 kubelet.go:265] Adding pod path: /etc/kubernetes/manifests >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.662887 2738 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d93ca0, {CONNECTING <nil>} >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.663151 2738 file.go:68] Watching path "/etc/kubernetes/manifests" >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.663212 2738 kubelet.go:276] Watching apiserver >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.663499 2738 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d93ac0, {READY <nil>} >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.663513 2738 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000d93ca0, {READY <nil>} >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.663782 2738 reflector.go:219] Starting reflector 
*v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.664033 2738 kubelet.go:453] Kubelet client is not nil >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.665646 2738 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134 >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.666103 2738 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134 >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:44.666646 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:44.666988 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:44.667517 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:44.675299 2738 kuberuntime_manager.go:216] Container runtime cri-o initialized, version: 1.20.0-0.rhaos4.7.git78527db.el8.49, apiVersion: v1alpha1 >Feb 10 10:57:45 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:45.512829 2738 reflector.go:138] 
k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:45 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:45.572594 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:46 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:46.197883 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:47 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:47.374739 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:48 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:48.193763 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:48 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:48.216988 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:50.930253 2738 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.933096 2738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.934809 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/vsphere-volume" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.934964 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/aws-ebs" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.935081 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/gce-pd" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.935197 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/cinder" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.935310 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/azure-disk" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.935445 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/azure-file" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.935940 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/empty-dir" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936055 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/git-repo" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936161 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/host-path" >Feb 10 10:57:50 
crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936282 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/nfs" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936380 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/secret" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936504 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/iscsi" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936597 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/glusterfs" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936721 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/rbd" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936820 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/quobyte" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936904 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/cephfs" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.936997 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/downward-api" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.937092 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/fc" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.937188 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/flocker" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.937280 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/configmap" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.937369 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/projected" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.937531 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/portworx-volume" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.937628 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/scaleio" >Feb 10 10:57:50 crc-q4g5s-master-0 
hyperkube[2738]: I0210 10:57:50.937718 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/local-volume" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.937807 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/storageos" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.938145 2738 plugins.go:638] Loaded volume plugin "kubernetes.io/csi" >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.939076 2738 server.go:1176] Started kubelet >Feb 10 10:57:50 crc-q4g5s-master-0 systemd[1]: Started Kubernetes Kubelet. >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:50.941747 2738 kubelet.go:1292] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.943152 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.944357 2738 server.go:148] Starting to listen on 0.0.0.0:10250 >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.944847 2738 certificate_manager.go:282] Certificate rotation is enabled. 
>Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.944880 2738 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.945601 2738 certificate_manager.go:556] Certificate expiration is 2021-03-11 16:09:14 +0000 UTC, rotation deadline is 2021-03-06 16:24:21.597682008 +0000 UTC >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.945636 2738 certificate_manager.go:288] Waiting 581h26m30.652049835s for next certificate rotation >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.946173 2738 server.go:410] Adding debug handlers to kubelet server. >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.948676 2738 volume_manager.go:269] The desired_state_of_world populator starts >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.948794 2738 volume_manager.go:271] Starting Kubelet Volume Manager >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.948938 2738 desired_state_of_world_populator.go:142] Desired state populator starts to run >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.949180 2738 reflector.go:219] Starting reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:134 >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:50.953596 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:50.953483 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping) >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:50.953745 2738 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.954886 2738 factory.go:149] Registering CRI-O factory >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.955028 2738 factory.go:55] Registering systemd factory >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.955561 2738 factory.go:101] Registering Raw factory >Feb 10 10:57:50 
crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.959681 2738 manager.go:1203] Started watching for new ooms in manager >Feb 10 10:57:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:50.960520 2738 manager.go:301] Starting recovery of all containers >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.038290 2738 manager.go:306] Recovery completed >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.048702 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.049241 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.049482 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.049498 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.055151 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.055177 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.055184 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.055218 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.056180 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.058669 2738 kubelet_network_linux.go:56] 
Initialized IPv4 iptables rules. >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.058698 2738 status_manager.go:158] Starting to sync pod status with apiserver >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.058715 2738 kubelet.go:1834] Starting kubelet main sync loop. >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.058766 2738 kubelet.go:1858] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.059028 2738 reflector.go:219] Starting reflector *v1.RuntimeClass (0s) from k8s.io/client-go/informers/factory.go:134 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.059659 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.092222 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.092244 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.157290 2738 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.159203 2738 kubelet.go:1858] skipping pod synchronization - container runtime status check may not have completed yet >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.256702 2738 kubelet_node_status.go:362] Setting node 
annotation to enable volume controller attach/detach >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.258309 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.282200 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.282301 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.282330 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.282477 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.285033 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.359373 2738 kubelet.go:1858] skipping pod synchronization - container runtime status check may not have completed yet >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.560214 2738 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.685610 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.686670 2738 setters.go:86] Using node IP: "192.168.126.11" 
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.695839 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.695866 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.695874 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.695900 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.696494 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.760286 2738 kubelet.go:1858] skipping pod synchronization - container runtime status check may not have completed yet
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:51.820215 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:51.945931 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.049910 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.092696 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:52.284241 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:52.363117 2738 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.496919 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.498027 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.520183 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.520290 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.520319 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.520490 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:52.522476 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:52.560850 2738 kubelet.go:1858] skipping pod synchronization - container runtime status check may not have completed yet
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:52.598984 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:52.945983 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:53.049815 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:53.092661 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:53 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:53.746270 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:53 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:53.762409 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:53.946065 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:53 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:53.966882 2738 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.049890 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.092790 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.122973 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.124084 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.147706 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.147894 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.147954 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.148058 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:54.151256 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:54.161736 2738 kubelet.go:1858] skipping pod synchronization - container runtime status check may not have completed yet
>Feb 10 10:57:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:54.946205 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:55.049843 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:55.092553 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:55 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:55.340268 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:55 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:55.432808 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:55.946129 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:56.049676 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:56.092613 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:56.945744 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.049591 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.092780 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:57.169054 2738 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.351912 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.352989 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:57.362547 2738 kubelet.go:1858] skipping pod synchronization - container runtime status check may not have completed yet
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.376904 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.376993 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.377021 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.377082 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:57.379527 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:57.842941 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping)
>Feb 10 10:57:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:57.945812 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:58.049592 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:58.092699 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:58.946405 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:59.049633 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:59.092671 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:57:59 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:57:59.237064 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:57:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:57:59.946409 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:00.049827 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:00.092581 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:00.946107 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.049742 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.051103 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:01.051174 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.092523 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.092605 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.092663 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.093818 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.120161 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.120337 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.120376 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.123589 2738 cpu_manager.go:192] [cpumanager] starting with none policy
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.123767 2738 cpu_manager.go:193] [cpumanager] reconciling every 10s
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.123830 2738 state_mem.go:36] [cpumanager] initializing new in-memory state store
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.143165 2738 policy_none.go:43] [cpumanager] none policy: Start
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.143593 2738 container_manager_linux.go:435] Updating kernel flag: kernel/panic, expected value: 10, actual value: 0
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.143754 2738 container_manager_linux.go:435] Updating kernel flag: vm/overcommit_memory, expected value: 1, actual value: 0
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.151230 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.151264 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.173605 2738 manager.go:236] Starting Device Plugin manager
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:58:01.173648 2738 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.174358 2738 manager.go:278] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock"
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.174494 2738 plugin_watcher.go:52] Plugin Watcher Start at /var/lib/kubelet/plugins_registry
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.174570 2738 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.174576 2738 plugin_manager.go:114] Starting Kubelet Plugin Manager
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.174645 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.174654 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.175025 2738 container_manager_linux.go:986] Found 143 PIDs in root, 143 of them are not to be moved
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:01.234365 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:01.448739 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:01.944345 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.151578 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.174867 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.363212 2738 kubelet.go:1920] SyncLoop (ADD, "file"): "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e), recycler-pod-crc-q4g5s-master-0_openshift-infra(d63c21a47be0760bb2c10b9bcb04203c), etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24), kube-apiserver-crc-q4g5s-master-0_openshift-kube-apiserver(faf0ab83-ecb9-40ca-b555-321f7fae67b1), kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)"
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.364534 2738 topology_manager.go:187] [topologymanager] Topology Admit Handler
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.364709 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.364735 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.493312 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/8dc1b979-dedb-45b6-8487-d5f8ea206a4e-resource-dir") pod "openshift-kube-scheduler-crc-q4g5s-master-0" (UID: "8dc1b979-dedb-45b6-8487-d5f8ea206a4e")
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.493357 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/8dc1b979-dedb-45b6-8487-d5f8ea206a4e-cert-dir") pod "openshift-kube-scheduler-crc-q4g5s-master-0" (UID: "8dc1b979-dedb-45b6-8487-d5f8ea206a4e")
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.593584 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/8dc1b979-dedb-45b6-8487-d5f8ea206a4e-resource-dir") pod "openshift-kube-scheduler-crc-q4g5s-master-0" (UID: "8dc1b979-dedb-45b6-8487-d5f8ea206a4e")
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.593687 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/8dc1b979-dedb-45b6-8487-d5f8ea206a4e-cert-dir") pod "openshift-kube-scheduler-crc-q4g5s-master-0" (UID: "8dc1b979-dedb-45b6-8487-d5f8ea206a4e")
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.593702 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/8dc1b979-dedb-45b6-8487-d5f8ea206a4e-resource-dir") pod "openshift-kube-scheduler-crc-q4g5s-master-0" (UID: "8dc1b979-dedb-45b6-8487-d5f8ea206a4e")
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.593748 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/8dc1b979-dedb-45b6-8487-d5f8ea206a4e-cert-dir") pod "openshift-kube-scheduler-crc-q4g5s-master-0" (UID: "8dc1b979-dedb-45b6-8487-d5f8ea206a4e")
>Feb 10 10:58:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:02.946001 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.151552 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.175070 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.365040 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:03.571754 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.779951 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.781689 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.791350 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.791392 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.791401 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.791442 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:03.792180 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:03.945804 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:04 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:04.057509 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:04.151667 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:04.175196 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:04.365352 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:04 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:04.450097 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:04.946424 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:05.151738 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:05.175088 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:05.365165 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:05.946995 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:06.151767 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:06.175037 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:06.365405 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:06.946795 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:07.151736 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:07.175142 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:07.364828 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:07 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:07.847025 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping)
>Feb 10 10:58:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:07.944254 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:08.151600 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:08.175054 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:08.365026 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:08.946584 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:09.151594 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:09.175364 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:09.364980 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:09.946171 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.151655 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.175062 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:10.195832 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.365258 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:10.574682 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.792597 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.794619 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.807099 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.807158 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.807179 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.807209 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:10.808038 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:10.946137 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.151712 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.151808 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:11.151836 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.175604 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.175758 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:11.175931 2738 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.252254 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.252390 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.365359 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:11.946086 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.252961 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.365214 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.365343 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.365400 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.366246 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:12.375159 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.389143 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.389323 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.389388 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.390133 2738 topology_manager.go:187] [topologymanager] Topology Admit Handler
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.390365 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.390412 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.391229 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.391353 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.537650 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol" (UniqueName: "kubernetes.io/empty-dir/d63c21a47be0760bb2c10b9bcb04203c-vol") pod "recycler-pod-crc-q4g5s-master-0" (UID: "d63c21a47be0760bb2c10b9bcb04203c")
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.639127 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "vol" (UniqueName: "kubernetes.io/empty-dir/d63c21a47be0760bb2c10b9bcb04203c-vol") pod "recycler-pod-crc-q4g5s-master-0" (UID: "d63c21a47be0760bb2c10b9bcb04203c")
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.643927 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "vol" (UniqueName: "kubernetes.io/empty-dir/d63c21a47be0760bb2c10b9bcb04203c-vol") pod "recycler-pod-crc-q4g5s-master-0" (UID: "d63c21a47be0760bb2c10b9bcb04203c")
>Feb 10 10:58:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:12.946474 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:13.252912 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:13.391047 2738 kubelet.go:449] kubelet nodes not 
sync >Feb 10 10:58:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:13.391728 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:13.946497 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:14.253020 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:14.390895 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:14.391669 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:14.944211 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:15.252648 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:15.390763 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:15.391573 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:15.944141 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:16.252594 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:16.390719 2738 kubelet.go:449] 
kubelet nodes not sync >Feb 10 10:58:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:16.391524 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:16.946609 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.252717 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.390763 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.391535 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:17.574022 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:17.575398 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.808317 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.809408 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.821128 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 
10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.821177 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.821188 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.821211 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:17.822103 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:17.849961 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, 
Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping) >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:17.850097 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:17.943966 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:18.252626 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:18.390662 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:18.391655 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:18.946483 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:19.253016 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:19.390777 2738 kubelet.go:449] kubelet nodes not sync 
>Feb 10 10:58:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:19.391907 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:19.945400 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:20.252879 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:20.391075 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:20.391697 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:20 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:20.626587 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:20.944181 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.176694 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.176818 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.252771 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.252850 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 
crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:21.252877 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.353089 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.353191 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.391040 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.391751 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:21.946110 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.177166 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.353836 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.391320 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.391635 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.391702 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.391712 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.391775 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.391953 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller 
attach/detach >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.392875 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.393122 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.402363 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.402406 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.402441 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:58:22.403285 2738 status_manager.go:550] Failed to get status for pod "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.410003 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.410047 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.410061 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.410138 2738 topology_manager.go:187] [topologymanager] Topology Admit Handler >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 
10:58:22.410173 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.410183 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.410255 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.410282 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.417215 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.417235 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.471055 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "static-pod-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-static-pod-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.471182 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-resource-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.471336 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-cert-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.471590 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "data-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-data-dir") pod "etcd-crc-q4g5s-master-0" (UID: 
"c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.471770 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-bin" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-usr-local-bin") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.572576 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "static-pod-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-static-pod-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.572717 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-resource-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.572778 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-cert-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.572837 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "data-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-data-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.572892 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "usr-local-bin" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-usr-local-bin") pod "etcd-crc-q4g5s-master-0" (UID: 
"c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.573076 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-resource-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.573092 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "data-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-data-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.573160 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "static-pod-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-static-pod-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.573234 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-cert-dir") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.573293 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "usr-local-bin" (UniqueName: "kubernetes.io/host-path/c0668244-4d54-49e9-89a6-b46188a5ff24-usr-local-bin") pod "etcd-crc-q4g5s-master-0" (UID: "c0668244-4d54-49e9-89a6-b46188a5ff24") >Feb 10 10:58:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:22.945811 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 
10:58:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:23.177601 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:23.353771 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:23.410802 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:23.410803 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:23.417651 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:23.945840 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.177657 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.353642 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.410625 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.410825 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.417734 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:24.576617 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.822806 2738 kubelet_node_status.go:362] Setting node annotation to enable volume 
controller attach/detach >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.824207 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.833759 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.833791 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.833800 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.833819 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:24.834619 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:24.946381 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:25.177605 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:25.353337 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:25.410792 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:25.410863 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:25.417345 
2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:25.945621 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:26.177687 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:26.353630 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:26.410739 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:26.410783 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:26.417818 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:26.946239 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:27.177656 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:27.353560 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:27.410734 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:27.417749 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:27 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:27.851724 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping)
>Feb 10 10:58:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:27.946314 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:28.177091 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:28.353632 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:28.410646 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:28.410842 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:28.417758 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:28.944086 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:29.177505 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:29.353720 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:29.410659 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:29.410664 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:29.417562 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:29.946221 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:30.177128 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:30.353630 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:30.410553 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:30.410579 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:30.417655 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:30.946828 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:31.042487 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.177671 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.177803 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:31.177859 2738 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.353468 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.353507 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:31.353519 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.410741 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.410804 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.417529 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.453631 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.453711 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:31.580137 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.835049 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.836203 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.861740 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.861925 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.861994 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.862076 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:31.864729 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:31.944355 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.410701 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.410839 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.410898 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.410718 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.411789 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.411995 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.412110 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.413010 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.417635 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.417714 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.418962 2738 kuberuntime_manager.go:439] No sandbox for pod "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)" can be found. Need to start a new one
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441253 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441324 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441347 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441597 2738 topology_manager.go:187] [topologymanager] Topology Admit Handler
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441660 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441672 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441843 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.441859 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.445600 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.445661 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.445700 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:58:32.447632 2738 status_manager.go:550] Failed to get status for pod "recycler-pod-crc-q4g5s-master-0_openshift-infra(d63c21a47be0760bb2c10b9bcb04203c)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/recycler-pod-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.453853 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.466189 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.466207 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.519321 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-resource-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.519367 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "audit-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-audit-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.519383 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-cert-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.619540 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-resource-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.619573 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "audit-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-audit-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.619588 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-cert-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.619612 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-cert-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.619630 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-resource-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.619641 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "audit-dir" (UniqueName: "kubernetes.io/host-path/faf0ab83-ecb9-40ca-b555-321f7fae67b1-audit-dir") pod "kube-apiserver-crc-q4g5s-master-0" (UID: "faf0ab83-ecb9-40ca-b555-321f7fae67b1")
>Feb 10 10:58:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:32.944489 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:33.442145 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:33.442172 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:33.454342 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:33.466389 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:33.944236 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:34.442062 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:34.442155 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:34.453891 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:34.466386 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:34 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:34.777100 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:34.946278 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:35.442555 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:35.443248 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:35.454231 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:35.466744 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:35.945755 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:36.442133 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:36.442160 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:36.454213 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:36.466714 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:36.946362 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:37.442165 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:37.453997 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:37.466627 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:37 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:37.854365 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping)
>Feb 10 10:58:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:37.946502 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.441816 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.442111 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.454112 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.466828 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:38.583020 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.865292 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.866393 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.877514 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.877547 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.877555 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.877574 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:38.878232 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:38.946632 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:39.442097 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:39.442203 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:39.454068 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:39.466546 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:39.945842 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:40.442078 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:40.442145 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:40.453974 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:40.466658 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:40.945713 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.178662 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.178766 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.442139 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.442188 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.454283 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.454392 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:41.454504 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.466824 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.554716 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.554825 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:41.945982 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.179287 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.442206 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.443030 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.443374 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.442264 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.444156 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.444518 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.444945 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.445911 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.466568 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.466638 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.466837 2738 kuberuntime_manager.go:439] No sandbox for pod "recycler-pod-crc-q4g5s-master-0_openshift-infra(d63c21a47be0760bb2c10b9bcb04203c)" can be found. Need to start a new one
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.469834 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.469957 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.470011 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:58:42.474151 2738 status_manager.go:550] Failed to get status for pod "etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd/pods/etcd-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479059 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479122 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479140 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479324 2738 topology_manager.go:187] [topologymanager] Topology Admit Handler
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479375 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479387 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479623 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.479654 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.496308 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.496564 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.555008 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.564536 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/ded7cb3a-50b2-41fb-b781-3f135a987b22-resource-dir") pod "kube-controller-manager-crc-q4g5s-master-0" (UID: "ded7cb3a-50b2-41fb-b781-3f135a987b22")
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.564608 2738 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/ded7cb3a-50b2-41fb-b781-3f135a987b22-cert-dir") pod "kube-controller-manager-crc-q4g5s-master-0" (UID: "ded7cb3a-50b2-41fb-b781-3f135a987b22")
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.665588 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/ded7cb3a-50b2-41fb-b781-3f135a987b22-cert-dir") pod "kube-controller-manager-crc-q4g5s-master-0" (UID: "ded7cb3a-50b2-41fb-b781-3f135a987b22")
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.665757 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "cert-dir" (UniqueName: "kubernetes.io/host-path/ded7cb3a-50b2-41fb-b781-3f135a987b22-cert-dir") pod "kube-controller-manager-crc-q4g5s-master-0" (UID: "ded7cb3a-50b2-41fb-b781-3f135a987b22")
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.666088 2738 operation_generator.go:672] MountVolume.SetUp succeeded for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/ded7cb3a-50b2-41fb-b781-3f135a987b22-resource-dir") pod "kube-controller-manager-crc-q4g5s-master-0" (UID: "ded7cb3a-50b2-41fb-b781-3f135a987b22")
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.666180 2738 reconciler.go:269] operationExecutor.MountVolume started for volume "resource-dir" (UniqueName: "kubernetes.io/host-path/ded7cb3a-50b2-41fb-b781-3f135a987b22-resource-dir") pod "kube-controller-manager-crc-q4g5s-master-0" (UID: "ded7cb3a-50b2-41fb-b781-3f135a987b22")
>Feb 10 10:58:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:42.946602 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:43 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:43.179214 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:43 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:43.479938 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:43 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:43.480122 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:43 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:43.497192 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:43 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:43.555130 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:43 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:43.946606 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:44.179016 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:44.479811 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:44.479936 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:44.497102 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:44.555032 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:44 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:44.945378 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.179050 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.479682 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.479762 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.497310 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.555071 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:45.586256 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.878777 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.880334 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.894247 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.894287 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.894299 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.894320 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:45.895226 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:45 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:45.946354 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:46 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:46.179219 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:46 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:46.480032 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:46 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:46.480215 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:46 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:46.497267 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:46 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:46.555208 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:46 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:46.945821 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:47 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:47.178905 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:47 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:47.479941 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:47 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:47.480055 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:47 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:47.497224 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:47 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:47.555529 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:47 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:47.857790 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping)
>Feb 10 10:58:47 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:47.944875 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:58:48 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:48.179143 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:48 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:48.479605 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:58:48 
crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:48.479837 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:48 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:48.496907 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:48 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:48.554975 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:48 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:48.944308 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:49 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:49.103099 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:49 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:49.179355 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:49 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:49.479714 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:49 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:49.479798 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:49 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:49.497148 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:49 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:49.555017 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:49 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:49.946373 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.179169 2738 
kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.479946 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.480663 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.496863 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.555215 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.944137 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.948836 2738 kubelet_getters.go:178] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc-q4g5s-master-0" status=Pending >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.948886 2738 kubelet_getters.go:178] "Pod status updated" pod="openshift-infra/recycler-pod-crc-q4g5s-master-0" status=Pending >Feb 10 10:58:50 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:50.948897 2738 kubelet_getters.go:178] "Pod status updated" pod="openshift-etcd/etcd-crc-q4g5s-master-0" status=Pending >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.178978 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.179746 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:51.180097 2738 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: nodes have not yet been read at least once, cannot construct node object >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.479677 2738 
kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.479719 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.496852 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.555575 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.555701 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:51.555733 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.656157 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.656879 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:51 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:51.945749 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.479847 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.479991 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.480043 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.479849 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.480890 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.481008 2738 kubelet_node_status.go:362] 
Setting node annotation to enable volume controller attach/detach >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.481702 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.482318 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.492635 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.492806 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.492925 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.493635 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.493715 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.493735 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:58:52.494661 2738 status_manager.go:550] Failed to get status for pod "kube-apiserver-crc-q4g5s-master-0_openshift-kube-apiserver(faf0ab83-ecb9-40ca-b555-321f7fae67b1)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.496720 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: 
I0210 10:58:52.496851 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.496723 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.497027 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.497210 2738 kuberuntime_manager.go:439] No sandbox for pod "etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24)" can be found. Need to start a new one >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.496669 2738 kubelet.go:1958] SyncLoop (PLEG): "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)", event: &pleg.PodLifecycleEvent{ID:"8dc1b979-dedb-45b6-8487-d5f8ea206a4e", Type:"ContainerDied", Data:"9711770bb5ac346f7c7a9f82430f4dfb963694774bcbddc44f66c3e5b3858b9c"} >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.497707 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.497723 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.497805 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.497817 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.508514 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.508554 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:52.587197 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection 
refused >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.657325 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.895895 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.896202 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.901663 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.901697 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.901704 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.901727 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:52.902534 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:52 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:52.946305 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:53.497748 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:53.497882 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:53 crc-q4g5s-master-0 
hyperkube[2738]: I0210 10:58:53.497961 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:53.508853 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:53.657739 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:53 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:53.946035 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:54.497331 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:54.497955 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:54.498021 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:54.508921 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:54.657793 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:54 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:54.946898 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:55.497638 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:55.497965 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:55.497986 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:55.508679 2738 kubelet.go:449] 
kubelet nodes not sync >Feb 10 10:58:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:55.657836 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:55 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:55.944890 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:56.497414 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:56.498898 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:56.498982 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:56.508947 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:56.657719 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:56 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:56.945926 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:57.497483 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:57.498386 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:57.499760 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:57.508697 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:57.657367 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:57 crc-q4g5s-master-0 
hyperkube[2738]: E0210 10:58:57.860726 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping) >Feb 10 10:58:57 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:57.945909 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:58.497157 2738 kubelet.go:449] kubelet nodes not sync 
>Feb 10 10:58:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:58.498086 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:58.498197 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:58.508791 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:58.657664 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:58 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:58.946256 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.497772 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.497882 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.497921 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.509002 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:59.588226 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.657585 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.902862 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.903403 2738 
setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.921483 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.921543 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.921569 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.921591 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:58:59.922329 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:58:59 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:58:59.944499 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:00 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:00.390609 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:00.497122 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:00.498104 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:00 crc-q4g5s-master-0 
hyperkube[2738]: I0210 10:59:00.498283 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:00.508793 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:00.657369 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:00 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:00.943888 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.179148 2738 container_manager_linux.go:986] Found 143 PIDs in root, 143 of them are not to be moved >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.180919 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.180998 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.497616 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.497911 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.497953 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.509007 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.657816 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.657919 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:01.657948 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 
10:59:01.758308 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.758515 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:01.901950 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:01 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:01.946342 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.181212 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.497602 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.497700 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.497746 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.498102 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.498164 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.498175 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.498290 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.498360 2738 kubelet_node_status.go:362] Setting node annotation to 
enable volume controller attach/detach
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.498201 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.498683 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.499485 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.500828 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.516632 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.516717 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.517067 2738 kuberuntime_manager.go:439] No sandbox for pod "kube-apiserver-crc-q4g5s-master-0_openshift-kube-apiserver(faf0ab83-ecb9-40ca-b555-321f7fae67b1)" can be found. Need to start a new one
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.534828 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.534858 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.534867 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.534909 2738 kubelet.go:1958] SyncLoop (PLEG): "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)", event: &pleg.PodLifecycleEvent{ID:"8dc1b979-dedb-45b6-8487-d5f8ea206a4e", Type:"ContainerStarted", Data:"71bd984f96e8951c117edd7c9f23fc40147a7c937b45c1dc2fd7b290125e19f2"}
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.534935 2738 kubelet.go:1958] SyncLoop (PLEG): "etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24)", event: &pleg.PodLifecycleEvent{ID:"c0668244-4d54-49e9-89a6-b46188a5ff24", Type:"ContainerDied", Data:"d51d0445d531dcc2e4fbbc3f06f939185633dfccd000a6d18632ff535876903c"}
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.534987 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.534995 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.535045 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.535054 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.535060 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.535765 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.535788 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:02.536521 2738 status_manager.go:550] Failed to get status for pod "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.542309 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.542342 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.542349 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.542934 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.542955 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:02.543499 2738 status_manager.go:550] Failed to get status for pod "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.552862 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.552882 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:02.627158 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.758711 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:02 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:02.944293 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:03.181365 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:03.535158 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:03.536838 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:03.543053 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:03.553023 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:03.759169 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:03 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:03.943972 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:04.181609 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:04.535491 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:04.536789 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:04.543222 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:04.553282 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:04.758895 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:04 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:04.947505 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:05.181477 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:05.535628 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:05.535967 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:05.543421 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:05.553278 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:05.759143 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:05 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:05.947835 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.181531 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.535210 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.535941 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.543518 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.553293 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:06.567290 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:06.592137 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.759043 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.922643 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.923402 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.946418 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.947347 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.947510 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.947618 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:06.947721 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:59:06 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:06.950119 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:07.181666 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:07.535393 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:07.536100 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:07.543326 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:07.553260 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:07.758743 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:07.861891 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping)
>Feb 10 10:59:07 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:07.944222 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:08.181232 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:08.535235 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:08.536225 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:08.543588 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:08.553593 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:08.759109 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:08 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:08.946160 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:09.181615 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:09.535651 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:09.536053 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:09.543498 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:09.553386 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:09.759051 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:09 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:09.946215 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:10.181645 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:10.535613 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:10.536084 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:10.543492 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:10.553169 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:10.759403 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:10 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:10.945690 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.181626 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.181761 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:11.181842 2738 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.535597 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.536077 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.543184 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.553353 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.759267 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.759380 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:11.759419 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.859815 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.859928 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:11 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:11.946728 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.535300 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.535392 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.535492 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.536084 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.536138 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.536168 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.536872 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.537699 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.543124 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.543264 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.546516 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.546697 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.546782 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.547064 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.547145 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:12.548035 2738 status_manager.go:550] Failed to get status for pod "etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd/pods/etcd-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549052 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549085 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549098 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549144 2738 kubelet.go:1958] SyncLoop (PLEG): "etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24)", event: &pleg.PodLifecycleEvent{ID:"c0668244-4d54-49e9-89a6-b46188a5ff24", Type:"ContainerStarted", Data:"f7b2000ad7387172401154e6c2496b76dd53ee2efa3749916170f92b65088963"}
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549189 2738 kubelet.go:1958] SyncLoop (PLEG): "kube-apiserver-crc-q4g5s-master-0_openshift-kube-apiserver(faf0ab83-ecb9-40ca-b555-321f7fae67b1)", event: &pleg.PodLifecycleEvent{ID:"faf0ab83-ecb9-40ca-b555-321f7fae67b1", Type:"ContainerDied", Data:"624cda06dc662202ae255c2f11b5c4fe38222d6ca2101e5bb0209563aad940eb"}
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549275 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549292 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549393 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.549414 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.553004 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.553029 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.553159 2738 kuberuntime_manager.go:439] No sandbox for pod "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)" can be found. Need to start a new one
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.860163 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:12 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:12.953448 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.309851 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.310023 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.547321 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.549382 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.549524 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:13.593012 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.860175 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.944331 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.950554 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.950866 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.959176 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.959221 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.959232 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:13.959253 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:59:13 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:13.960145 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:14.310381 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:14.547505 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:14.549391 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:14.549561 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:14.860187 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:14 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:14.944021 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:15.310511 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:15.547820 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:15.549584 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:15.549584 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:15.860468 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:15 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:15.944256 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:16.310662 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:16.547340 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:16.549362 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:16.549616 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:16.860124 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:16 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:16.944906 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:17.310490 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:17.547519 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:17.549480 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:17.549493 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:17.860369 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:17 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:17.863265 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping)
>Feb 10 10:59:17 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:17.944562 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:18.310352 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:18.547492 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:18.549396 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:18.549642 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:18.860043 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:18 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:18.944052 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:19.310609 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:19.547940 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:19.549496 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:19.549614 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:19.860276 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:19 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:19.944201 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:20.116084 2738 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.310388 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.547685 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.549512 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.549712 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:20.596117 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.860642 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.946241 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.960728 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.961787 2738 setters.go:86] Using node IP: "192.168.126.11"
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.992149 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.992936 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.993485 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:20.994069 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0
>Feb 10 10:59:20 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:20.997254 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.182606 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.182773 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.310942 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.547590 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.549387 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.549604 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.860129 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.860346 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:21.860378 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.947065 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.960943 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:21 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:21.961028 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.182945 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.310623 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.547934 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.548624 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.549488 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.549603 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.549650 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.549709 2738 kubelet.go:449] kubelet nodes not sync
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.549744 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
>Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.549668 2738 
kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.551239 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.551891 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.581322 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.581373 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.581387 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.581755 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.581789 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:22.582823 2738 status_manager.go:550] Failed to get status for pod "kube-apiserver-crc-q4g5s-master-0_openshift-kube-apiserver(faf0ab83-ecb9-40ca-b555-321f7fae67b1)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587208 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587250 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587260 2738 
kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587314 2738 kubelet.go:1958] SyncLoop (PLEG): "kube-apiserver-crc-q4g5s-master-0_openshift-kube-apiserver(faf0ab83-ecb9-40ca-b555-321f7fae67b1)", event: &pleg.PodLifecycleEvent{ID:"faf0ab83-ecb9-40ca-b555-321f7fae67b1", Type:"ContainerStarted", Data:"904e2970231bc7b42e1665517d42455db7bcca83a84388b04f7a724d1cabf4f4"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587353 2738 kubelet.go:1958] SyncLoop (PLEG): "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)", event: &pleg.PodLifecycleEvent{ID:"ded7cb3a-50b2-41fb-b781-3f135a987b22", Type:"ContainerStarted", Data:"887c1408fa8675f4261340f830cbb4eb8e56439c0c542f66b7bc5ae127254f65"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587404 2738 kubelet.go:1958] SyncLoop (PLEG): "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)", event: &pleg.PodLifecycleEvent{ID:"ded7cb3a-50b2-41fb-b781-3f135a987b22", Type:"ContainerStarted", Data:"517f6dc0f16fce9a5024bf38d69492e04a6b6649cdbef65e04418b8ed10d890c"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587473 2738 kubelet.go:1958] SyncLoop (PLEG): "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)", event: &pleg.PodLifecycleEvent{ID:"ded7cb3a-50b2-41fb-b781-3f135a987b22", Type:"ContainerStarted", Data:"373879f869ba75ca1a728aa37dde24f4e7eb2aa9e6d9a9df5e35493de25b8641"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587500 2738 kubelet.go:1958] SyncLoop (PLEG): "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)", event: &pleg.PodLifecycleEvent{ID:"8dc1b979-dedb-45b6-8487-d5f8ea206a4e", 
Type:"ContainerStarted", Data:"bbbaf70c7bb6d9665f54c9912c7c696d093c7f439a4a34196957d5a34d1d9605"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587537 2738 kubelet.go:1958] SyncLoop (PLEG): "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)", event: &pleg.PodLifecycleEvent{ID:"8dc1b979-dedb-45b6-8487-d5f8ea206a4e", Type:"ContainerStarted", Data:"fe7ddd0ca86d606218ed7cf4104d9d95ef6c40fe73e5331fe3be6bb05582e6a8"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587560 2738 kubelet.go:1958] SyncLoop (PLEG): "openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)", event: &pleg.PodLifecycleEvent{ID:"8dc1b979-dedb-45b6-8487-d5f8ea206a4e", Type:"ContainerStarted", Data:"19ef2123c6106756194a1468dfa7e43ef848d641f222ed8b5d87923973f9a1ba"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587583 2738 kubelet.go:1958] SyncLoop (PLEG): "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)", event: &pleg.PodLifecycleEvent{ID:"ded7cb3a-50b2-41fb-b781-3f135a987b22", Type:"ContainerStarted", Data:"29cc042d53753e0d039f40c715a2e5f2d7ae3491b01e1e7cd5e8bb2c5c709124"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587603 2738 kubelet.go:1958] SyncLoop (PLEG): "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)", event: &pleg.PodLifecycleEvent{ID:"ded7cb3a-50b2-41fb-b781-3f135a987b22", Type:"ContainerStarted", Data:"4e462c93297d65a102730937ada43c4ea93cf819412b732c444d664bb3668a6b"} >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587702 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.587714 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 
10:59:22.946289 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:22 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:22.961723 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.183253 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.310689 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.310833 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.310879 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.311947 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.335233 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.335350 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.335380 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.336089 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.336144 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:23.338628 2738 status_manager.go:550] Failed to get status for pod 
"openshift-kube-scheduler-crc-q4g5s-master-0_openshift-kube-scheduler(8dc1b979-dedb-45b6-8487-d5f8ea206a4e)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.356852 2738 kubelet.go:1958] SyncLoop (PLEG): "etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24)", event: &pleg.PodLifecycleEvent{ID:"c0668244-4d54-49e9-89a6-b46188a5ff24", Type:"ContainerDied", Data:"b358d14ac1ed7acdf882339a3a80b980b1adf8e8284667f782baf573c4e2918a"} >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.357088 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.357117 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.357772 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.357834 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.582185 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.588040 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.947409 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:23 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:23.961261 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:24.183407 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 
10:59:24.336501 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:24.357223 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:24.357936 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:24.582330 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:24.588093 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:24.944980 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:24 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:24.961355 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.183386 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.336630 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.358059 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.358103 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.582402 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.588133 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.945993 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused 
>Feb 10 10:59:25 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:25.961672 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.183395 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.336668 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.357670 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.358209 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.582552 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.588082 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.943986 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:26 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:26.961482 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.183496 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.336693 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.357730 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.358123 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.582087 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.588045 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 
crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:27.597816 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:27.864169 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping) >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 
10:59:27.944175 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.961319 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.997996 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:27 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:27.998416 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.005163 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.005206 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.005229 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.005255 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:28.005995 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.183360 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.336542 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.357829 2738 kubelet.go:449] kubelet 
nodes not sync >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.358161 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.582067 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.588162 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.946076 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:28 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:28.961612 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.183387 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.336796 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.357899 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.358051 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.582329 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.588098 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.945132 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:29 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:29.961330 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:30 crc-q4g5s-master-0 
hyperkube[2738]: I0210 10:59:30.183098 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:30.336870 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:30.358024 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:30.358301 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:30.582561 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:30.587860 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:30.946718 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:30 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:30.961623 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.183127 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.183235 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:31.183293 2738 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: nodes have not yet been read at least once, cannot construct node object >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.336704 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.358053 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.358077 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 
hyperkube[2738]: I0210 10:59:31.582078 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.588097 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.946292 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.961598 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:31.961680 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:31 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:31.961708 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.062065 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.062162 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.336763 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.357734 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.358189 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.582370 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.582649 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.588392 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.588553 2738 kubelet.go:449] kubelet nodes not sync 
>Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.588603 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.589419 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.591211 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.591716 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.615893 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.616416 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.616984 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.619347 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.620645 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:32.620824 2738 status_manager.go:550] Failed to get status for pod "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:32 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:32.944778 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.062811 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.336683 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.336781 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.357704 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.357791 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.357834 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.358122 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.358219 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.358257 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.358961 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.359315 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.383102 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.383207 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: 
I0210 10:59:33.383240 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.384619 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.384725 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.384750 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.385318 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.385381 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:33.387032 2738 status_manager.go:550] Failed to get status for pod "etcd-crc-q4g5s-master-0_openshift-etcd(c0668244-4d54-49e9-89a6-b46188a5ff24)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd/pods/etcd-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.402317 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.402403 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.592642 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.621331 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:33 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:33.944825 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:34.062846 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:34.385764 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:34.401682 2738 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc-q4g5s-master-0&limit=500&resourceVersion=0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:34.402624 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:34.592645 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:34.600750 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:34.621702 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:34 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:34.945815 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.006155 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:35 crc-q4g5s-master-0 
hyperkube[2738]: I0210 10:59:35.007232 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.031255 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.031372 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.031404 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.031569 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:35.033765 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.062714 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.385927 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.402778 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.592681 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.621590 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:35 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:35.946153 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:36 
crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:36.062776 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:36.385941 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:36.402922 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:36.592788 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:36.621483 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:36 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:36.946071 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:37.062781 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:37.385935 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:37.402899 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:37.592742 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:37.621370 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:37.866821 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping) >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:37.867041 2738 event.go:218] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba38207a02", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting 
kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebb7f82e02, ext:11437988210, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}' (retry limit exceeded!) >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:37.869795 2738 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"crc-q4g5s-master-0.16625dba3f0cef8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"crc-q4g5s-master-0", UID:"crc-q4g5s-master-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node crc-q4g5s-master-0 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"crc-q4g5s-master-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebc349d98d, ext:11554148058, loc:(*time.Location)(0x7321620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0010cebc349d98d, ext:11554148058, loc:(*time.Location)(0x7321620)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://api-int.crc.testing:6443/api/v1/namespaces/default/events": dial tcp 192.168.130.11:6443: connect: connection refused'(may retry after sleeping) >Feb 10 10:59:37 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:37.945625 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:38 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:38.055771 2738 status_manager.go:550] Failed to get status for pod "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:38.062705 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:38.385669 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:38.402821 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:38.592255 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:38.621790 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:38 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:38.945272 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:39.062767 2738 
kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:39.385970 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:39.403025 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:39.592813 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:39.621610 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:39 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:39.947293 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:40.062834 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:40.386072 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:40.403028 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:40.592916 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:40.621590 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:40 crc-q4g5s-master-0 hyperkube[2738]: W0210 10:59:40.827925 2738 status_manager.go:550] Failed to get status for pod "kube-controller-manager-crc-q4g5s-master-0_openshift-kube-controller-manager(ded7cb3a-50b2-41fb-b781-3f135a987b22)": Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:40 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:40.945603 2738 csi_plugin.go:1016] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.062774 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.184020 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.184892 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.385720 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.402783 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.592957 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:41.603561 2738 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc-q4g5s-master-0?timeout=10s": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.621614 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:41 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:41.944100 2738 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc-q4g5s-master-0": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.034472 2738 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.035019 2738 setters.go:86] Using node IP: "192.168.126.11" >Feb 10 
10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.044416 2738 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node crc-q4g5s-master-0 >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.044475 2738 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node crc-q4g5s-master-0 >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.044500 2738 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node crc-q4g5s-master-0 >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.044525 2738 kubelet_node_status.go:71] Attempting to register node crc-q4g5s-master-0 >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:42.045284 2738 kubelet_node_status.go:93] Unable to register node "crc-q4g5s-master-0" with API server: Post "https://api-int.crc.testing:6443/api/v1/nodes": dial tcp 192.168.130.11:6443: connect: connection refused >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.062345 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.062480 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: E0210 10:59:42.062547 2738 kubelet.go:2269] nodes have not yet been read at least once, cannot construct node object >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.162967 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.163051 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.185972 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.385882 2738 kubelet.go:449] kubelet nodes not sync >Feb 10 10:59:42 crc-q4g5s-master-0 hyperkube[2738]: I0210 10:59:42.402633 2738 kubelet.go:449] kubelet nodes not sync >