Bug 1779513

| Field | Value |
---|---
| Summary | kubemacpool blocks new VMs after OpenShift CA rotation |
| Product | Container Native Virtualization (CNV) |
| Component | Networking |
| Version | 2.2.0 |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | urgent |
| Reporter | Guohua Ouyang <gouyang> |
| Assignee | Alona Kaplan <alkaplan> |
| QA Contact | Yan Du <yadu> |
| CC | alkaplan, aos-bugs, atragler, cnv-qe-bugs, danken, gouyang, myakove, ncredi, oshoval, phoracek, rhrazdil, rmohr, ycui |
| Flags | phoracek: needinfo- |
| Target Milestone | --- |
| Target Release | 2.2.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | kubemacpool-container-v2.2.0-6 |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| Last Closed | 2020-01-30 16:27:33 UTC |
Description
Guohua Ouyang
2019-12-04 06:14:07 UTC
This could be a deployment issue; please help move the bug to another component if it is not owned by the console.

Log snippet:

$ oc logs kubemacpool-mac-controller-manager-74b986d899-qd5gf
2019/12/04 05:23:19 http: TLS handshake error from 10.128.0.1:33926: remote error: tls: bad certificate
2019/12/04 05:23:29 http: TLS handshake error from 10.128.0.1:34008: remote error: tls: bad certificate
2019/12/04 05:25:27 http: TLS handshake error from 10.128.0.1:34972: remote error: tls: bad certificate
2019/12/04 05:25:27 http: TLS handshake error from 10.128.0.1:34974: remote error: tls: bad certificate
{"level":"error","ts":1575437127.5803857,"logger":"VirtualMachine Controller","msg":"failed to update the VM with the new finalizer","virtualMachineName":"test1","virtualMachineNamespace":"default","error":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"test1\": the object has been modified; please apply your changes to the latest version and try again"}
{"level":"error","ts":1575437127.5806603,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"virtualmachine-controller","request":"default/test1","error":"Operation cannot be fulfilled on virtualmachines.kubevirt.io \"test1\": the object has been modified; please apply your changes to the latest version and try again"}
2019/12/04 05:25:28 http: TLS handshake error from 10.128.0.1:34984: remote error: tls: bad certificate
2019/12/04 05:25:28 http: TLS handshake error from 10.128.0.1:34986: remote error: tls: bad certificate
2019/12/04 05:27:10 http: TLS handshake error from 10.130.0.1:45270: remote error: tls: bad certificate
{"level":"error","ts":1575437286.9662242,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-bplr7","error":"not found"}
{"level":"error","ts":1575437295.6943157,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-4jvh2","error":"not found"}
{"level":"error","ts":1575437378.6708696,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-dchnt","error":"not found"}
{"level":"error","ts":1575437436.969411,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-fj5qz","error":"not found"}
{"level":"error","ts":1575437501.8466725,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-g7j8q","error":"not found"}
{"level":"error","ts":1575437551.8379123,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-q2tn6","error":"not found"}
{"level":"error","ts":1575437600.6893876,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-v7j8h","error":"not found"}
{"level":"error","ts":1575437619.843133,"logger":"PoolManager","msg":"pod not found in the map","podName":"openshift-marketplace/redhat-local-storage-src-667c7d5dd6-g9xgf","error":"not found"}
{"level":"error","ts":1575437631.8584375,"logger":"PoolManager","msg":"pod not found in the map","podName":"openshift-marketplace/hco-csc-rh-verified-operators-6c65c4579d-kb9p6","error":"not found"}
{"level":"error","ts":1575437633.4804466,"logger":"PoolManager","msg":"pod not found in the map","podName":"openshift-marketplace/rh-verified-operators-bd48ff666-x6qs6","error":"not found"}
{"level":"error","ts":1575437657.7161055,"logger":"PoolManager","msg":"pod not found in the map","podName":"default/virt-launcher-test1-jpjft","error":"not found"}
{"level":"info","ts":1575437697.035956,"logger":"PoolManager","msg":"the configMap is empty","configMapName":"kubemacpool-vm-configmap"}
{"level":"info","ts":1575437698.809346,"logger":"PoolManager","msg":"the configMap is empty","configMapName":"kubemacpool-vm-configmap"}

Every error entry above also carries the same controller-runtime stack trace, omitted here because it is identical for each line: the VirtualMachine errors originate in virtualmachine.(*ReconcilePolicy).addFinalizerAndUpdate (virtualmachine_controller.go:124, called from Reconcile at line 101), and the pod errors in pool-manager.(*PoolManager).ReleasePodMac (pod_pool.go:113, called from pod.(*ReconcilePolicy).Reconcile at pod_controller.go:82).
found","stacktrace":"github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/pkg/pool-manager.(*PoolManager).ReleasePodMac\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/pkg/pool-manager/pod_pool.go:113\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/pkg/controller/pod.(*ReconcilePolicy).Reconcile\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/pkg/controller/pod/pod_controller.go:82\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"info","ts":1575437697.035956,"logger":"PoolManager","msg":"the configMap is empty","configMapName":"kubemacpool-vm-configmap"} {"level":"info","ts":1575437698.809346,"logger":"PoolManager","msg":"the configMap is empty","configMapName":"kubemacpool-vm-configmap"} Since it seems like an issue with kubemacpool-mac-controller-manager, moving to networking for further investigation. @Guohua, was this the only issue with certificates you saw during your deployment? I'd like to make sure this is not an environment issue before I go further into investigation. Also, would you please provide steps to reproduce this? The steps are to create a VM from the wizard, after click "Create the Virtual Machine" button at the last wizard page, if the error in bug https://bugzilla.redhat.com/show_bug.cgi?id=1779504 does not show, then this error shows. I'm not seeing this during deployment, I just see the error on UI and try to find the useful log snippet in c#2. I cannot reproduce the issue in the same environment today, the kubemacpool-mac-controller-manager pod is re-created automatically and it does not show the "remote error: tls: bad certificate". [cnv-qe-jenkins@cnv-executor-gouyang ~]$ oc logs -n openshift-cnv kubemacpool-mac-controller-manager-74b986d899-n2s4r | grep certificate [cnv-qe-jenkins@cnv-executor-gouyang ~]$ oc logs -n openshift-cnv kubemacpool-mac-controller-manager-74b986d899-n2s4r | grep tls [cnv-qe-jenkins@cnv-executor-gouyang ~]$ oc logs -n openshift-cnv kubemacpool-mac-controller-manager-74b986d899-n2s4r | grep bad I feel very weird about this, but the issue does exist once. Meni, please close this if you don't see this issue on your clusters. I don't see this issue on the latest deployment env, feel free to close this. Created attachment 1643497 [details]
error.png
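When the "bad certificate" errors come and go like this, it can help to record which serving certificate the webhook endpoint is presenting during a failing period versus a working one (issuer and validity dates in particular). The sketch below uses only the Go standard library to dump the served chain; the service address is taken from the error messages later in this thread, and this is an ad-hoc diagnostic idea, not a tool that ships with kubemacpool:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Connect to the webhook service and print the certificate chain it serves.
	// InsecureSkipVerify is acceptable here because we only want to *inspect*
	// the chain, not trust it.
	addr := "kubemacpool-service.openshift-cnv.svc:443"
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("TLS dial failed: %v", err)
	}
	defer conn.Close()

	for i, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("cert[%d] subject=%q issuer=%q notBefore=%s notAfter=%s\n",
			i, cert.Subject.CommonName, cert.Issuer.CommonName,
			cert.NotBefore, cert.NotAfter)
	}
}
```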
I spoke too soon; the error suddenly appeared again after half a day.
Could you give us access to the cluster after it reproduces?

The issue appears and then is suddenly gone, and I'm not sure when it will happen again. If it happens again, I will try to ping you ASAP.

Guohua, note that it would be valuable for us to get access to the cluster that experienced the issue even after you no longer see it in the UI. It may simply be that there are certificate problems on this particular cluster.

Guohua, could you try it with the latest OCP? If you reproduce it again, I would revert the kubemacpool failing policy back to 'admissionregistrationv1beta1.Ignore' (changed in https://github.com/k8snetworkplumbingwg/kubemacpool/commit/02a7388b7c98336674f7425aab30686e69536966); that should be enough to fix it. But again, let's first verify that it was not an issue with the environment.

It has not happened on just one cluster; I actually saw the issue on a new cluster. The version is 4.3.0-0.nightly-2019-12-12-021332, does that count as the latest OCP? I saw this when running the commands below in a row; it looks like the issue occurs for a certain period and is gone once the kubemacpool pod is re-created:

$ oc process -f vm-template-fedora.yaml -p NAME=fedora -p CPU_CORES=2 -p MEMORY=2Gi | oc create -f -
virtualmachine.kubevirt.io/fedora created
$ virtctl start fedora
Error starting VirtualMachine an error on the server ("Internal error occurred: failed calling webhook \"mutatevirtualmachines.example.com\": Post https://kubemacpool-service.openshift-cnv.svc:443/mutate-virtualmachines?timeout=30s: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"webhook-cert-ca\"): {\"spec\":{\"running\": true}}") has prevented the request from succeeding
$ virtctl start fedora
VM fedora was scheduled to start

I have just reproduced this bug when I tried to create a VM via the YAML page, using the example VM provided in the UI:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example
  namespace: default
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-registry-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

It fails with this error:

Error "failed calling webhook "mutatevirtualmachines.example.com": Post https://kubemacpool-service.openshift-cnv.svc:443/mutate-virtualmachines?timeout=30s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "webhook-cert-ca")" for field "undefined".

What is very strange is that it seems to be a timing-dependent issue. I tried several times to create the example VM above, and the creation sometimes fails and sometimes works. What is clear, though, is that this is not a single-environment issue.

Verified with kubemacpool-mac-controller-manager-5bff984648-5wg7p, registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubemacpool:v2.2.0-4. Tried to create a VM numerous times and haven't seen the issue. Thanks.

Thanks, both of you! :)

The issue still exists in registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubemacpool:v2.2.0-4.

As noted in c#17, the issue occurs for a certain period and is gone once the kubemacpool pod is re-created.

There is a chance that this happens because of OpenShift's internal certificate rotation. OpenShift also rotates the CA of the webhook. Every component that listens for webhook calls from the apiserver has to deal with the scenario where the apiserver CA certificate is rotated. We are doing https://github.com/kubevirt/kubevirt/pull/2234 in KubeVirt; maybe that is also what kubemacpool has to do. That it starts working again after the pods are restarted makes sense. Also, that subsequent calls "sometimes" work and "sometimes" do not is a hint that there are two pods behind the service: one of them was restarted recently, so it fetched the new CA, while the older pod still accepts only the old certificate and therefore refuses the webhook requests.
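For reference, one generic way for a webhook server to survive a CA rotation without a pod restart is to resolve its TLS configuration per handshake instead of loading the CA bundle once at startup, for example via Go's tls.Config.GetConfigForClient. The sketch below illustrates that idea only; the file paths and the reload helper are assumptions, not kubemacpool's or KubeVirt's actual implementation:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

// clientCAConfig re-reads the CA bundle from disk and returns a tls.Config
// that verifies client certificates against the *current* CA. Because it runs
// on every handshake (via GetConfigForClient), a rotated CA is picked up
// without restarting the process. The path is an assumption for this sketch.
func clientCAConfig(cert tls.Certificate, caPath string) (*tls.Config, error) {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		ClientAuth:   tls.VerifyClientCertIfGiven,
	}, nil
}

func main() {
	cert, err := tls.LoadX509KeyPair("/etc/webhook/certs/tls.crt", "/etc/webhook/certs/tls.key")
	if err != nil {
		log.Fatal(err)
	}
	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			// Evaluated per handshake, so a CA rotation takes effect immediately.
			GetConfigForClient: func(*tls.ClientHelloInfo) (*tls.Config, error) {
				return clientCAConfig(cert, "/etc/webhook/certs/ca.crt")
			},
		},
	}
	http.HandleFunc("/mutate-virtualmachines", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // real admission logic omitted
	})
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```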
It looks like the VM mutating webhook has this issue now; the pod mutating webhook was already changed to Ignore, but the VM one was not.

Thanks Roman, that's really helpful. We are planning to simply set the failure policy to 'Ignore' to avoid this issue in 2.2. Although MAC allocation is important, it is not that critical: if a VM does not get a MAC from the pool but a random one instead, we should still be quite safe. For the next release, we plan to switch to externally managed certificates and will handle rotation as part of that.

Petr, just to be sure you understand the impact: this means that kubemacpool will sooner or later fail on every deployment. External certificate management will not help; it is an issue within the kubemacpool binary itself. It needs to watch for the OpenShift certificate rotation and update internally to the new CA.

Based on Roman's comment, I'm adding the Blocker? flag.

Nelly, this indeed should be a blocker.

Roman, with 'Ignore' it would fail silently, without breaking the VM CREATE operation. That would result in a VM with a random MAC (not allocated from the pool), which we can live with. With cert-manager, we will change the KMP code so it re-reads the secret when it changes and make sure we respect the manager. Right now we only create a self-signed certificate internally and never touch it again.

So, to reiterate: the issue is not the self-signed certificate. The issue is that OpenShift renews its webhook CA, so KMP thinks the client certificate from the apiserver is invalid (it is signed by the new CA, while KMP still only accepts the old CA). Another cert-rotation mechanism for CNV will not fix that.

Since we expect VMs to keep their MAC address between restarts (so as not to confuse NetworkManager, for instance), and we would not set MAC addresses when the VM gets created, this sounds like a blocker to me. Forgot to add: I don't think that setting the policy to `Ignore` is a good enough workaround.

This bug is not new and not a regression. We released CNV 2.1 with it, and we may have to release CNV 2.2 with it again. The MAC pool is important, but I do not think that not having it is a blocker. I am modifying the title of the bug to reflect our current understanding of it.

Even then, it is still a blocker. We just need to do a simple "solution" with Ignore instead of proper CA rotation handling. I'm adding back the blocker flag. We should have a patch for Ignore available soon.
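For context, the 'Ignore' failure policy discussed above is set on the webhook registration itself: when the apiserver cannot reach or verify the webhook, the request is admitted without mutation instead of being rejected. The sketch below shows roughly how such a registration looks with the admissionregistration v1beta1 Go types; the rule details and object names are illustrative assumptions (the webhook and service names are taken from the error messages in this bug), not the actual kubemacpool manifest:

```go
package main

import (
	"fmt"

	admissionregistrationv1beta1 "k8s.io/api/admissionregistration/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With FailurePolicy=Ignore, a failed or unverifiable webhook call lets the
	// VM CREATE proceed (possibly without a pool-allocated MAC) instead of
	// failing it -- the trade-off debated above.
	failurePolicy := admissionregistrationv1beta1.Ignore
	path := "/mutate-virtualmachines"

	webhook := admissionregistrationv1beta1.MutatingWebhook{
		Name:          "mutatevirtualmachines.example.com",
		FailurePolicy: &failurePolicy,
		ClientConfig: admissionregistrationv1beta1.WebhookClientConfig{
			Service: &admissionregistrationv1beta1.ServiceReference{
				Namespace: "openshift-cnv",
				Name:      "kubemacpool-service",
				Path:      &path,
			},
		},
		Rules: []admissionregistrationv1beta1.RuleWithOperations{{
			Operations: []admissionregistrationv1beta1.OperationType{
				admissionregistrationv1beta1.Create,
			},
			Rule: admissionregistrationv1beta1.Rule{
				APIGroups:   []string{"kubevirt.io"},
				APIVersions: []string{"v1alpha3"},
				Resources:   []string{"virtualmachines"},
			},
		}},
	}

	cfg := admissionregistrationv1beta1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "kubemacpool-mutator"},
		Webhooks:   []admissionregistrationv1beta1.MutatingWebhook{webhook},
	}
	fmt.Printf("%s: webhooks[0].failurePolicy=%s\n", cfg.Name, *cfg.Webhooks[0].FailurePolicy)
}
```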
Cannot reproduce the issue anymore, and no certificate-related errors are found in the kubemacpool-mac-controller pods.

Client Version: 4.3.0-0.nightly-2020-01-14-043441
Server Version: 4.3.0-0.nightly-2020-01-14-043441
Kubernetes Version: v1.16.2
container-native-virtualization-kubemacpool:v2.2.0-7

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0307