Bug 2033252
Summary: nncp changing its status between "ConfigurationProgressing" and "SuccessfullyConfigured" every few minutes

| Product: | Container Native Virtualization (CNV) | Reporter: | nijin ashok <nashok> |
|---|---|---|---|
| Component: | Networking | Assignee: | Radim Hrazdil <rhrazdil> |
| Status: | CLOSED ERRATA | QA Contact: | Meni Yakove <myakove> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.8.3 | CC: | cnv-qe-bugs, phoracek, rhrazdil, rnetser, sgott, shaselde |
| Target Milestone: | --- | | |
| Target Release: | 4.10.0 | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | kubernetes-nmstate-handler v4.10.0-47 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 2042847 (view as bug list) | Environment: | |
| Last Closed: | 2022-03-16 16:05:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2042847 | | |
Description (nijin ashok, 2021-12-16 11:08:52 UTC)
After some debugging, I suspect that there may be a bug in k8s controller-runtime. I've opened an issue on the controller-runtime GitHub: https://github.com/kubernetes-sigs/controller-runtime/issues/1764

Since the reconcile trigger seems to bypass our filters, I don't see a way to work around this issue on our side, as we can't tell what the origin of the reconcile request is.

*** Bug 2037240 has been marked as a duplicate of this bug. ***

Failed QE. nmstate-handler version is: v4.10.0-45

```
$ oc get nnce -w
NAME                                                                  STATUS
c01-rn-410-7-wnjbz-master-0.c01-rn-410-7-wnjbz-worker-0-5ctss         Available
c01-rn-410-7-wnjbz-master-1.c01-rn-410-7-wnjbz-worker-0-5ctss         Available
c01-rn-410-7-wnjbz-master-2.c01-rn-410-7-wnjbz-worker-0-5ctss         Available
c01-rn-410-7-wnjbz-worker-0-5ctss.c01-rn-410-7-wnjbz-worker-0-5ctss   Available
c01-rn-410-7-wnjbz-worker-0-dsjq2.c01-rn-410-7-wnjbz-worker-0-5ctss   Available
c01-rn-410-7-wnjbz-worker-0-jp8t7.c01-rn-410-7-wnjbz-worker-0-5ctss   Available
c01-rn-410-7-wnjbz-worker-0-5ctss.c01-rn-410-7-wnjbz-worker-0-5ctss
c01-rn-410-7-wnjbz-worker-0-5ctss.c01-rn-410-7-wnjbz-worker-0-5ctss
c01-rn-410-7-wnjbz-worker-0-5ctss.c01-rn-410-7-wnjbz-worker-0-5ctss   Progressing
c01-rn-410-7-wnjbz-worker-0-5ctss.c01-rn-410-7-wnjbz-worker-0-5ctss   Available
```

My bad, I moved it ON_QA preemptively. The patch did not get from M/S to D/S due to a CI failure. It should be resolved now. I will move this back ON_QA once the new build appears in errata.
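For context, the reconcile filters referred to above are typically controller-runtime update predicates that compare `metadata.generation`, which the API server bumps only when an object's spec changes; status-only updates (such as the flapping NNCP condition here) should therefore not trigger a reconcile. A minimal sketch of that idea in Python, purely illustrative: the actual kubernetes-nmstate handler is Go code using controller-runtime, and `should_reconcile` is a hypothetical name.

```python
# Illustrative sketch only (not the actual handler code): a generation-based
# update filter, mimicking controller-runtime's GenerationChangedPredicate.
# metadata.generation changes only on spec edits, so status-only updates
# are filtered out and do not cause a reconcile.

def should_reconcile(old_obj: dict, new_obj: dict) -> bool:
    """Reconcile only when the object's generation (i.e. its spec) changed."""
    return old_obj["metadata"]["generation"] != new_obj["metadata"]["generation"]

old = {"metadata": {"generation": 3},
       "status": {"phase": "SuccessfullyConfigured"}}
status_only = {"metadata": {"generation": 3},
               "status": {"phase": "ConfigurationProgressing"}}
spec_change = {"metadata": {"generation": 4},
               "status": {"phase": "SuccessfullyConfigured"}}

print(should_reconcile(old, status_only))  # False: status flapped, spec unchanged
print(should_reconcile(old, spec_change))  # True: spec edit bumped the generation
```

The bug discussed here is that some reconcile requests appeared to reach the controller without passing through such a filter, so the policy status kept cycling even with no spec change.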
Verified with nmstate-handler version v4.10.0-47, using:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: <node name>
spec:
  desiredState:
    interfaces:
      - bridge:
          options:
            stp:
              enabled: false
          port:
            - name: ens9
        ipv4:
          auto-dns: true
          dhcp: false
          enabled: false
        ipv6:
          auto-dns: true
          autoconf: false
          dhcp: false
          enabled: false
        name: br1test
        state: up
        type: linux-bridge
  nodeSelector:
    kubernetes.io/hostname: <node name>
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0947