Bug 2002461
| Summary: | DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy |
|---|---|
| Product: | OpenShift Container Platform |
| Component: | Networking |
| Networking sub component: | DNS |
| Version: | 4.9 |
| Target Release: | 4.10.0 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Status: | CLOSED ERRATA |
| Severity: | medium |
| Priority: | high |
| Reporter: | Miciah Dashiel Butler Masters <mmasters> |
| Assignee: | Miciah Dashiel Butler Masters <mmasters> |
| QA Contact: | jechen <jechen> |
| CC: | aos-bugs, jechen |
| Doc Type: | Bug Fix |
| Type: | Bug |
| Last Closed: | 2022-03-10 16:08:57 UTC |
| Bug Blocks: | 2002621 |

Doc Text:

Cause: When the DNS operator reconciles its operands, the operator gets the cluster DNS service object from the API to determine whether the operator needs to create or update the service. If the service already exists, the operator compares it with what the operator expects in order to determine whether an update is needed. Kubernetes 1.22, on which OpenShift 4.9 is based, introduced a new spec.internalTrafficPolicy API field for services. The operator leaves this field empty when it creates the service, but the API sets a default value for it. The operator was observing this default value and trying to update the field back to the empty value.

Consequence: The operator's update logic would keep trying to revert the default value that the API set for the service's internal traffic policy.

Fix: When comparing services to determine whether an update is required, the operator now treats the empty value and the default value for spec.internalTrafficPolicy as equal.

Result: The operator no longer spuriously tries to update the cluster DNS service when the API sets a default value for the service's spec.internalTrafficPolicy field.
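The fix described in the Doc Text above amounts to normalizing the new spec.internalTrafficPolicy field before comparing the current (API-defaulted) service against the one the operator expects. The sketch below is illustrative, not the operator's actual code: it uses a simplified stand-in for corev1.ServiceSpec, modeling only the one relevant field. (In Kubernetes, the default the API server applies is "Cluster", i.e. corev1.ServiceInternalTrafficPolicyCluster.)

```go
package main

import "fmt"

// serviceSpec is a simplified stand-in for corev1.ServiceSpec; only the
// field relevant to this bug is modeled.
type serviceSpec struct {
	InternalTrafficPolicy string
}

// internalTrafficPolicyChanged reports whether the field differs between the
// current spec (as defaulted by the API server) and the expected spec (as
// generated by the operator), treating the empty value and the API default
// "Cluster" as equal so the operator does not fight the API's defaulting.
func internalTrafficPolicyChanged(current, expected serviceSpec) bool {
	normalize := func(p string) string {
		if p == "" {
			return "Cluster" // the default the API server applies
		}
		return p
	}
	return normalize(current.InternalTrafficPolicy) != normalize(expected.InternalTrafficPolicy)
}

func main() {
	current := serviceSpec{InternalTrafficPolicy: "Cluster"} // as returned by the API
	expected := serviceSpec{}                                // operator leaves the field empty
	fmt.Println(internalTrafficPolicyChanged(current, expected)) // false: no spurious update
}
```

Without the normalization step, a naive field-by-field comparison would report a difference on every reconcile, producing the endless "updated dns service" loop this bug describes.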
Description
Miciah Dashiel Butler Masters
2021-09-08 21:49:44 UTC
Verified in 4.10.0-0.nightly-2021-09-10-083647:

```
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2021-09-10-083647   True        False         8m46s   Cluster version is 4.10.0-0.nightly-2021-09-10-083647
```

After deleting the dns-operator pod:

```
$ oc -n openshift-dns-operator delete pods -l name=dns-operator
pod "dns-operator-598b8b6cc7-vt58d" deleted
```

The dns-operator pod was recreated:

```
$ oc -n openshift-dns-operator get pod
NAME                            READY   STATUS    RESTARTS   AGE
dns-operator-598b8b6cc7-xnvc2   2/2     Running   0          19m
```

Check the log again:

```
$ oc -n openshift-dns-operator logs -c dns-operator deploy/dns-operator
I0913 13:07:22.257263       1 request.go:668] Waited for 1.030837528s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/quota.openshift.io/v1?timeout=32s
time="2021-09-13T13:07:23Z" level=info msg="reconciling request: /default"
time="2021-09-13T13:07:23Z" level=info msg="reconciling request: /default"
```

We do not see "updated dns service" after 19 minutes; the issue is fixed.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056