Bug 2092509 - Invalid memory address error if non-existent caBundle is configured in DNS-over-TLS using ForwardPlugins
Summary: Invalid memory address error if non-existent caBundle is configured in DNS-over-TLS using ForwardPlugins
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.11.0
Assignee: Chad Scribner
QA Contact: Melvin Joseph
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-01 17:31 UTC by Melvin Joseph
Modified: 2022-08-10 11:16 UTC
CC: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-10 11:15:48 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Github openshift cluster-dns-operator pull 321 (Draft): "Bug 2092509: Append an error when a non-existent configmap is configured for DNS-over-TLS" (last updated 2022-06-02 18:19:15 UTC)
Red Hat Product Errata RHSA-2022:5069 (last updated 2022-08-10 11:16:04 UTC)

Description Melvin Joseph 2022-06-01 17:31:23 UTC
Description of problem:

An invalid memory address error occurs when a non-existent caBundle name is configured in the TLS configuration for DNS over TLS.

I was testing the DNS-over-TLS configuration and, when I mistakenly configured a non-existent CA bundle, the openshift-dns-operator pod went into CrashLoopBackOff.

OpenShift release version:
OCP 4.11

Cluster Platform:
All platforms

How reproducible:


Steps to Reproduce (in detail):
1. oc create -f centos-pod.yaml
% cat centos-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: centos-pod
  name: centos-pod
spec:
  containers:
  - image: quay.io/hongli/centos-netools
    name: centos-pod
2. oc create -f dns-server.yaml
% cat dns-server.yaml          
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: dnssrv-pod
  name: dnssrv-pod
spec:
  containers:
  - image: quay.io/shudili/ubuntu-dns-srv:v1.0
    name: dnssrv-pod
3. oc edit dns.operator and add the config below:
spec:
  logLevel: Normal
  nodePlacement: {}
  operatorLogLevel: Normal
  servers:
  - forwardPlugin:
      policy: Random
      transportConfig:
        tls:
          caBundle:
            name: non-existing-ca-bundle-config
          serverName: dns.mytest.ocp
        transport: TLS
      upstreams:
      - <dns-server ip>
    name: test
    zones:
    - mytest.ocp
dns.operator.openshift.io/default edited
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod
NAME                           READY   STATUS   RESTARTS     AGE
dns-operator-d5d68465f-6qg5x   1/2     Error    1 (8s ago)   103m
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod
NAME                           READY   STATUS   RESTARTS      AGE
dns-operator-d5d68465f-6qg5x   1/2     Error    1 (11s ago)   103m
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator logs dns-operator-d5d68465f-6qg5x  -c dns-operator
I0601 17:19:14.490401       1 request.go:665] Waited for 1.024152494s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/autoscaling/v1?timeout=32s
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod                                           
NAME                           READY   STATUS   RESTARTS      AGE
dns-operator-d5d68465f-6qg5x   1/2     Error    2 (18s ago)   103m
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod                                           
NAME                           READY   STATUS             RESTARTS      AGE
dns-operator-d5d68465f-6qg5x   1/2     CrashLoopBackOff   3 (35s ago)   104m
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator logs dns-operator-d5d68465f-6qg5x  -c dns-operator
I0601 17:19:14.490401       1 request.go:665] Waited for 1.024152494s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/autoscaling/v1?timeout=32s
time="2022-06-01T17:19:16Z" level=info msg="reconciling request: /default"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x12e04c5]

goroutine 563 [running]:
github.com/openshift/cluster-dns-operator/pkg/operator/controller.(*reconciler).ensureCABundleConfigMaps(0xc0000c58f0, 0xc0005ce000)
	/dns-operator/pkg/operator/controller/controller_cabundle_configmap.go:71 +0xae5
github.com/openshift/cluster-dns-operator/pkg/operator/controller.(*reconciler).ensureDNS(0xc0000c58f0?, 0xc0005ce000)
	/dns-operator/pkg/operator/controller/controller.go:403 +0xba
github.com/openshift/cluster-dns-operator/pkg/operator/controller.(*reconciler).Reconcile(0xc0000c58f0, {0x1858ec8, 0xc0003d8db0}, {{{0x0?, 0x154f5c0?}, {0xc000042b16?, 0x30?}}})
	/dns-operator/pkg/operator/controller/controller.go:191 +0xb51
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc00014a0b0, {0x1858ec8, 0xc0003d8d50}, {{{0x0?, 0x154f5c0?}, {0xc000042b16?, 0x4041f4?}}})
	/dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114 +0x27e
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00014a0b0, {0x1858e20, 0xc0006c04c0}, {0x149edc0?, 0xc00034e8a0?})
	/dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311 +0x349
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00014a0b0, {0x1858e20, 0xc0006c04c0})
	/dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	/dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:223 +0x31c
melvinjoseph@mjoseph-mac Downloads % 

Actual results:


Expected results:
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod
NAME                           READY   STATUS    RESTARTS   AGE
dns-operator-d5d68465f-6qg5x   2/2     Running   0          100m
No errors; if a non-existent CA bundle is configured, a warning should be given.

Impact of the problem:


Additional info:



** Please do not disregard the report template; filling the template out as much as possible will allow us to help you. Please consider attaching a must-gather archive (via `oc adm must-gather`). Please review must-gather contents for sensitive information before attaching any must-gathers to a bugzilla report.  You may also mark the bug private if you wish.

Comment 1 Miciah Dashiel Butler Masters 2022-06-02 15:45:34 UTC
ensureCABundleConfigMaps needs to check for haveSource == nil here: https://github.com/openshift/cluster-dns-operator/blob/d50df32df68f53c1d47db8f5e51a8b27c402f278/pkg/operator/controller/controller_cabundle_configmap.go#L65-L71

I'm marking this BZ as not a blocker because it only breaks the operator (not the operand, i.e., cluster DNS service), it only breaks if the user tries to configure the new DNS-over-TLS feature, it only breaks if the user specifies a non-existent configmap, and it should resolve itself if the user reverts the problematic config or creates the missing configmap.  However, this BZ can badly impact UX for the new feature, so I'm marking it as high severity.

Comment 2 Miciah Dashiel Butler Masters 2022-06-02 15:46:55 UTC
(In reply to Miciah Dashiel Butler Masters from comment #1)
> ensureCABundleConfigMaps needs to check for haveSource == nil

Whoops, haveSource is a Boolean, but anyway, it needs to be checked in case it is false (in which case source is nil).
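
For illustration, a minimal sketch of the kind of guard described above. The function signature, type, and variable names here are assumed for the example and are not copied from the actual cluster-dns-operator code; see the linked pull request 321 for the real change.

package main

import "fmt"

// configMap is a stand-in for corev1.ConfigMap; the real operator works with
// the Kubernetes API type. Names in this sketch are illustrative only.
type configMap struct {
	Name string
	Data map[string]string
}

// ensureCABundleConfigMap sketches the guard described in comments 1 and 2:
// when the source configmap does not exist (haveSource is false and source is
// nil), return an error instead of dereferencing the nil pointer.
func ensureCABundleConfigMap(haveSource bool, source *configMap, name string) error {
	if !haveSource || source == nil {
		return fmt.Errorf("source ca bundle configmap %s does not exist", name)
	}
	// Only reached when source is non-nil, so this dereference is safe.
	fmt.Printf("syncing ca bundle configmap %q with %d keys\n", source.Name, len(source.Data))
	return nil
}

func main() {
	// With the missing configmap, the guarded version reports an error instead
	// of panicking with "invalid memory address or nil pointer dereference".
	if err := ensureCABundleConfigMap(false, nil, "non-existing-ca-bundle-config"); err != nil {
		fmt.Println("warning:", err)
	}
}

The warning lines that later appear in the operator logs in comment 7 ("source ca bundle configmap ca-bundle-config does not exist") are consistent with this kind of guard.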

Comment 3 Melvin Joseph 2022-06-07 13:18:36 UTC
Team,
I checked the PR in pre-merge verification. I can see there are no errors in the logs.
But when we give the non-existent ca-bundle config, to my surprise the configuration is accepted; however, the config map is not showing any spec details.

melvinjoseph@mjoseph-mac Downloads % oc edit dns.operator default 
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2022-06-07T11:57:03Z"
  finalizers:
  - dns.operator.openshift.io/dns-controller
  generation: 5
  name: default
  resourceVersion: "50626"
  uid: bc46a5d9-da5e-4222-ba42-a01f38edbd68
status:
  clusterDomain: cluster.local
  clusterIP: 172.30.0.10
  conditions:
  - lastTransitionTime: "2022-06-07T13:06:39Z"
    message: Enough DNS pods are available, and the DNS service has a cluster IP address.
    reason: AsExpected
    status: "False"
    type: Degraded
  - lastTransitionTime: "2022-06-07T13:06:41Z"
    message: All DNS and node-resolver pods are available, and the DNS service has
      a cluster IP address.
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2022-06-07T12:14:38Z"
    message: The DNS daemonset has available pods, and the DNS service has a cluster
      IP address.
    reason: AsExpected
    status: "True"
    type: Available
  - lastTransitionTime: "2022-06-07T11:57:03Z"
    message: DNS Operator can be upgraded
    reason: AsExpected
    status: "True"
    type: Upgradeable


melvinjoseph@mjoseph-mac Downloads % oc get cm dns-default -n openshift-dns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:5353 {
        bufsize 512
        errors
        log . {
            class error
        }
        health {
            lameduck 20s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus 127.0.0.1:9153
        forward . /etc/resolv.conf {
            policy sequential
        }
        cache 900 {
            denial 9984 30
        }
        reload
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-06-07T11:57:03Z"
  labels:
    dns.operator.openshift.io/owning-dns: default
  name: dns-default
  namespace: openshift-dns
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    controller: true
    kind: DNS
    name: default
    uid: bc46a5d9-da5e-4222-ba42-a01f38edbd68
  resourceVersion: "49351"
  uid: ba7b3751-73c0-4d7d-9b6e-a2387f2e408c


melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod          
NAME                            READY   STATUS    RESTARTS   AGE
dns-operator-5c444497d5-d5ltr   2/2     Running   0          81m
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator logs dns-operator-5c444497d5-d5ltr  -c dns-operator
<...snippet...>
time="2022-06-07T13:06:28Z" level=info msg="reconciling request: /default"
time="2022-06-07T13:06:39Z" level=info msg="reconciling request: /default"
time="2022-06-07T13:06:39Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"172.30.0.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 21, 0, time.Local), Reason:\"MaxUnavailableDNSPodsExceeded\", Message:\"Too many DNS pods are unavailable (2 > 1 max unavailable).\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 28, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 4 available DNS pods, want 6.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 12, 14, 38, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 11, 57, 3, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"172.30.0.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 39, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 39, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 5 available DNS pods, want 6.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 12, 14, 38, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 11, 57, 3, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}"
time="2022-06-07T13:06:39Z" level=info msg="reconciling request: /default"
time="2022-06-07T13:06:41Z" level=info msg="reconciling request: /default"
time="2022-06-07T13:06:41Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"172.30.0.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 39, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 39, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 5 available DNS pods, want 6.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 12, 14, 38, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 11, 57, 3, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"172.30.0.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 39, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2022, time.June, 7, 13, 6, 41, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 12, 14, 38, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 7, 11, 57, 3, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}"
time="2022-06-07T13:06:41Z" level=info msg="reconciling request: /default"
time="2022-06-07T13:06:55Z" level=info msg="reconciling request: /default"

Comment 4 Chad Scribner 2022-06-07 15:45:48 UTC
> config map is not showing any spec details

That's normal. It looks like what's abnormal is the missing spec on the DNS object. Is this 100% reproducible?

Comment 5 Melvin Joseph 2022-06-07 16:40:36 UTC
I tried today with cluster-bot on your code using pre-merge verification. There it was reproduced; maybe I will try one more time tomorrow morning.

Comment 7 Melvin Joseph 2022-06-08 12:26:58 UTC
Hi Chad,

As you mentioned, the issue is not reproducible with cluster-bot now.
But today when I tested using the UpstreamResolver option, I can see the CA bundle is not taken from the system certificates and the domain names are not resolving.

oc edit dns.operator default 
spec:
  logLevel: Normal
  nodePlacement: {}
  operatorLogLevel: Normal
  upstreamResolvers:
    policy: Random
    transportConfig:
      tls:
        caBundle:
          name: ca-bundle-config
        serverName: dns.mytest.ocp
      transport: TLS
    upstreams:
    - address: 10.129.2.16
      port: 853
      type: Network
status:

melvinjoseph@mjoseph-mac Downloads % oc get cm dns-default -n openshift-dns -o yaml                             
apiVersion: v1
data:
  Corefile: |
    .:5353 {
        bufsize 512
        errors
        log . {
            class error
        }
        health {
            lameduck 20s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus 127.0.0.1:9153
        forward . tls://10.129.2.16:853 {
            tls_servername dns.mytest.ocp
            tls
            policy random
        }
        cache 900 {
            denial 9984 30
        }
        reload
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-06-08T09:11:17Z"
  labels:
    dns.operator.openshift.io/owning-dns: default
  name: dns-default
  namespace: openshift-dns
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    controller: true
    kind: DNS
    name: default
    uid: 1a24c338-933e-4881-add5-c1901a2bc7fa
  resourceVersion: "82601"
  uid: 33407b26-ac18-44c0-96b4-e35ae08649d9

melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator logs dns-operator-f9d97d-dtxr9 -c dns-operator
time="2022-06-08T12:13:34Z" level=info msg="reconciling request: /default"
time="2022-06-08T12:13:34Z" level=warning msg="source ca bundle configmap ca-bundle-config does not exist"
time="2022-06-08T12:13:34Z" level=warning msg="failed to get destination ca bundle configmap ca-ca-bundle-config: configmaps \"ca-ca-bundle-config\" not found"
melvinjoseph@mjoseph-mac Downloads % 
melvinjoseph@mjoseph-mac Downloads % oc rsh centos-pod                                                          
sh-4.4# 
sh-4.4# 
sh-4.4# nslookup www.google.com
Server:		172.30.0.10
Address:	172.30.0.10#53
** server can't find www.google.com.c.openshift-gce-devel-ci.internal: SERVFAIL

sh-4.4# dig

; <<>> DiG 9.11.13-RedHat-9.11.13-6.el8_2.1 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 65518
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod            
NAME                        READY   STATUS    RESTARTS   AGE
dns-operator-f9d97d-dtxr9   2/2     Running   0          3h16m


Is this the intended behaviour? I think the CA should be taken from the system certificates.

Comment 8 Chad Scribner 2022-06-08 16:56:48 UTC
Would you mind creating a separate BZ for that?

Comment 9 Melvin Joseph 2022-06-08 18:02:11 UTC
Sure Chad, I will create a separate bug for the new issue.

Comment 11 Melvin Joseph 2022-06-22 05:56:29 UTC
melvinjoseph@mjoseph-mac Downloads % oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-06-21-151125   True        False         169m    Cluster version is 4.11.0-0.nightly-2022-06-21-151125

melvinjoseph@mjoseph-mac Downloads % oc create -f centos-pod.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "centos-pod" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "centos-pod" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "centos-pod" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "centos-pod" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/centos-pod created
melvinjoseph@mjoseph-mac Downloads % oc create -f dns-server.yaml 
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "dnssrv-pod" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "dnssrv-pod" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "dnssrv-pod" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "dnssrv-pod" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/dnssrv-pod created

melvinjoseph@mjoseph-mac Downloads % oc get po -owide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
centos-pod   1/1     Running   0          26s   10.131.0.29   mjoseph-auto51-p9926-worker-79q9g   <none>           <none>
dnssrv-pod   1/1     Running   0          19s   10.131.0.30   mjoseph-auto51-p9926-worker-79q9g   <none>           <none>

melvinjoseph@mjoseph-mac Downloads % oc edit dns.operator
dns.operator.openshift.io/default edited
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator get pod
NAME                            READY   STATUS    RESTARTS   AGE
dns-operator-548b496b76-js6rh   2/2     Running   0          4h20m
melvinjoseph@mjoseph-mac Downloads % oc -n openshift-dns-operator logs dns-operator-548b496b76-js6rh  -c dns-operator
<.....snip....>
time="2022-06-22T05:35:49Z" level=info msg="updated DNS default status: old: v1.DNSStatus{ClusterIP:\"172.30.0.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2022, time.June, 22, 5, 35, 35, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 22, 5, 35, 37, 0, time.Local), Reason:\"Reconciling\", Message:\"Have 3 available DNS pods, want 5.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 22, 1, 35, 52, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 22, 1, 35, 39, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}, new: v1.DNSStatus{ClusterIP:\"172.30.0.10\", ClusterDomain:\"cluster.local\", Conditions:[]v1.OperatorCondition{v1.OperatorCondition{Type:\"Degraded\", Status:\"False\", LastTransitionTime:time.Date(2022, time.June, 22, 5, 35, 35, 0, time.Local), Reason:\"AsExpected\", Message:\"Enough DNS pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Progressing\", Status:\"False\", LastTransitionTime:time.Date(2022, time.June, 22, 5, 35, 49, 0, time.Local), Reason:\"AsExpected\", Message:\"All DNS and node-resolver pods are available, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Available\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 22, 1, 35, 52, 0, time.Local), Reason:\"AsExpected\", Message:\"The DNS daemonset has available pods, and the DNS service has a cluster IP address.\"}, v1.OperatorCondition{Type:\"Upgradeable\", Status:\"True\", LastTransitionTime:time.Date(2022, time.June, 22, 1, 35, 39, 0, time.Local), Reason:\"AsExpected\", Message:\"DNS Operator can be upgraded\"}}}"
time="2022-06-22T05:35:49Z" level=info msg="reconciling request: /default"
time="2022-06-22T05:35:49Z" level=warning msg="source ca bundle configmap ca-bundle-config does not exist"
time="2022-06-22T05:35:49Z" level=warning msg="failed to get destination ca bundle configmap ca-ca-bundle-config: configmaps \"ca-ca-bundle-config\" not found"
melvinjoseph@mjoseph-mac Downloads % oc get cm dns-default -n openshift-dns -o yaml
apiVersion: v1
data:
  Corefile: |
    # test
    mytest.ocp:5353 {
        prometheus 127.0.0.1:9153
        forward . tls://10.131.0.30 {
            tls_servername dns.mytest.ocp
            tls
            policy random
        }
        errors
        log . {
            class error
        }
        bufsize 512
        cache 900 {
            denial 9984 30
        }
    }
    .:5353 {
        bufsize 512
        errors
        log . {
            class error
        }
        health {
            lameduck 20s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus 127.0.0.1:9153
        forward . /etc/resolv.conf {
            policy sequential
        }
        cache 900 {
            denial 9984 30
        }
        reload
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-06-22T01:35:39Z"
  labels:
    dns.operator.openshift.io/owning-dns: default
  name: dns-default
  namespace: openshift-dns
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    controller: true
    kind: DNS
    name: default
    uid: 36514585-c61d-43da-a088-1df71cfcad4b
  resourceVersion: "104393"
  uid: d691bdc3-0a5b-44cf-a5d7-18a46ec05224
melvinjoseph@mjoseph-mac Downloads % oc rsh centos-pod
sh-4.4# nslookup www.google.com
Server:		172.30.0.10
Address:	172.30.0.10#53

Non-authoritative answer:
Name:	www.google.com
Address: 172.217.14.196
Name:	www.google.com
Address: 2607:f8b0:400a:80a::2004

sh-4.4# netent ahosts www.google.com
sh: netent: command not found
sh-4.4# getent ahosts www.google.com
172.217.14.196  STREAM www.google.com
172.217.14.196  DGRAM  
172.217.14.196  RAW    
2607:f8b0:400a:80a::2004 STREAM 
2607:f8b0:400a:80a::2004 DGRAM  
2607:f8b0:400a:80a::2004 RAW    
sh-4.4# dig google.com

; <<>> DiG 9.11.13-RedHat-9.11.13-6.el8_2.1 <<>> google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64431
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
; COOKIE: a22f19dca7dca7c5 (echoed)
;; QUESTION SECTION:
;google.com.			IN	A

;; ANSWER SECTION:
google.com.		30	IN	A	142.250.217.110

;; Query time: 5 msec
;; SERVER: 172.30.0.10#53(172.30.0.10)
;; WHEN: Wed Jun 22 05:54:08 UTC 2022
;; MSG SIZE  rcvd: 77

sh-4.4# exit
exit


Hence verifying

Comment 13 errata-xmlrpc 2022-08-10 11:15:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

