Bug 2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
Summary: The displayed ipv6 address of a dns upstream should be case sensitive
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.10.0
Assignee: Sherine Khoury
QA Contact: Shudi Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-13 09:39 UTC by Shudi Li
Modified: 2022-08-04 22:39 UTC (History)
3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-10 16:33:22 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-dns-operator pull 309 0 None open Bug 2031699: Fix CoreDNS config ipv6 addresses should be always upper… 2022-01-03 14:26:19 UTC
Red Hat Product Errata RHSA-2022:0056 0 None None None 2022-03-10 16:33:35 UTC

Description Shudi Li 2021-12-13 09:39:35 UTC
Description of problem:
The displayed IPv6 address of a DNS upstream is shown exactly as it was entered when editing the DNS operator. For example, an input of 1001:AAAA:bbbb:cCcC::2222 should instead be normalized to 1001:AAAA:BBBB:CCCC::2222 or 1001:aaaa:bbbb:cccc::2222.

OpenShift release version:
- OCP 4.10.0

Cluster Platform:


How reproducible:
Edit the DNS operator and add an upstream with a mixed-case IPv6 address such as 1001:AAAA:bbbb:cCcC::2222, then check the address in the DNS operator, in the dns-default config map, and in the CoreDNS Corefile.

Steps to Reproduce (in detail):
1. Edit the dns operator, add an upstream with IPv6 address 1001:AAAA:BBBB:CCCC::2222, save and exit
% oc edit dns.operator/default
upstreamResolvers:
    policy: Sequential
    upstreams:
    - port: 53
      type: SystemResolvConf
    - address: 1001:AAAA:bbbb:cCcC::2222
      port: 5353
      type: Network

2. check upstreamResolvers in the dns operator
% oc get dns.operator/default -oyaml | grep upstreamResolvers -A7
  upstreamResolvers:
    policy: Sequential
    upstreams:
    - port: 53
      type: SystemResolvConf
    - address: 1001:AAAA:bbbb:cCcC::2222
      port: 5353
      type: Network
%

3. check upstreamResolvers in the default dns config map
% oc -n openshift-dns get cm dns-default -oyaml | grep forward -A2          
        forward . /etc/resolv.conf [1001:AAAA:bbbb:cCcC::2222]:5353 {
            policy sequential
        }
%

Actual results:
The mixed-case address 1001:AAAA:bbbb:cCcC::2222 is displayed exactly as entered.

Expected results:
The address is displayed in a normalized case, either 1001:AAAA:BBBB:CCCC::2222 or 1001:aaaa:bbbb:cccc::2222.

Impact of the problem:


Additional info:

Comment 1 Miciah Dashiel Butler Masters 2021-12-14 17:18:00 UTC
Marking as blocker- because the only issue appears to be an aesthetic one.  I'm also lowering the severity to low as the impact is pretty minor.  

Assigning to Sherine, who is working on a fix for this.

Comment 4 Shudi Li 2022-01-06 01:36:50 UTC
Verified it with 4.10.0-0.nightly-2022-01-05-181126


1.
% oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-05-181126   True        False         38m     Cluster version is 4.10.0-0.nightly-2022-01-05-181126
%

2. Edit the DNS operator and add IPv6 upstreams as shown below:
% oc edit dns.operator/default
  upstreamResolvers:
    policy: Sequential
    upstreams:
    - port: 53
      type: SystemResolvConf
    - address: 1001::aAbc
      port: 53
      type: Network
    - address: 1001:AAAA:bbbb:cCcC::2222
      port: 53
      type: Network
    - address: 1001::dddd
      port: 5353
      type: Network
    - address: 1001::FFFF
      port: 5353
      type: Network

3. check it in the config map
% oc -n openshift-dns get cm dns-default -o yaml | more
apiVersion: v1
data:
  Corefile: |
    .:5353 {
        bufsize 512
        errors
        log . {
            class error
        }
        health {
            lameduck 20s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus 127.0.0.1:9153
        forward . /etc/resolv.conf [1001::AABC]:53 [1001:AAAA:BBBB:CCCC::2222]:53 [1001::DDDD]:5353 [1001::FFFF]:5353 {
            policy sequential
        }
        cache 900 {
            denial 9984 30
% 

4. check it in one dns pod
% oc -n openshift-dns rsh dns-default-ftzl7
Defaulted container "dns" out of: dns, kube-rbac-proxy
sh-4.4# cat /etc/coredns/Corefile 
.:5353 {
    bufsize 512
    errors
    log . {
        class error
    }
    health {
        lameduck 20s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus 127.0.0.1:9153
    forward . /etc/resolv.conf [1001::AABC]:53 [1001:AAAA:BBBB:CCCC::2222]:53 [1001::DDDD]:5353 [1001::FFFF]:5353 {
        policy sequential
    }
    cache 900 {
        denial 9984 30
    }
    reload
}
sh-4.4#

Comment 7 errata-xmlrpc 2022-03-10 16:33:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

