Bug 2031699

Summary: The displayed IPv6 address of a DNS upstream should be case-normalized
Product: OpenShift Container Platform
Component: Networking
Networking sub component: DNS
Version: 4.10
Target Release: 4.10.0
Hardware: Unspecified
OS: Unspecified
Severity: low
Priority: low
Status: CLOSED ERRATA
Reporter: Shudi Li <shudili>
Assignee: Sherine Khoury <skhoury>
QA Contact: Shudi Li <shudili>
CC: aos-bugs, hongli, mmasters
Doc Type: No Doc Update
Type: Bug
Last Closed: 2022-03-10 16:33:22 UTC

Description Shudi Li 2021-12-13 09:39:35 UTC
Description of problem:
The displayed IPv6 address of a DNS upstream is shown exactly as it was entered when editing the DNS operator. For example, if you enter 1001:AAAA:bbbb:cCcC::2222, it would be better for it to be normalized to 1001:AAAA:BBBB:CCCC::2222 or 1001:aaaa:bbbb:cccc::2222.
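
For illustration only, a minimal Go sketch of one way such normalization could be done (an assumption, not necessarily the change that shipped): the standard library's net/netip parser accepts any case and renders the canonical lowercase RFC 5952 form, which matches one of the two acceptable spellings above.

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // Mixed-case input, as typed into the DNS operator spec.
    raw := "1001:AAAA:bbbb:cCcC::2222"

    // ParseAddr accepts any case; String() returns the canonical
    // lowercase, zero-compressed form defined by RFC 5952.
    addr, err := netip.ParseAddr(raw)
    if err != nil {
        fmt.Println("not a valid IP address:", err)
        return
    }
    fmt.Println(addr.String()) // 1001:aaaa:bbbb:cccc::2222
}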

OpenShift release version:
- OCP 4.10.0

Cluster Platform:


How reproducible:
Edit the dns operator, add an upstream with a mixed-case IPv6 address such as 1001:AAAA:bbbb:cCcC::2222, then check it in the dns operator, the dns-default config map, and the CoreDNS Corefile.

Steps to Reproduce (in detail):
1. Edit the dns operator, add an upstream with the mixed-case IPv6 address 1001:AAAA:bbbb:cCcC::2222, then save and exit
% oc edit dns.operator/default
upstreamResolvers:
    policy: Sequential
    upstreams:
    - port: 53
      type: SystemResolvConf
    - address: 1001:AAAA:bbbb:cCcC::2222
      port: 5353
      type: Network

2. Check upstreamResolvers in the dns operator
% oc get dns.operator/default -oyaml | grep upstreamResolvers -A7
  upstreamResolvers:
    policy: Sequential
    upstreams:
    - port: 53
      type: SystemResolvConf
    - address: 1001:AAAA:bbbb:cCcC::2222
      port: 5353
      type: Network
%

3. Check the forward configuration in the dns-default config map (see the sketch after these steps for how the [address]:port endpoint could be built)
% oc -n openshift-dns get cm dns-default -oyaml | grep forward -A2          
        forward . /etc/resolv.conf [1001:AAAA:bbbb:cCcC::2222]:5353 {
            policy sequential
        }
%
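
As referenced in step 3, here is a hedged sketch of how the [address]:port endpoint in the forward line could be built. forwardEndpoint is a hypothetical helper, not the operator's actual code: parsing the address first normalizes its case, and net.JoinHostPort adds the brackets an IPv6 literal needs.

package main

import (
    "fmt"
    "net"
    "net/netip"
    "strconv"
)

// forwardEndpoint is a hypothetical helper, not the operator's real code:
// it turns an upstream address and port into the "[address]:port" form
// seen in the Corefile, normalizing the address case along the way.
func forwardEndpoint(address string, port int) (string, error) {
    addr, err := netip.ParseAddr(address)
    if err != nil {
        return "", err
    }
    // JoinHostPort adds the square brackets required for IPv6 literals.
    return net.JoinHostPort(addr.String(), strconv.Itoa(port)), nil
}

func main() {
    ep, err := forwardEndpoint("1001:AAAA:bbbb:cCcC::2222", 5353)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(ep) // [1001:aaaa:bbbb:cccc::2222]:5353
}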

Actual results:
The displayed address is 1001:AAAA:bbbb:cCcC::2222, exactly as entered.

Expected results:
The displayed address is normalized to 1001:AAAA:BBBB:CCCC::2222 or 1001:aaaa:bbbb:cccc::2222.

Impact of the problem:


Additional info:

Comment 1 Miciah Dashiel Butler Masters 2021-12-14 17:18:00 UTC
Marking as blocker- because the only issue appears to be an aesthetic one.  I'm also lowering the severity to low as the impact is pretty minor.  

Assigning to Sherine, who is working on a fix for this.

Comment 4 Shudi Li 2022-01-06 01:36:50 UTC
Verified it with 4.10.0-0.nightly-2022-01-05-181126


1.
% oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-05-181126   True        False         38m     Cluster version is 4.10.0-0.nightly-2022-01-05-181126
%

2. edit the dns operator and add IPv6 upstreams as below:
% oc edit dns.operator/default
  upstreamResolvers:
    policy: Sequential
    upstreams:
    - port: 53
      type: SystemResolvConf
    - address: 1001::aAbc
      port: 53
      type: Network
    - address: 1001:AAAA:bbbb:cCcC::2222
      port: 53
      type: Network
    - address: 1001::dddd
      port: 5353
      type: Network
    - address: 1001::FFFF
      port: 5353
      type: Network

3. check it in the config map
% oc -n openshift-dns get cm dns-default -o yaml | more
apiVersion: v1
data:
  Corefile: |
    .:5353 {
        bufsize 512
        errors
        log . {
            class error
        }
        health {
            lameduck 20s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus 127.0.0.1:9153
        forward . /etc/resolv.conf [1001::AABC]:53 [1001:AAAA:BBBB:CCCC::2222]:53 [1001::DDDD]:5353 [1001::FFFF]:5353 {
            policy sequential
        }
        cache 900 {
            denial 9984 30
% 

4. check it in one dns pod (an equivalence-check sketch follows this output)
% oc -n openshift-dns rsh dns-default-ftzl7
Defaulted container "dns" out of: dns, kube-rbac-proxy
sh-4.4# cat /etc/coredns/Corefile 
.:5353 {
    bufsize 512
    errors
    log . {
        class error
    }
    health {
        lameduck 20s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus 127.0.0.1:9153
    forward . /etc/resolv.conf [1001::AABC]:53 [1001:AAAA:BBBB:CCCC::2222]:53 [1001::DDDD]:5353 [1001::FFFF]:5353 {
        policy sequential
    }
    cache 900 {
        denial 9984 30
    }
    reload
}
sh-4.4#
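
As referenced in step 4, a small illustration (an aside, not part of the QA procedure): the mixed-case spelling entered in the spec and the uppercase spelling rendered in the Corefile above are the same address, which parsing both forms and comparing them confirms.

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // The spelling typed into the DNS operator spec...
    entered, _ := netip.ParseAddr("1001:AAAA:bbbb:cCcC::2222")
    // ...and the spelling rendered in the Corefile above.
    rendered, _ := netip.ParseAddr("1001:AAAA:BBBB:CCCC::2222")

    // netip.Addr values compare by the underlying 128-bit address,
    // so the differing text case does not matter.
    fmt.Println(entered == rendered) // true
}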

Comment 7 errata-xmlrpc 2022-03-10 16:33:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056