Description of problem:
The UI allows a single IP address in the NO_PROXY field, which is not allowed by the service/install-config.

Version-Release number of selected component (if applicable):
Assisted Installer UI library version 1.5.12
Assisted Installer: quay.io/ocpmetal/assisted-installer:0645568814701c122bed3ad9a25c9bcdf30215ef
Assisted Installer Controller: quay.io/ocpmetal/assisted-installer-controller:0645568814701c122bed3ad9a25c9bcdf30215ef
Assisted Installer Service: quay.io/ocpmetal/assisted-service:62f6a12f9790dff75397c65cfd652215ab83a088
Discovery Agent: quay.io/ocpmetal/assisted-installer-agent:edbaff3f6b1343b6e51c64d461923ac592820476

How reproducible:

Steps to Reproduce:
1. Create a cluster and set a proxy for it.
2. In the No Proxy field, add a host, an IP, and a domain: registry.ocp-edge-cluster-0.qe.lab.redhat.com,ocp-edge-cluster-0.qe.lab.redhat.com,fd2e:6f44:5dd8::1,.ocp-edge-cluster-0.qe.lab.redhat.com
3. Start the installation.

Actual results:
Immediately after the cluster installation starts, the UI shows: "Failed generating kubeconfig files for cluster 6c648ad3-f6a5-4c18-abd4-99bca732aa7a: exit status 1."
The cluster install failed with this error in the service logs:

time="2021-03-16T11:26:06Z" level=error msg="level=fatal msg=failed to fetch Master Machines: failed to load asset \"Install Config\": invalid \"install-config.yaml\" file: proxy.noProxy: Invalid value: \"registry.ocp-edge-cluster-0.qe.lab.redhat.com,ocp-edge-cluster-0.qe.lab.redhat.com,fd2e:6f44:5dd8::1,.ocp-edge-cluster-0.qe.lab.redhat.com,fd01::/48,fd02::/112\": each element of noProxy must be a CIDR or domain without wildcard characters, which is violated by element 2 \"fd2e:6f44:5dd8::1\"\n" func="github.com/openshift/assisted-service/internal/ignition.(*installerGenerator).runCreateCommand" file="/go/src/github.com/openshift/origin/internal/ignition/ignition.go:965" go-id=520965 pkg=k8s-job-wrapper request_id=0f4011db-16ca-4577-9338-ea2da829450a

Expected results:
- The service successfully accepts a plain IP address, without CIDR notation, or
- The UI does not allow a single IP in the no_proxy field.

Additional info:
Old OpenShift docs suggest that a plain IP should be supported: https://docs.openshift.com/enterprise/3.2/install_config/http_proxies.html#configuring-no-proxy . I could not find a newer reference for the intended behavior.
According to the OCP docs, the format of no_proxy is: "A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying." https://docs.openshift.com/container-platform/4.6/networking/enable-cluster-wide-proxy.html#enable-cluster-wide-proxy
This seems to be an OpenShift bug, either in the code (more likely) or in the documentation. According to https://docs.openshift.com/container-platform/4.7/installing/installing_bare_metal/installing-bare-metal.html#installation-configure-proxy_installing-bare-metal, `install-config.yaml` may include a `noProxy` entry, which is "A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations." The documentation for both 4.7 and 4.6 states this. However, when I create an `install-config.yaml` that has an IPv6 address listed in `noProxy`, I get:

> ./openshift-install create ignition-configs
> FATAL failed to fetch Kubeconfig Admin Client: failed to load asset "Install Config": invalid "install-config.yaml" file: proxy.noProxy[0]: Invalid value: "1001:db8::1": must be a CIDR or domain, without wildcard characters

This does not happen with IPv6 CIDRs, though; e.g., 1001:db8::0/120 passes fine.
Invoking `openshift-install`/`openshift-baremetal-install` is exactly what the Assisted Installer does in the case described in this bug.
Using an IPv4 address doesn't cause errors either.
The cluster-network-operator does not accept bare IPv6 addresses as valid noProxy entries. Bare IPv4 addresses are accepted only because they happen to pass the regex used to recognize domain names. If the correct behavior is to accept IPv6 addresses, then please open a bug against the installer once the changes are made to cluster-network-operator.
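To illustrate why bare IPv4 addresses slip through while bare IPv6 addresses are rejected, here is a minimal Python sketch of the validation logic described above. The domain regex is an illustrative stand-in, not the operator's actual pattern: an entry is accepted if it is an explicit CIDR (contains a "/") or if it matches a domain-name shape, and a dotted-quad IPv4 address happens to look like a domain name while an IPv6 address with ":" characters does not.

```python
import ipaddress
import re

# Illustrative stand-in for a domain-name regex: optional leading dot,
# then dot-separated labels of letters, digits, and interior hyphens.
DOMAIN_RE = re.compile(
    r"^\.?([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\.)*"
    r"[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?$"
)

def is_valid_no_proxy_entry(entry: str) -> bool:
    """Sketch of the described noProxy validation (not the real code)."""
    if "/" in entry:
        # Explicit CIDR notation: accept if it parses as a network.
        try:
            ipaddress.ip_network(entry, strict=False)
            return True
        except ValueError:
            return False
    # No slash: fall back to the domain regex. "10.0.0.1" matches it
    # by coincidence; "fd2e:6f44:5dd8::1" contains ":" and does not.
    return bool(DOMAIN_RE.match(entry))
```

This reproduces the reported behavior: `fd01::/48` and `10.0.0.1` pass, `.ocp-edge-cluster-0.qe.lab.redhat.com` passes, but the bare IPv6 address `fd2e:6f44:5dd8::1` is rejected.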
The workaround at the moment is to specify any IPv6 address as a /128 CIDR. Dropping the severity, given that there is a workaround.
Verified with 4.10.0-0.nightly-2021-09-15-220746 and passed.

### Reproduced with an old build without the fix and got this error log:

$ oc -n openshift-network-operator logs network-operator-f8478859-s6bjt | grep validate
I0916 02:15:17.504205 1 log.go:184] Failed to validate proxy 'cluster': invalid noProxy: fd2e:6f44:5dd8::1
message: 'The configuration is invalid for proxy ''cluster'' (invalid noProxy: fd2e:6f44:5dd8::1).
message: 'The configuration is invalid for proxy ''cluster'' (invalid noProxy: fd2e:6f44:5dd8::1).
I0916 02:15:17.538812 1 log.go:184] Failed to validate proxy 'cluster': invalid noProxy: fd2e:6f44:5dd8::1

### Verified with 4.10.0-0.nightly-2021-09-15-220746 and got the results below:

$ oc get proxies.config.openshift.io cluster -oyaml
<---snip--->
spec:
  httpProxy: http://user:xxxx@10.0.99.4:3128
  httpsProxy: http://user:xxxx@10.0.99.4:3128
  noProxy: test.no-proxy.com,registry.ocp-edge-cluster-0.qe.lab.redhat.com,ocp-edge-cluster-0.qe.lab.redhat.com,fd2e:6f44:5dd8::1,.ocp-edge-cluster-0.qe.lab.redhat.com
  trustedCA:
    name: ""
status:
  httpProxy: http://user:xxxx@10.0.99.4:3128
  httpsProxy: http://user:xxxx@10.0.99.4:3128
  noProxy: .cluster.local,.ocp-edge-cluster-0.qe.lab.redhat.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.hongli-bv.qe.azure.devcluster.openshift.com,fd2e:6f44:5dd8::1,localhost,ocp-edge-cluster-0.qe.lab.redhat.com,registry.ocp-edge-cluster-0.qe.lab.redhat.com,test.no-proxy.com

$ oc -n openshift-network-operator logs network-operator-7b6bf9c59c-5xkfg | grep validate
(no output)

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2021-09-15-220746   True        False         54m     Cluster version is 4.10.0-0.nightly-2021-09-15-220746
Hi, if there is anything that customers should know about this bug, or if there are any important workarounds that should be outlined in the bug fixes section of the OpenShift Container Platform 4.10 release notes, please update the Doc Type and Doc Text fields. If not, can you please mark it as "no doc update"? Thanks!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056