Bug 1791993 - [IPI baremetal] proxy-based install fails due to mismatch in no_proxy configuration in installer vs. cluster-network-operator
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.4.0
Assignee: Stephen Benjamin
QA Contact: Victor Voronkov
URL:
Whiteboard:
Depends On:
Blocks: 1791995
 
Reported: 2020-01-16 20:15 UTC by Stephen Benjamin
Modified: 2020-05-04 11:25 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1791995 (view as bug list)
Environment:
Last Closed: 2020-05-04 11:24:47 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Priority Status Summary Last Updated
Github openshift installer pull 2939 None closed Bug 1791993: proxy: use explicit list of platforms for metadata addresses 2020-09-09 22:46:34 UTC
Red Hat Product Errata RHBA-2020:0581 None None None 2020-05-04 11:25:19 UTC

Description Stephen Benjamin 2020-01-16 20:15:50 UTC
Description of problem:

The installer creates a manifest for the proxy configuration, automatically
adding platform-specific addresses to NO_PROXY. One of those addresses is
the metadata service, hosted at 169.254.169.254. The installer assumes this
must be done for all platforms other than None or vSphere, whereas the
cluster-network-operator has an explicit list of platforms:

https://github.com/openshift/cluster-network-operator/blob/adaf257b4d63661726443ab2b059a9b4209a02d1/pkg/util/proxyconfig/no_proxy.go#L67-L69

When using a proxy with baremetal IPI, the installer adds this address,
but when the CNO comes up, it does not. The rendered machine configs
therefore differ and installation fails, with the MCO reporting errors
like:

pool master has not progressed to latest configuration: configuration
status for pool master is empty: pool is degraded because nodes fail
with "3 nodes are reporting degraded status on sync": "Node master-1 is
reporting: \"machineconfig.machineconfiguration.openshift.io
\\\"rendered-master-982b8698753da7e31b5f902aa4dc135e\\\" not found\""
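The disagreement can be sketched as follows. This is a minimal illustration, not the actual installer or CNO code: the function names are invented, and the platform lists are assumptions standing in for the real ones (see the linked no_proxy.go for the CNO's actual list).

```go
package main

import "fmt"

const metadataIP = "169.254.169.254"

// installerAddsMetadata mimics the installer's original exclusion-based
// logic: add the metadata address for every platform except None and
// vSphere. Baremetal falls through to the default and gets the address.
func installerAddsMetadata(platform string) bool {
	switch platform {
	case "none", "vsphere":
		return false
	default:
		return true
	}
}

// cnoAddsMetadata mimics the CNO's explicit allow-list of platforms that
// actually host a link-local metadata service (illustrative names only).
// Baremetal is not in the list, so the CNO omits the address.
func cnoAddsMetadata(platform string) bool {
	switch platform {
	case "aws", "gcp", "azure", "openstack":
		return true
	default:
		return false
	}
}

func main() {
	// On baremetal the two components disagree, so the NO_PROXY values
	// (and hence the rendered machine configs) diverge.
	for _, p := range []string{"aws", "baremetal"} {
		fmt.Printf("%s: installer=%v cno=%v\n",
			p, installerAddsMetadata(p), cnoAddsMetadata(p))
	}
}
```

Consistent with the title of the linked fix (installer PR 2939, "use explicit list of platforms for metadata addresses"), the resolution is to make the installer use an explicit inclusion list as well, so both components make the same decision per platform.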



Version-Release number of the following components:

Seen in:
4.3.0-0.nightly-2020-01-11-070223
4.4.0-0.nightly-2020-01-12-221811


How reproducible:

Always

Steps to Reproduce:
1. Install baremetal IPI with a proxy enabled

Actual results:

Installation failure with these errors:

pool master has not progressed to latest configuration: configuration
status for pool master is empty: pool is degraded because nodes fail
with "3 nodes are reporting degraded status on sync": "Node master-1 is
reporting: \"machineconfig.machineconfiguration.openshift.io
\\\"rendered-master-982b8698753da7e31b5f902aa4dc135e\\\" not found\""


Expected results:

Install succeeds

Additional info:

Comment 2 Victor Voronkov 2020-02-20 10:34:35 UTC
Verified in Virtual IPv4 environment on build 4.4.0-0.nightly-2020-02-19-044512

with Squid as a proxy without authentication, running as a container on the hypervisor

using jenkins job https://jenkins-fci-continuous-productization.cloud.paas.psi.redhat.com/job/vvoronko-test-pipeline/

[kni@provisionhost-0 ~]$ cat install-config.yaml
apiVersion: v1
baseDomain: qe.lab.redhat.com
proxy:
  httpProxy: http://192.168.123.1:3128
  httpsProxy: http://192.168.123.1:3128
  noProxy: 172.22.0.0/24
networking:
  networkType: OpenShiftSDN
  machineCIDR: 192.168.123.0/24
metadata:
  name: vvoronko-cluster
...

Deployment finished successfully

Comment 4 errata-xmlrpc 2020-05-04 11:24:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

