Description of problem:
A customer has reported an issue with "openstack baremetal node provide <id>". The deployment has clean_nodes enabled, so ironic first tries to clean the baremetal node; however, the cleaning process fails and the node ends up in the "clean failed" state. While debugging the issue, we found that an http request sent from ironic-conductor to the overcloud nodes during the node cleaning process was directed to the http proxy. As a result, ironic gets the wrong response (from the http proxy), which causes a JSON decode error in ironic.

In this deployment the http proxy settings are applied in /etc/environment according to our current guide:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/director_installation_and_usage/installing-the-undercloud#considerations-when-running-the-undercloud-with-a-proxy

These settings were baked into the Env parameter of the podman containers, so all processes inside the containers use those environment variables. We could avoid the problem by adding these IPs to no_proxy, but that is not a feasible option in a large deployment, since we would need to list every IP in the DHCP range of the provisioning networks.

IMO the http proxy settings should be filtered out for the ironic-conductor container (and some other containers if needed), since it sends http requests not only to the endpoint URL but also to other IPs.
# AFAIK we don't need the http proxy except for the mistral containers, which execute container image prepare,
# so we might be able to filter out http_proxy and the other options for most containers.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Set the http proxy in /etc/environment
2. Install the undercloud with clean_nodes=True
3. Import baremetal nodes
4. Run "openstack overcloud node introspect --all-manageable"
5. Run "openstack baremetal node provide <id>"

Actual results:
The baremetal node ends up in the "clean failed" state

Expected results:
The baremetal node becomes available

Additional info:
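To illustrate the filtering idea, here is a minimal sketch (the container name `ironic_conductor` and the proxy address are assumptions, not taken from the customer environment). Proxy variables baked into a container's Env can be inspected with `podman exec <container> env`, and `env -u` shows how a single variable can be stripped from a child process's environment:

```shell
# Inspect which proxy variables a container inherited (container name is
# an assumption; run on the undercloud host):
#   podman exec ironic_conductor env | grep -i _proxy

# `env -u NAME cmd` runs cmd with NAME removed from its environment.
# Here a throwaway proxy value is set for `env` itself, then stripped
# before the child shell runs:
http_proxy=http://proxy.example.com:3128 \
    env -u http_proxy sh -c 'echo "http_proxy=[${http_proxy:-unset}]"'
# prints: http_proxy=[unset]
```

The same kind of selective unsetting, applied when the container's command is launched, would let most services ignore the host-wide proxy while the image prepare path keeps it.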
The usage of /etc/environment is not recommended. Is there a specific set of services that need proxy access?

*** This bug has been marked as a duplicate of bug 1916070 ***
@Alex As I mentioned in bz 1916070, we need to use /etc/environment on the undercloud so that the node can pull containers from our CDN via the http proxy. IIUC there are two points where we run the container prepare workflow (the standalone deployment that installs the undercloud, and the mistral workflow execution that installs the overcloud), and both need proper http proxy settings.

Note that the usage of /etc/environment is exactly what is described in our current documentation as the way to set up an http proxy for the undercloud:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/director_installation_and_usage/installing-the-undercloud#considerations-when-running-the-undercloud-with-a-proxy
You don't, though. You can use the http proxy for the undercloud installation, but to pre-load all the containers for the overcloud deployment you can run `openstack tripleo container image prepare` manually to bypass this, and use push_destination: true to load them onto the undercloud:

$ openstack tripleo container image prepare default --local-push-destination > container-image-prepare.yaml
$ # modify container-image-prepare.yaml as needed
$ export http_proxy=x.x.x.x:3128
$ export https_proxy=$http_proxy
$ openstack tripleo container image prepare -e container-image-prepare.yaml

This should result in the containers being fetched via the proxy and loaded onto the undercloud for use in the overcloud deployment. In 16.1.2 and onward, if you use a zstream tag (e.g. tag: 16.x.x) the deployment will skip the tag lookups during the overcloud deployment and won't need to query the registry.
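One refinement of the steps above (a sketch; the proxy address is a placeholder): export the proxy variables inside a subshell so they apply only to the prepare step and never leak into the rest of the session, which avoids accidentally running later openstack commands through the proxy.

```shell
# Run the image prepare with proxy settings scoped to a subshell
# (proxy address is a placeholder, not a real endpoint):
(
    export http_proxy=http://x.x.x.x:3128
    export https_proxy=$http_proxy
    # openstack tripleo container image prepare -e container-image-prepare.yaml
    echo "inside:  ${http_proxy}"
)

# Outside the subshell the variables are no longer set:
echo "outside: ${http_proxy:-unset}"
# prints: outside: unset
```

Because the exports die with the subshell, nothing else on the undercloud host inherits the proxy configuration.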