Description of problem:

When using a cluster-wide proxy, the requests from heat-engine to heat-api reach the proxy because they currently don't use the service FQDN.

tcpdump excerpt (binary payload bytes stripped):

CONNECT console-openshift-console.apps.ostest.test.metalkube.org:443 HTTP/1.1
Host: console-openshift-console.apps.ostest.test.metalkube.org:443
User-Agent: Go-http-client/1.1

13:10:50.857213 IP (tos 0x0, ttl 64, id 172, offset 0, flags [DF], proto TCP (6), length 225)
    10.128.0.71.32964 > 10.23.176.2.squid: Flags [P.], cksum 0x3682 (correct), seq 2250:2423, ack 57994, win 1393, options [nop,nop,TS val 340371115 ecr 2036646675], length 173
GET http://heat-default:8004/v1/admin/stacks/overcloud HTTP/1.1
Host: heat-default:8004
User-Agent: gophercloud/v1.5.0
Accept: application/json
Accept-Encoding: gzip

13:10:50.857239 IP (tos 0x0, ttl 63, id 172, offset 0, flags [DF], proto TCP (6), length 225)
    ostest-master-0.32964 > 10.23.176.2.squid: Flags [P.], cksum 0x1196 (correct), seq 2250:2423, ack 57994, win 1393, options [nop,nop,TS val 340371115 ecr 2036646675], length 173
GET http://heat-default:8004/v1/admin/stacks/overcloud HTTP/1.1
Host: heat-default:8004
User-Agent: gophercloud/v1.5.0
Accept: application/json
Accept-Encoding: gzip

Version-Release number of selected component (if applicable):
ospdo 16.2-1.3.0-5

How reproducible:
always

Steps to Reproduce:
1. Use a cluster-wide proxy.
2. Create an openstackconfiggenerator.
3. Either use tcpdump or check the proxy log to see that the requests reach the proxy.

Actual results:
Requests reach the proxy.

Expected results:
Requests should not reach the proxy.

Additional info:
As a workaround, the short names can be added to the spec:

httpProxy: http://proxy:3128
httpsProxy: http://proxy:3128
noProxy: ...,heat-default,rabbitmq-default,mariadb-default,...

Note: "-default" reflects the name of the openstackconfiggenerator.
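For context, the workaround above corresponds to editing the cluster-wide Proxy resource (e.g. via "oc edit proxy/cluster"). A minimal sketch of the resulting spec, assuming the openstackconfiggenerator is named "default" and a placeholder proxy URL taken from this report; the elided existing noProxy entries must be preserved:

```
# Sketch only: cluster-wide Proxy CR with the OSPdO service short names
# added to noProxy so heat-engine -> heat-api traffic bypasses the proxy.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy:3128
  httpsProxy: http://proxy:3128
  # "-default" is the openstackconfiggenerator name; adjust to your CR's name.
  noProxy: ...,heat-default,rabbitmq-default,mariadb-default,...
```

The proper fix tracked by this bug is for heat-engine to use the service FQDN (which matching against the cluster's default noProxy domains would exclude), making this manual noProxy edit unnecessary.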
I'm not super familiar with OSPdO, but I'm wondering if we would also want to consider disabling the proxy for some containers. My understanding is that we needed the proxy in the mistral-executor container so that it can access the container images on the external network, but the other containers, such as the heat container, do not really require direct access to external URLs and require only internal communication with the other services.

# We can probably look into the same in a normal Director deployment, but I'm not too sure
# if we want to change the overall mechanism at this stage.
(In reply to Takashi Kajinami from comment #1)
> I'm not super familiar with OSPdO but I'm wondering if we would also want to
> consider disabling proxy for some containers.
> My understanding is that we needed proxy in mistral-executor container so
> that the container can access the container images
> in the external network, but the other containers such as heat container
> does not really require direct access to external URL
> and require only internal communication with the other services.
>
> # We can probably look into the same in normal Director deployment but I'm
> not too sure if we want to change the overall
> # mechanism at this stage.

In OSPdO we don't use mistral. An ephemeral heat environment is used to create the ansible playbooks, which are then run in a deploy pod using the tripleoclient container image. So we only have mariadb, rabbitmq, and heat (api/engine) during the time the playbooks are being created.
Verified by https://bugzilla.redhat.com/show_bug.cgi?id=2228513#c5
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Release of containers for OSP 16.2.z (Train) director Operator), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:4694