Bug 2228513 - [16.2] Ephemeral heat communication is not using svc fqdn and hitting proxy
Summary: [16.2] Ephemeral heat communication is not using svc fqdn and hitting proxy
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: osp-director-operator-container
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Martin Schuppert
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 2229131
Reported: 2023-08-02 14:09 UTC by Martin Schuppert
Modified: 2023-08-22 00:09 UTC
CC: 3 users

Fixed In Version: osp-director-operator-container-1.3.0-8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2229131 (view as bug list)
Environment:
Last Closed: 2023-08-22 00:09:39 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openstack-k8s-operators osp-director-operator pull 887 0 None Merged Ephemeral heat communication use svc fqdn 2023-08-10 14:51:53 UTC
Github openstack-k8s-operators osp-director-operator pull 888 0 None Merged [v1.3.x] Ephemeral heat communication use svc fqdn 2023-08-10 14:51:49 UTC
Github openstack-k8s-operators osp-director-operator pull 889 0 None Merged Use svc fqdn for ctlplane export 2023-08-10 14:51:51 UTC
Github openstack-k8s-operators osp-director-operator pull 890 0 None Merged [v1.3.x] Use svc fqdn for ctlplane export 2023-08-10 14:51:51 UTC
Red Hat Issue Tracker OSP-27147 0 None None None 2023-08-02 14:09:55 UTC
Red Hat Product Errata RHSA-2023:4694 0 None None None 2023-08-22 00:09:52 UTC

Description Martin Schuppert 2023-08-02 14:09:40 UTC
Description of problem:
When using a cluster-wide proxy, requests from heat-engine to heat-api reach the proxy because they currently don't use the svc FQDN.

CONNECT console-openshift-console.apps.ostest.test.metalkube.org:443 HTTP/1.1
Host: console-openshift-console.apps.ostest.test.metalkube.org:443
User-Agent: Go-http-client/1.1



13:10:50.857213 IP (tos 0x0, ttl 64, id 172, offset 0, flags [DF], proto TCP (6), length 225)
    10.128.0.71.32964 > 10.23.176.2.squid: Flags [P.], cksum 0x3682 (correct), seq 2250:2423, ack 57994, win 1393, options [nop,nop,TS val 340371115 ecr 2036646675], length 173
GET http://heat-default:8004/v1/admin/stacks/overcloud HTTP/1.1
Host: heat-default:8004
User-Agent: gophercloud/v1.5.0
Accept: application/json
Accept-Encoding: gzip


13:10:50.857239 IP (tos 0x0, ttl 63, id 172, offset 0, flags [DF], proto TCP (6), length 225)
    ostest-master-0.32964 > 10.23.176.2.squid: Flags [P.], cksum 0x1196 (correct), seq 2250:2423, ack 57994, win 1393, options [nop,nop,TS val 340371115 ecr 2036646675], length 173                                
GET http://heat-default:8004/v1/admin/stacks/overcloud HTTP/1.1
Host: heat-default:8004
User-Agent: gophercloud/v1.5.0
Accept: application/json
Accept-Encoding: gzip

Version-Release number of selected component (if applicable):
ospdo 16.2-1.3.0-5

How reproducible:
always

Steps to Reproduce:
1. Use a cluster-wide proxy.
2. Create an openstackconfiggenerator.
3. Use tcpdump or check the proxy log to confirm that the heat requests reach the proxy.

Actual results:
requests reach proxy

Expected results:
requests should not reach proxy

Additional info:
As a workaround, the short service names can be added to the noProxy list in the cluster-wide proxy spec:
spec:
  httpProxy: http://proxy:3128
  httpsProxy: http://proxy:3128
  noProxy: ...,heat-default,rabbitmq-default,mariadb-default,...

Note: the -default suffix is the name of the openstackconfiggenerator
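The behavior above can be illustrated with Python's stdlib NO_PROXY suffix matching, which is close (though not identical) to the matching done by the Go http client that gophercloud uses. This is a sketch under the assumption that the cluster's default noProxy list covers the `.svc` and `.cluster.local` DNS suffixes but not bare service names; the `openstack` namespace in the FQDN is a hypothetical example value:

```python
import os
import urllib.request

# Assumed default cluster noProxy list: DNS suffixes only, no short names.
os.environ["no_proxy"] = ".svc,.cluster.local"

# The short name used by heat-engine matches no noProxy suffix,
# so the request is sent to the proxy.
assert not urllib.request.proxy_bypass_environment("heat-default:8004")

# The svc FQDN matches the .cluster.local suffix and bypasses the proxy.
# ("openstack" is a hypothetical namespace here.)
assert urllib.request.proxy_bypass_environment(
    "heat-default.openstack.svc.cluster.local:8004")

# The workaround: appending the short names to noProxy also makes them bypass.
os.environ["no_proxy"] = ".svc,.cluster.local,heat-default,rabbitmq-default"
assert urllib.request.proxy_bypass_environment("heat-default:8004")
print("ok")
```

This is why both fixing the clients to use the svc FQDN (the merged PRs) and extending noProxy (the workaround) stop the requests from hitting the proxy.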

Comment 1 Takashi Kajinami 2023-08-03 08:22:51 UTC
I'm not super familiar with OSPdO, but I'm wondering if we would also want to consider disabling the proxy for some containers.
My understanding is that we needed the proxy in the mistral-executor container so that it can access the container images
on the external network, but other containers such as the heat container do not really require direct access to external URLs
and only need internal communication with the other services.

# We can probably look into the same in a normal Director deployment, but I'm not
# too sure if we want to change the overall mechanism at this stage.

Comment 2 Martin Schuppert 2023-08-03 08:54:25 UTC
(In reply to Takashi Kajinami from comment #1)
> I'm not super familiar with OSPdO, but I'm wondering if we would also want
> to consider disabling the proxy for some containers.
> My understanding is that we needed the proxy in the mistral-executor
> container so that it can access the container images on the external
> network, but other containers such as the heat container do not really
> require direct access to external URLs and only need internal communication
> with the other services.
>
> # We can probably look into the same in a normal Director deployment, but
> # I'm not too sure if we want to change the overall mechanism at this stage.

In OSPdO we don't use mistral. An ephemeral heat env is used to create the ansible playbooks, which are then run in a deploy pod using the tripleoclient container image. So we only have mariadb, rabbit and heat (api/engine) during the time the playbooks are created.

Comment 6 pkomarov 2023-08-15 08:55:16 UTC
Verified by https://bugzilla.redhat.com/show_bug.cgi?id=2228513#c5

Comment 13 errata-xmlrpc 2023-08-22 00:09:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Release of containers for OSP 16.2.z (Train) director Operator), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:4694

