Bug 2222358

Summary: [DDF] The non-standard proxy port should be added to http_port_t instead of http_cache_port_t
Product: Red Hat Satellite
Reporter: Direct Docs Feedback <ddf-bot>
Component: Pulp
Assignee: satellite6-bugs <satellite6-bugs>
Status: NEW
QA Contact: Satellite QE Team <sat-qe-bz-list>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.13.0
CC: agadhave, ehelms, mjivraja, saydas
Target Milestone: Unspecified
Keywords: Documentation, Triaged
Target Release: Unused
Flags: mdolezel: needinfo? (mjivraja)
       mdolezel: needinfo? (agadhave)
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value

Description Direct Docs Feedback 2023-07-12 15:32:37 UTC
The custom proxy port should be added to http_port_t instead of http_cache_port_t

For now, this is just an RFE and should only be implemented after careful discussion with the Satellite/Foreman/Katello developers.

I will add more details in my next comment.

Reported by: rhn-support-saydas

https://access.redhat.com/documentation/en-us/red_hat_satellite/6.13/html/installing_satellite_server_in_a_connected_network_environment/performing-additional-configuration#annotations:e634aff2-096f-4850-ab02-95cc978ae234
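
For illustration, the proposed doc change would roughly amount to swapping the SELinux port type in the documented step. The exact wording of the current step is assumed here, and port 3130 is only an example taken from the case in comment 1:

Documented today (assumed; adds the non-standard proxy port to http_cache_port_t):
# semanage port -a -t http_cache_port_t -p tcp 3130

Proposed instead (adds it to http_port_t):
# semanage port -a -t http_port_t -p tcp 3130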

Comment 1 Sayan Das 2023-07-12 15:48:22 UTC
It all started with an investigation into an SELinux denial: the end user has a proxy server, e.g. proxy.example.com on port 3130, and as per our guide, that TCP port had already been added to http_cache_port_t.

But whenever the user tries to sync any repos through that proxy, Pulp hits this SELinux denial:

####

If you believe that python3.9 should be allowed name_connect access on the port 3130 tcp_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'gunicorn' --raw | audit2allow -M my-gunicorn
# semodule -X 300 -i my-gunicorn.pp


Additional Information:
Source Context                system_u:system_r:pulpcore_server_t:s0
Target Context                system_u:object_r:http_cache_port_t:s0
Target Objects                port 3130 [ tcp_socket ]
Source                        gunicorn
Source Path                   /usr/bin/python3.9
Port                          3130
Host                          satellite.example.com
Source RPM Packages
Target RPM Packages
SELinux Policy RPM            selinux-policy-targeted-3.14.3-117.el8_8.2.noarch
Local Policy RPM
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     satellite.example.com
Platform                      Linux satellite.example.com
                              4.18.0-477.15.1.el8_8.x86_64 #1 SMP Fri Jun 2
                              08:27:19 EDT 2023 x86_64 x86_64
Alert Count                   874
First Seen                    2023-02-16 12:52:34 CET
Last Seen                     2023-07-12 07:02:21 CEST
Local ID                      cb334a42-e9a7-4ebc-a8f6-8043ec9465c2

Raw Audit Messages
type=AVC msg=audit(1689138141.452:1275): avc:  denied  { name_connect } for  pid=2694 comm="gunicorn" dest=3130 scontext=system_u:system_r:pulpcore_server_t:s0 tcontext=system_u:object_r:http_cache_port_t:s0 tclass=tcp_socket permissive=1

###


But so far I could not reproduce the issue on 6.11/6.12/6.13, despite everything being the same between my setup and the customer's.

Next, I decided to check which processes would require name_connect access.

We have Puma (running as foreman_rails_t), which tries to connect to the proxy when we click "Test Connection" while creating an HTTP proxy entry in the UI.

We have Pulp, which tries to connect to the proxy when we sync repos through it (the process-to-context mapping can be verified as shown in the sketch after this list):

  * gunicorn processes -> pulpcore_server_t
  * pulpcore-worker processes -> pulpcore_t
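
A minimal way to confirm which SELinux domain each of these processes actually runs in (a sketch; the process names are assumed to match the defaults on a Satellite server):

# ps -eZ | grep -E 'gunicorn|pulpcore-worker|puma'

The first column of the output is the process context; per the mapping above, gunicorn should show pulpcore_server_t, pulpcore-worker should show pulpcore_t, and puma should show foreman_rails_t.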

Now, if I check these three contexts individually, I see this:



# sesearch -A -s foreman_rails_t -p name_connect | grep http
allow foreman_rails_t http_cache_port_t:tcp_socket name_connect; [ foreman_rails_can_connect_http_proxy ]:True
allow foreman_rails_t http_port_t:tcp_socket { name_bind name_connect };
allow foreman_rails_t squid_port_t:tcp_socket name_connect; [ foreman_rails_can_connect_http_proxy ]:True


# sesearch -A -s pulpcore_server_t -p name_connect | grep http
allow pulpcore_server_t http_port_t:tcp_socket name_connect;


# sesearch -A -s pulpcore_t -p name_connect | grep http
allow pulpcore_t http_cache_port_t:tcp_socket name_connect;
allow pulpcore_t http_port_t:tcp_socket name_connect;


This basically means (a quick cross-check sketch follows the list):

* pulpcore_server_t is the only one of the three contexts with no name_connect access on http_cache_port_t, yet gunicorn needs exactly that.

* All three contexts have name_connect access on http_port_t.
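
As a cross-check, the recorded AVC itself can be fed to audit2why, which reports whether any existing rule or boolean would allow the denied access (a sketch; it assumes the gunicorn AVC from the raw messages above is still present in the audit log):

# ausearch -m AVC -c gunicorn --raw | audit2why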


So I decided to add port 3130 to http_port_t instead of http_cache_port_t, and that works fine:

# semanage port -a -t http_port_t -p tcp 3130


With that in place, I can sync repos, refresh the manifest, and run test connections for individual proxies without any denials whatsoever.
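
For completeness, the new port assignment can be confirmed, and rolled back again if ever needed, with standard semanage options (a sketch):

# semanage port -l | grep -w 3130
# semanage port -d -t http_port_t -p tcp 3130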


Something similar was done in https://access.redhat.com/solutions/7014900, where the user had port 9090 configured for their external proxy server and we fixed it by adding the port to http_port_t only.

So, unless there is a specific reason why we should be adding the port to http_cache_port_t instead of http_port_t, my proposal is that we modify the doc statement.

Of course, my understanding of SELinux is not very deep, so I could be wrong in places. I humbly request you to share your opinions on my proposal and correct me if I am wrong somewhere.

Comment 2 Sayan Das 2023-07-12 15:49:44 UTC
I have done this testing on both 6.12 and 6.13, and adding the non-standard proxy port to http_port_t works just fine, at least for Pulp and manifest operations.

Comment 3 Marie Hornickova 2023-07-12 17:01:32 UTC
Hello,
Many thanks for reporting the issue.
The BZ will go through the proper team triage, and progress on the documentation fix will be tracked in this ticket.
Thank you!