Bug 1877443 - Post Satellite 6.8 Upgrade AD authentication via LDAP fails when using an A record which returns 42 entries
Summary: Post Satellite 6.8 Upgrade AD authentication via LDAP fails when using an A record which returns 42 entries
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: LDAP
Version: 6.8.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: 6.8.0
Assignee: Lukas Zapletal
QA Contact: Omkar Khatavkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-09 16:03 UTC by Jason Dickerson
Modified: 2020-10-27 13:09 UTC
CC List: 5 users

Fixed In Version: foreman-selinux-2.1.2.3-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 13:08:57 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 30845 0 Normal Closed Allow TCP DNS query 2021-01-29 14:19:49 UTC
Red Hat Product Errata RHSA-2020:4366 0 None None None 2020-10-27 13:09:18 UTC

Description Jason Dickerson 2020-09-09 16:03:51 UTC
Description of problem:
After upgrading from Satellite 6.7.2 to 6.8, AD users are no longer able to authenticate via LDAP.


Version-Release number of selected component (if applicable):


How reproducible:

Very.


Steps to Reproduce:
1. Set up AD authentication via LDAP using a DNS A record that returns 42 results
2. Attempt to authenticate

Actual results:
ERF50-1006 [Foreman::WrappedException]: Unable to connect to LDAP server ([Net::LDAP::Error]: getaddrinfo: Temporary failure in name resolution)

Expected results:
You should be logged in and land on the Satellite dashboard.

Additional info:

I took a tcpdump of the DNS query initiated by the auth request and compared it to a dig command for the same A record. The authentication DNS request returned 29 A records, while dig returned 42. We attempted authentication using an A record returning a single result, and it worked perfectly.
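
A rough sketch of that comparison, for anyone reproducing it (dc.example.com is a placeholder; the actual A record is not in this report):

Run this as root during an authentication attempt to watch the resolver traffic:
# tcpdump -nn -i any port 53

Count the A records a full dig query returns:
$ dig +short dc.example.com A | wc -l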

Comment 10 Jason Dickerson 2020-09-14 19:17:15 UTC
I have confirmed an SELinux error even after a full relabel of the filesystem:

type=AVC msg=audit(1600099983.157:162127): avc:  denied  { name_connect } for  pid=60845 comm="diagnostic_con*" dest=53 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=system_u:object_r:dns_port_t:s0 tclass=tcp_socket permissive=0
type=AVC msg=audit(1600099983.157:162128): avc:  denied  { name_connect } for  pid=60845 comm="diagnostic_con*" dest=53 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=system_u:object_r:dns_port_t:s0 tclass=tcp_socket permissive=0

I set SELinux to permissive and it worked. I received the following in the audit.log:

type=AVC msg=audit(1600110859.989:3095): avc:  denied  { name_connect } for  pid=4392 comm="diagnostic_con*" dest=53 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=system_u:object_r:dns_port_t:s0 tclass=tcp_socket permissive=1

It seems there needs to be an SELinux policy update. What is strange is that if we set the config to point to a name with a single IP address, it works.
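
For triage, a sketch using the stock audit tools (nothing Satellite-specific; the module name "localdns" is arbitrary):

$ ausearch -m avc -ts recent                             # list the recent AVC denials
$ ausearch -m avc -ts recent | audit2allow -w            # explain why each access was denied
$ ausearch -m avc -ts recent | audit2allow -M localdns   # generate a stopgap local module
# semodule -i localdns.pp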

Comment 11 Tomer Brisker 2020-09-14 19:39:17 UTC
lzap, can you please take a look? It looks like something in the SELinux changes since 6.7 caused DNS lookups to fail when there are multiple IPs resolved for the LDAP server hostname.

Comment 12 Jason Dickerson 2020-09-14 20:04:03 UTC
From a working Satellite 6.7.2:

$ getsebool -a | grep foreman
httpd_run_foreman --> on
passenger_run_foreman --> on


From the 6.8 beta:  

$  getsebool -a | grep foreman
foreman_rails_can_connect_all --> off
foreman_rails_can_connect_container_unix --> on
foreman_rails_can_connect_http --> on
foreman_rails_can_connect_http_proxy --> on
foreman_rails_can_connect_ldap --> on
foreman_rails_can_connect_libvirt --> on
foreman_rails_can_connect_openstack --> on
foreman_rails_can_connect_smtp --> on
foreman_rails_can_listen_any --> off
foreman_rails_can_spawn_ssh --> on
passenger_run_foreman --> off

SELinux info about the error:  

type=AVC msg=audit(1600111635.572:3201): avc:  denied  { name_connect } for  pid=4399 comm="diagnostic_con*" dest=53 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=system_u:object_r:dns_port_t:s0 tclass=tcp_socket permissive=1
        Was caused by:
        The boolean foreman_rails_can_connect_all was set incorrectly. 
        Description:
        Allow foreman to rails can connect all
        Allow access by executing:
        # setsebool -P foreman_rails_can_connect_all 1
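
To test that suggestion without making it permanent, the boolean can be flipped at runtime (dropping -P means the change will not survive a reboot):

# setsebool foreman_rails_can_connect_all 1
$ getsebool foreman_rails_can_connect_all
# setsebool foreman_rails_can_connect_all 0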

Comment 13 Jason Dickerson 2020-09-14 20:26:12 UTC
Compared the above to another Satellite 6.8; the only difference is that passenger_run_foreman is on there. I tried turning it on here and tested again. Same result.

Comment 14 Jason Dickerson 2020-09-14 20:30:59 UTC
I attempted the test with foreman_rails_can_connect_all on, and it worked.  I feel this may "enable too much", as it works for a single domain controller without that boolean enabled.

Comment 15 Lukas Zapletal 2020-09-15 07:47:27 UTC
Hello, Foreman SELinux policy maintainer here.

This was caused by our switch from the Apache httpd application server to Puma. Apache was likely allowed to perform this operation, though I can't find the rule in the base policy. Interesting fact: Satellite in their environment makes a TCP DNS request, not a UDP request. This is typical behavior when the DNS response is larger than the UDP limit: the server sends a truncated response that effectively says "ask me again using TCP", and the client does exactly that.
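
The truncation behavior is easy to demonstrate with dig (the record name is a placeholder):

$ dig +noedns +ignore dc.example.com A   # plain 512-byte UDP; the flags line shows "tc" and the answer is incomplete
$ dig +tcp dc.example.com A              # the same query over TCP returns the full answer set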

The rule we are missing is

corenet_tcp_connect_dns_port(foreman_rails_t)

It is probably better to allow the more generic

sysnet_dns_name_resolve(foreman_rails_t)

which also takes DNSSEC into account. I will prepare the patch; in the meantime, if you can, try to compile a custom policy module with the mentioned rule (sysnet_...) to confirm the fix. Thank you!
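
A minimal sketch of such a custom module, assuming selinux-policy-devel is installed (the module name foreman_dns_fix is arbitrary, not the shipped policy):

$ cat > foreman_dns_fix.te <<'EOF'
policy_module(foreman_dns_fix, 1.0)

gen_require(`
    type foreman_rails_t;
')

# let the Foreman Rails domain resolve DNS, including TCP queries and DNSSEC
sysnet_dns_name_resolve(foreman_rails_t)
EOF
$ make -f /usr/share/selinux/devel/Makefile foreman_dns_fix.pp
# semodule -i foreman_dns_fix.pp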

Comment 16 Lukas Zapletal 2020-09-15 07:59:09 UTC
> This was caused by our switch from the Apache httpd application server to Puma. Apache was likely allowed to perform this operation

Alternatively, this customer's environment simply has too many DNS records, and their past environment did fit into the UDP packet limit. This could explain why I could not find this rule in the Apache httpd policy.

Comment 17 Bryan Kearney 2020-09-15 08:03:22 UTC
Upstream bug assigned to lzap

Comment 19 Jason Dickerson 2020-09-15 16:16:02 UTC
In reference to comment 16, the number of DNS records has not changed. The only change is the Satellite code/SELinux policy from 6.7.2 to 6.8.0 beta.

I performed the following:  

1) manually modified the file /usr/share/doc/foreman-selinux-2.1.0/foreman.te per the PR
2) Changed line 19 as follows:

--policy_module(foreman, @@VERSION@@)
++policy_module(foreman, 2.1.0.1)

3) yum install selinux-policy-devel --disableplugin foreman-protector
4) make -f /usr/share/selinux/devel/Makefile
5) /usr/sbin/semodule -i foreman.pp
6) foreman-maintain service restart (just for good measure)

The connection test worked, so I saved the config using the A record that returns 42 entries, and logged out and back in. Test successful.

Thanks for looking into this, and the quick work on a solution!

Comment 20 Bryan Kearney 2020-09-15 20:03:21 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/30845 has been resolved.

Comment 25 errata-xmlrpc 2020-10-27 13:08:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Satellite 6.8 release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:4366

