The IPv6 protocol has been enabled by default as part of oVirt 4.4.3 in https://gerrit.ovirt.org/109773

Unfortunately this caused issues in mixed environments, so additional detection of the IPv6 protocol, iterating over the engine host's network interfaces and checking whether an IPv6 address is configured, was added (BZ1726189). That approach caused further issues in mixed IPv4/IPv6 setups (BZ1896779, BZ1940138 and BZ1941541), so the automatic detection was improved as part of oVirt 4.4.6 to use only interfaces which are up, in https://gerrit.ovirt.org/114519

Even that approach was not good enough and again caused different issues in mixed setups, so hopefully here is the ultimate solution:

1. A new option pool.default.socketfactory.resolver.detectIPVersion has been introduced to enable/disable automatic detection of the IP protocol version:
   - By default set to true
   - Automatic detection of IP versions has been improved to use the default gateway IP addresses (if a default gateway has an IPv4/IPv6 address, A/AAAA DNS records are used to resolve LDAP server FQDNs)
   - If automatic detection is disabled, administrators need to set the options below correctly according to their network setup

2. A new option pool.default.socketfactory.resolver.supportIPv4 has been introduced to enable/disable usage of IPv4:
   - By default set to false (automatic detection is preferred)
   - If set to true, type "A" DNS records are used to resolve LDAP server FQDNs

3. The existing option pool.default.socketfactory.resolver.supportIPv6 is used to enable/disable usage of IPv6:
   - By default set to false (in previous versions it was enabled)
   - If set to true, type "AAAA" DNS records are used to resolve LDAP server FQDNs

So by default, automatic detection of the IP version using the default gateway addresses is used, which should work for most setups. If there is some special mixed IPv4/IPv6 setup where automatic detection fails, automatic detection needs to be turned off and the IP versions configured manually in /etc/ovirt-engine/aaa/<PROFILE>.properties for the specific LDAP setup (a sketch of such an override follows below).
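A minimal sketch of such a manual override, assuming an IPv4-only LDAP setup (the property names come from the list above; the value combination is just an illustration, not the only valid one):

  # /etc/ovirt-engine/aaa/<PROFILE>.properties
  # Turn off automatic IP version detection via the default gateway
  pool.default.socketfactory.resolver.detectIPVersion = false
  # Resolve LDAP server FQDNs using type "A" DNS records only
  pool.default.socketfactory.resolver.supportIPv4 = true
  pool.default.socketfactory.resolver.supportIPv6 = false

For an IPv6-only setup, the supportIPv4/supportIPv6 values would be swapped.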
When adding a new AD forest that does not have any AAAA record for any domain in DNS (it used to have those, but I removed the records and checked that dig no longer returns the AAAA records; the check is shown at the end of this comment), login always fails. However, if I add:

  pool.default.socketfactory.resolver.detectIPVersion = false
  pool.default.socketfactory.resolver.supportIPv4 = true

to the config in tmp, the login suddenly magically succeeds. This means IPv6 use is forced and there is no fallback to IPv4 when it fails to connect to (or does not have) an IPv6 address, which is what I would expect from 'detectIPVersion'.

Since changing configs in tmp during setup of a new aaa server is not something anyone should be asked to do, that is IMO another issue here: the setup should add the values for detectIPVersion and supportIPv4 or supportIPv6 accordingly.

I tried this after adding a new server to our AD forest which auto-configured with IPv6 and did not work correctly (the AAAA records are removed now, but the behavior is the same). However, with 4.4.7 this now fails consistently whichever server the setup uses (the old one never had IPv6 configured), unless you change the configs in tmp to force IPv4.

Also, relying only on an existing IPv6 gateway is not a good idea.

Tested on ovirt-engine-extension-aaa-ldap-1.4.4-1.el8ev.noarch
I have the env ready for investigation.
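For completeness, the dig check mentioned above looks roughly like this (ad1.example.com is a placeholder for the actual DC name):

  # Should print nothing once the AAAA records are removed
  dig +short AAAA ad1.example.com
  # The A record should still resolve to the DC's IPv4 address
  dig +short A ad1.example.com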
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.
(In reply to Petr Matyáš from comment #1)
> When adding a new AD forest that does not have any AAAA record for any
> domain in DNS (it used to have those, but I removed the records and
> checked that dig no longer returns the AAAA records), login always fails.
> However, if I add:
>
>   pool.default.socketfactory.resolver.detectIPVersion = false
>   pool.default.socketfactory.resolver.supportIPv4 = true
>
> to the config in tmp, the login suddenly magically succeeds.

It's not magic; it just means there is an issue on the DNS server, which times out when you try to fetch both IPv4 and IPv6 addresses at once. I'm not able to reproduce the issue with dig or host, but it's clearly visible in Java (a sketch of such a lookup follows at the end of this comment). If you change your DNS server, the issue is not reproducible. And that issue is the main reason I needed to add a manual override of the automatic IP version detection, so that you are able to overcome it (this was not possible in previous aaa-ldap versions).

> This means IPv6 use is forced and there is no fallback to IPv4 when it
> fails to connect to (or does not have) an IPv6 address, which is what I
> would expect from 'detectIPVersion'.

This was never a fallback, and IMO it shouldn't be. Here are the valid supported use cases:

1. Pure IPv4-only network - IPv6 is not automatically detected, so only IPv4 is used
2. Pure IPv6-only network - IPv4 is not automatically detected, so only IPv6 is used
3. Mixed IPv4/IPv6 network - here we need detection, and the only reliable way to find out which IP version we should use is the default gateway addresses, which clearly define which IP protocol version the engine is able to use for communication

And mixed IPv4/IPv6 networks are officially not supported.

> Since changing configs in tmp during setup of a new aaa server is not
> something anyone should be asked to do, that is IMO another issue here:
> the setup should add the values for detectIPVersion and supportIPv4 or
> supportIPv6 accordingly.

ovirt-engine-extension-aaa-ldap-setup is a tool only for simple and basic setups; it works fine when your DNS is configured correctly.

> I tried this after adding a new server to our AD forest which
> auto-configured with IPv6 and did not work correctly (the AAAA records are
> removed now, but the behavior is the same). However, with 4.4.7 this now
> fails consistently whichever server the setup uses (the old one never had
> IPv6 configured), unless you change the configs in tmp to force IPv4.

Have you tried a different DNS server?

> Also, relying only on an existing IPv6 gateway is not a good idea.

Do you have any better idea how to reliably detect which IP version to use for DNS resolution?

> Tested on ovirt-engine-extension-aaa-ldap-1.4.4-1.el8ev.noarch
> I have the env ready for investigation.
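A minimal Java sketch of the kind of lookup where the DNS server timeout shows up, assuming a hypothetical hostname ad1.example.com (InetAddress.getAllByName() asks the resolver for both IPv4 and IPv6 addresses of the name, so a DNS server that stalls on one of the record types delays the whole call):

  import java.net.Inet6Address;
  import java.net.InetAddress;

  public class ResolveCheck {
      public static void main(String[] args) throws Exception {
          // Hypothetical LDAP server FQDN; replace with a real one
          String fqdn = args.length > 0 ? args[0] : "ad1.example.com";
          long start = System.nanoTime();
          // Returns every address the resolver finds for the name; a DNS
          // server that times out on one record type makes this call slow
          InetAddress[] addresses = InetAddress.getAllByName(fqdn);
          long elapsedMs = (System.nanoTime() - start) / 1_000_000;
          System.out.println("Resolved " + fqdn + " in " + elapsedMs + " ms:");
          for (InetAddress a : addresses) {
              String family = (a instanceof Inet6Address) ? "AAAA" : "A";
              System.out.println("  " + family + "  " + a.getHostAddress());
          }
      }
  }

Running it against the problematic DNS server should show the resolution taking much longer than the same dig queries, which is consistent with what the Java-based LDAP extension sees.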
Seems it works according to the description. If you'd like it to work differently, with other improvements like a fallback, etc., please open a follow-up bug for that; this one just tracks what's described in comment #0.
Verified in:
ovirt-engine-extension-aaa-ldap-setup-1.4.5-1.el8ev.noarch
ovirt-engine-extension-aaa-ldap-1.4.5-1.el8ev.noarch

Tested combinations of:
pool.default.socketfactory.resolver.supportIPv6
pool.default.socketfactory.resolver.supportIPv4
pool.default.socketfactory.resolver.detectIPVersion

Looks like detectIPVersion takes precedence over disabling IPv4 and IPv6 support (the parameter will enable support for both IP versions). Setting all three to false causes login issues (as expected?); the failing combination is spelled out below.
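For reference, a sketch of that all-false combination in /etc/ovirt-engine/aaa/<PROFILE>.properties (property names are from comment #0; the comment on why it fails is my reading of the behavior):

  # With detection disabled and both IP versions disabled, no DNS record
  # type is left to resolve the LDAP server FQDNs, so logins fail
  pool.default.socketfactory.resolver.detectIPVersion = false
  pool.default.socketfactory.resolver.supportIPv4 = false
  pool.default.socketfactory.resolver.supportIPv6 = false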