Description of problem:
The directory server error log is getting flooded with the following warning:

WARN - ns_handle_pr_read_ready - Received idletime out with c->c_idletimeout as 0. Ignoring.

Version-Release number of selected component (if applicable):
389-ds-base-1.3.6.2-2.fc26.x86_64

How reproducible:
Always

Steps to Reproduce:
1. dnf update -y
2. dnf config-manager --enable=updates-testing
3. dnf install -y freeipa-server freeipa-server-dns
4. ipa-server-install -a $password -p $password --domain $domain --realm $realm -U --setup-dns --auto-forwarders

Actual results:
The IPA server with DNS is installed, and the warning above is logged repeatedly in the directory server error log.

Expected results:
No such warning messages in the error log.
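A quick way to confirm the flood on an installed server (a sketch; the dirsrv instance name DOM-EXAMPLE-COM is a placeholder and will differ per deployment):

grep -c 'Received idletime out' /var/log/dirsrv/slapd-DOM-EXAMPLE-COM/errors

A count that keeps growing while the server is otherwise idle matches the behaviour described above.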
Okay, I need to work out why this is looping. I think in an ideal world we would have a default timeout set so this wouldn't be an issue. I need to see why libevent is throwing a timeout event back on this when it has a 0 timeout. I'll see if I can reproduce.
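For what it's worth, the value that ends up in c->c_idletimeout comes from the server configuration, and 0 normally means "no idle limit". A minimal sketch for inspecting and experimentally setting a non-zero default, assuming it maps to the nsslapd-idletimeout attribute in cn=config and a Directory Manager bind on the local host (adjust the bind options for your environment):

# check the current server-wide idle timeout (0 = no limit)
ldapsearch -H ldap://localhost -D "cn=Directory Manager" -W -b cn=config -s base nsslapd-idletimeout

# experimentally set a non-zero default (e.g. 3600 seconds)
ldapmodify -H ldap://localhost -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-idletimeout
nsslapd-idletimeout: 3600
EOF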
Upstream ticket: https://pagure.io/389-ds-base/issue/49174
@tbordaz: see https://pagure.io/389-ds-base/issue/49174#comment-431838
This also affects FreeIPA setup with DNS. Although SyncRepl seems to be working after the server is installed, the following errors appear in the named-pkcs11 log generated by bind-dyndb-ldap:

Mar 16 14:34:35 vm.example.com named-pkcs11[18630]: LDAP error: Can't contact LDAP server: connection error
Mar 16 14:34:35 vm.example.com named-pkcs11[18630]: retrying LDAP operation (modifying(replace)) on entry 'idnsname=dom.example.com.,cn=dns,dc=dom,dc=example,dc=com'
Mar 16 14:34:35 vm.example.com named-pkcs11[18630]: LDAP error: Can't contact LDAP server: while modifying(replace) entry 'idnsname=dom.example.com.,cn=dns,dc=dom,dc=example,dc=com'
Mar 16 14:34:35 vm.example.com named-pkcs11[18630]: unsupported operation: object class in resource record template DN 'idnsname=_ntp._udp,idnsname=idnsname=dom.example.com.,cn=dns,dc=dom,dc=example,dc=com' changed: rndc reload might be necessary

If nunc-stans is turned off in cn=config (nsslapd-enable-nunc-stans: off) and dirsrv and named-pkcs11 are restarted, the errors disappear and DNS seems to work properly.
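For anyone who wants the workaround spelled out, a sketch of the steps above (the dirsrv instance name DOM-EXAMPLE-COM is a placeholder; adjust the bind options for your deployment):

ldapmodify -H ldap://localhost -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-enable-nunc-stans
nsslapd-enable-nunc-stans: off
EOF

systemctl restart dirsrv@DOM-EXAMPLE-COM.service
systemctl restart named-pkcs11.service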
I'm going to disable nunc-stans by default and do another build. Once we get these issues sorted out we will re-enable it for the next build.
Since this is not a blocker for Fedora 26, I'm not going to do another build with nunc-stans disabled.
Do we know what the practical consequences of this bug are? Our (Fedora's) automated FreeIPA tests appear to be passing now, but it would be nice to know what this might be causing...
(In reply to Adam Williamson from comment #8)
> Do we know what the practical consequences of this bug are? Our (Fedora's)
> automated FreeIPA tests appear to be passing now, but it would be nice to
> know what this might be causing...

Hi Adam,

You are right, some tests can still pass. In the SyncRepl (DNS) tests, SyncRepl was still able to send updates even though the server hit this bug. The two symptoms are:
- constant CPU consumption (quite low, <10%)
- DS error logs flooded with the warning: ns_handle_pr_read_ready - Received idletime out with c->c_idletimeout as 0. Ignoring.
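To check the CPU symptom on a running server, something like the following should do (a sketch; ns-slapd is the directory server daemon process):

top -b -n 1 -p "$(pidof ns-slapd)"

Repeating the snapshot while the server is otherwise idle should show a small but steady %CPU for ns-slapd; the log flood can be counted with the grep shown in the description.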
Confirming I see this in openQA testing also.
I have found the root cause of this issue, and provided a patch upstream. Mark will rebuild this for Fedora shortly for testing. https://pagure.io/389-ds-base/issue/49174
I've tested 389-ds-base-1.3.6.3-4.fc26 and I can confirm the issue has been fixed. Thanks!
389-ds-base-1.3.6.3-4.fc26 has been submitted as an update to Fedora 26. https://bodhi.fedoraproject.org/updates/FEDORA-2017-846f41cfeb
389-ds-base-1.3.6.3-4.fc26 has been pushed to the Fedora 26 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-846f41cfeb
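For convenience, the test update can usually be pulled in with something along these lines (a sketch; double-check the advisory ID against the Bodhi page above):

sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2017-846f41cfeb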
389-ds-base-1.3.6.3-4.fc26 has been pushed to the Fedora 26 stable repository. If problems still persist, please make note of it in this bug report.