Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1927195

Summary: sssd runs out of proxy child slots and doesn't clear the counter for Active requests
Product: Red Hat Enterprise Linux 9
Reporter: Divya Mittal <dmittal>
Component: sssd
Assignee: Sumit Bose <sbose>
Status: CLOSED ERRATA
QA Contact: Anuj Borah <aborah>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 9.0
CC: aborah, aboscatt, atikhono, bthekkep, grajaiya, jhrozek, lslebodn, mzidek, pbrezina, sbose, sgadekar, thalman, tscherf
Target Milestone: beta
Keywords: Triaged
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Linux
Whiteboard: sync-to-jira
Fixed In Version: sssd-2.7.1-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-11-15 11:17:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  Test build with additional decrementing of the counter (flags: none)
  New test build (flags: none)

Description Divya Mittal 2021-02-10 10:27:48 UTC
Description of problem:

When the proxy authentication provider is used, user authentication suddenly stops working and resumes only after the sssd service is restarted.



Version-Release number of selected component (if applicable):

sssd-1.16.5-10.el7_9.6.x86_64


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:



Additional info:

1. The following error can be seen in /var/log/sssd/sssd_domain.com:

---
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [sbus_get_sender_id_send] (0x2000): Not a sysbus message, quit
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [dp_pam_handler] (0x0100): Got request with the following data
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): command: SSS_PAM_AUTHENTICATE
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): domain: domain.com
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): user: jasw8470
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): service: login
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): tty:
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): ruser:
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): rhost:
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): authtok type: 1
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): newauthtok type: 0
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): priv: 1
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): cli_pid: 4159
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [pam_print_data] (0x0100): logon name: not set
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [dp_attach_req] (0x0400): DP Request [PAM Authenticate #1255]: New request. Flags [0000].
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [dp_attach_req] (0x0400): Number of active DP request: 61
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [sss_domain_get_state] (0x1000): Domain domain.com is Active
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [proxy_child_send] (0x2000): Queueing request [266]
(Fri Oct  9 11:41:49 2020) [sssd[be[domain.com]]] [proxy_child_send] (0x2000): All available child slots are full, queuing request <-----------
----

There seem to be 63 child requests in the proxy queue, as all of them were killed when sssd was restarted:

---
$ grep -i "Removing proxy child id" sssd_domain.com.log | wc -l
63

---
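To gauge how far the slot leak has progressed on an affected system, the same log message strings can simply be counted. A sketch (it builds a tiny sample log, with lines abridged from the report above, so it runs standalone; in practice point LOG at the real sssd domain log file):

```shell
# Count slot-exhaustion events in an sssd domain log.
# The sample log below is illustrative; replace LOG with the path to the
# actual /var/log/sssd/sssd_<domain>.log file on an affected host.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[proxy_child_send] (0x2000): Queueing request [266]
[proxy_child_send] (0x2000): All available child slots are full, queuing request
Removing proxy child id
EOF

queued=$(grep -c "All available child slots are full" "$LOG")
removed=$(grep -c "Removing proxy child id" "$LOG")
echo "requests queued because all slots were full: $queued"
echo "proxy children removed: $removed"
rm -f "$LOG"
```

A steadily growing "slots are full" count with no matching decrease points at the leaked-counter behavior described in this bug.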

Comment 2 Sumit Bose 2021-02-10 10:39:59 UTC
Hi,

I think there is an issue where we do not decrement the counter of active requests.

bye,
Sumit

Comment 3 Sumit Bose 2021-02-10 10:45:17 UTC
Created attachment 1756172 [details]
Test build with additional decrementing of the counter

Hi,

please find attached a test build which should hopefully fix the issue.

To install it, un-tar the archive, change into the new directory and call

    yum update *

bye,
Sumit

Comment 8 Sumit Bose 2021-03-02 12:18:14 UTC
Created attachment 1760197 [details]
New test build

Hi,

please find attached a new test build. Reproducing the issue on RHEL 7 was not straightforward; it looks to me like the issue should only happen on highly loaded systems.

Please let me know if this test build works better and attach new logs if the issue persists.

bye,
Sumit

Comment 9 Divya Mittal 2021-03-02 12:31:16 UTC
Hello Sumit,

Thank you. I have relayed this to the customer.

I will update you once I hear back from him.

Regards,
Divya

Comment 16 Divya Mittal 2021-04-27 08:53:45 UTC
Hello Sumit,

I have attached the new logs. Please let me know if it's enough or if we need more.

Regards,
Divya

Comment 25 Alexey Tikhonov 2022-04-21 11:55:15 UTC
Upstream PR: https://github.com/SSSD/sssd/pull/6116

Comment 26 Alexey Tikhonov 2022-05-12 12:20:25 UTC
Pushed PR: https://github.com/SSSD/sssd/pull/6116

* `master`
    * 4950bc00b6bb92a13e62da808b99ec9730aff53d - proxy: remove DP client timeout handler
    * 4af071af64593c83f3a95180b609c32c470070f6 - data_provider: add dp_client_cancel_timeout()
    * 67270a0881cfad4870d1c3929ee4eb7b640291f4 - proxy: finish request if proxy_child is terminated
    * 97eabb7ed7b67713fb6f2f27b9c5f26e99d27da8 - proxy: lower child count even if there is an error
* `sssd-2-7`
    * 7ad0a6d51a25dadf3a0406b4f77663b10a683725 - proxy: remove DP client timeout handler
    * 3cb0dda539b4098cbe7cb331b0c1cb5633945853 - data_provider: add dp_client_cancel_timeout()
    * 2e4786e70b1bb95fb36306487216c9f8adcfa415 - proxy: finish request if proxy_child is terminated
    * 90617845ced9394febb1daa8f84643950c33da67 - proxy: lower child count even if there is an error
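The last commit message above ("lower child count even if there is an error") points at the core accounting bug: the count of busy proxy children was only decremented on the success path, so every failed or terminated request leaked a slot until the pool was exhausted and all new requests were queued forever. A minimal, hypothetical sketch of the pattern; the names, struct, and slot limit are illustrative, not SSSD's actual symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative slot limit; SSSD's real limit and bookkeeping differ. */
#define MAX_CHILD_SLOTS 10

struct proxy_ctx {
    int active;    /* number of proxy child slots currently in use */
};

/* Take a slot; returns false when the pool is exhausted, i.e. the
 * "All available child slots are full, queuing request" case. */
static bool proxy_child_take_slot(struct proxy_ctx *ctx)
{
    if (ctx->active >= MAX_CHILD_SLOTS) {
        return false;
    }
    ctx->active++;
    return true;
}

/* Buggy completion handler: the slot is released only on success,
 * so each failed request permanently leaks one slot. */
static void request_done_buggy(struct proxy_ctx *ctx, int error)
{
    if (error == 0) {
        ctx->active--;
    }
    /* error path: counter never decremented -> slot leaked */
}

/* Fixed handler, in the spirit of the commit "proxy: lower child
 * count even if there is an error": release the slot on every outcome. */
static void request_done_fixed(struct proxy_ctx *ctx, int error)
{
    (void)error;
    ctx->active--;
}
```

Once `active` reaches the limit, every new request is queued and never served, which matches the log excerpt in the description; restarting sssd resets the counter, which is why a restart temporarily "fixes" authentication.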

Comment 27 Sumit Bose 2022-06-10 05:32:37 UTC
Hi,

please see https://github.com/SSSD/sssd/pull/6116#issuecomment-1105091357 for steps to verify the issue.

HTH

bye,
Sumit

Comment 33 errata-xmlrpc 2022-11-15 11:17:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (sssd bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8325