Bug 2162552

Summary: sssd client caches old data after removing netgroup member on IDM
Product: Red Hat Enterprise Linux 9
Component: sssd
Version: 9.2
Reporter: warren
Assignee: Pavel Březina <pbrezina>
QA Contact: Madhuri <mupadhye>
CC: aboscatt, atikhono, pbrezina, sgadekar
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Target Release: ---
Keywords: Triaged
Flags: pm-rhel: mirror+
Hardware: x86_64
OS: Linux
Whiteboard: sync-to-jira
Fixed In Version: sssd-2.9.1-1.el9
Last Closed: 2023-11-07 08:54:17 UTC
Type: Bug

Description warren 2023-01-19 23:04:23 UTC
Description of problem:

I have two netgroups, test1 and test2, and I made test1 a member of test2. The client then caches this data. After test1 is removed from test2 on the IDM server, getent netgroup on the client still displays the old data, even after the cache timeout expires or after running sss_cache -E.

Version-Release number of selected component (if applicable):

RHEL 7.9, fully patched as of 19 Jan 2023

sssd-common-pac-1.16.5-10.el7_9.14.x86_64
sssd-proxy-1.16.5-10.el7_9.14.x86_64
sssd-client-1.16.5-10.el7_9.14.x86_64
python-sssdconfig-1.16.5-10.el7_9.14.noarch
sssd-krb5-common-1.16.5-10.el7_9.14.x86_64
sssd-ipa-1.16.5-10.el7_9.14.x86_64
sssd-krb5-1.16.5-10.el7_9.14.x86_64
sssd-1.16.5-10.el7_9.14.x86_64
sssd-common-1.16.5-10.el7_9.14.x86_64
sssd-ldap-1.16.5-10.el7_9.14.x86_64
sssd-ad-1.16.5-10.el7_9.14.x86_64


How reproducible:

Steps to Reproduce:
1. Vanilla RHEL 7.9 install. Apply patches.
2. Join IDM (4.9.10).
3. Create two netgroups: test1 and test2.
4. Add test1 as a member of test2.
5. Create users and assign different users to test1 and test2.
6. On the client, run: getent netgroup test2
7. On the IDM server, remove test1 from test2.
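The reproduction steps above can be sketched as shell commands. This is a sketch only: it assumes an enrolled client and the standard FreeIPA CLI with an admin Kerberos ticket on the server; the user name "bob" is illustrative.

```shell
# On the IDM server: create the netgroups and nest test1 inside test2.
ipa netgroup-add test1 --desc="test netgroup 1"
ipa netgroup-add test2 --desc="test netgroup 2"
ipa netgroup-add-member test2 --netgroups=test1
ipa netgroup-add-member test1 --users=bob   # illustrative user

# On the client: prime the SSSD cache.
getent netgroup test2

# Back on the server: remove the nested netgroup.
ipa netgroup-remove-member test2 --netgroups=test1

# On the client: even after expiring the cache, the stale members persist.
sss_cache -E
getent netgroup test2
```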
 

Actual results:

On the client, wait for the cache timeout or use sss_cache -E. Either way, getent netgroup still shows test2 containing the test1 users.

Expected results:

The client should only show the users in test2


Additional info:

[root@rhel7-debug-client ~]# sss_cache -N
[root@rhel7-debug-client ~]# getent netgroup test2
test2                 (-,bob,lab.example) (-,tuck,lab.example) (-,builder,lab.example)
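The (host,user,domain) triples that getent netgroup prints can be parsed to list just the user fields, which makes stale members easier to spot. A small sketch using the sample output above (the grep/awk pipeline is illustrative, not part of the report):

```shell
# Sample line as printed by `getent netgroup test2` in this report.
line='test2                 (-,bob,lab.example) (-,tuck,lab.example) (-,builder,lab.example)'

# Extract each (host,user,domain) triple, then print the user field.
printf '%s\n' "$line" | grep -o '([^)]*)' | awk -F, '{print $2}'
# Prints:
#   bob
#   tuck
#   builder
```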

# Results in this.

(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): cancel ldb transaction (nesting: 1)
(2023-01-19 22:50:37): [be[lab.example]] [sysdb_add_basic_netgroup] (0x0400): Error: 17 (File exists)
(2023-01-19 22:50:37): [be[lab.example]] [sysdb_entry_attrs_diff] (0x0400): Entry [name=test1,cn=Netgroups,cn=lab.example,cn=sysdb] differs, reason: ts_cache doesn't trace this type of entry.
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): start ldb transaction (nesting: 1)
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): Added timed event "ldb_kv_callback": 0x55807696e000
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): Added timed event "ldb_kv_timeout": 0x55807696e0d0
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): Running timer event 0x55807696e000 "ldb_kv_callback"
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): Destroying timer event 0x55807696e0d0 "ldb_kv_timeout"
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): Destroying timer event 0x55807696e000 "ldb_kv_callback"
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): commit ldb transaction (nesting: 1)
(2023-01-19 22:50:37): [be[lab.example]] [sysdb_set_entry_attr] (0x0200): Entry [name=test1,cn=Netgroups,cn=lab.example,cn=sysdb] has set [cache] attrs.
(2023-01-19 22:50:37): [be[lab.example]] [ldb] (0x10000): commit ldb transaction (nesting: 0)
(2023-01-19 22:50:37): [be[lab.example]] [sdap_id_op_done] (0x4000): releasing operation connection
(2023-01-19 22:50:37): [be[lab.example]] [dp_req_done] (0x0400): DP Request [Account #15]: Request handler finished [0]: Success
(2023-01-19 22:50:37): [be[lab.example]] [_dp_req_recv] (0x0400): DP Request [Account #15]: Receiving request data.
(2023-01-19 22:50:37): [be[lab.example]] [dp_req_reply_list_success] (0x0400): DP Request [Account #15]: Finished. Success.
(2023-01-19 22:50:37): [be[lab.example]] [dp_req_reply_std] (0x1000): DP Request [Account #15]: Returning [Success]: 0,0,Success
(2023-01-19 22:50:37): [be[lab.example]] [dp_table_value_destructor] (0x0400): Removing [0:1:0x0001:4::lab.example:name=test1] from reply table
(2023-01-19 22:50:37): [be[lab.example]] [dp_req_destructor] (0x0400): DP Request [Account #15]: Request removed.
(2023-01-19 22:50:37): [be[lab.example]] [dp_req_destructor] (0x0400): Number of active DP request: 0
(2023-01-19 22:50:37): [be[lab.example]] [sdap_process_result] (0x2000): Trace: sh[0x5580768e8720], connected[1], ops[(nil)], ldap[0x558076911720]
(2023-01-19 22:50:37): [be[lab.example]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
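A common workaround when expired entries keep being served is to delete the on-disk cache so SSSD must refetch everything. This sketch is not from the report, and it also drops cached credentials, so treat it as a last resort; the domain name lab.example is taken from the logs above.

```shell
# Stop SSSD, delete the persistent cache files for the domain, and restart.
systemctl stop sssd
rm -f /var/lib/sss/db/cache_lab.example.ldb \
      /var/lib/sss/db/timestamps_lab.example.ldb
systemctl start sssd

# The netgroup is now fetched fresh from the server.
getent netgroup test2
```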

Comment 3 Pavel Březina 2023-03-31 11:48:05 UTC
Thank you and I am sorry that it took me so long to get to this bugzilla. I can reproduce the issue.

Upstream ticket:
https://github.com/SSSD/sssd/issues/6652

Comment 4 Alexey Tikhonov 2023-06-01 12:57:54 UTC
Upstream PR: https://github.com/SSSD/sssd/pull/6753

This targets the 2.9+ branches; it won't be fixed in the 1.16 branch (which is used by RHEL 7.9).

Comment 5 Alexey Tikhonov 2023-06-12 09:54:48 UTC
Pushed PR: https://github.com/SSSD/sssd/pull/6753

* `master`
    * b033b0dda972e885f63234aa81dca317c8234c2c - ipa: correctly remove missing attributes on netgroup update
* `sssd-2-9`
    * 640f41588cbe00c9f0d4e4bdfa16ac5337484b2e - ipa: correctly remove missing attributes on netgroup update

Comment 10 errata-xmlrpc 2023-11-07 08:54:17 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (sssd bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6644