Bug 1582092 - passwordMustChange attribute is not honored by a RO consumer if "Chain on Update" is implemented on the RO consumer
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: pre-dev-freeze
Target Release: ---
Assignee: mreynolds
QA Contact: RHDS QE
Docs Contact: Marc Muehlfeld
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-24 08:14 UTC by Ming Davies
Modified: 2021-12-10 16:13 UTC
CC: 6 users

Fixed In Version: 389-ds-base-1.3.8.4-1.el7
Doc Type: Bug Fix
Doc Text:
The password policy feature works correctly if "chain on update" is enabled

On a Directory Server read-only consumer, the `Password must be changed after reset` password policy setting was not enforced, because the flag marking a user who must change their password is set on the connection itself. When this setting was used together with the "chain on update" feature, the flag was lost. As a consequence, the password policy was not enforced. With this update, the server sets the flag on "chain on update" connections properly. As a result, the password policy feature works correctly.
Clone Of:
Environment:
Last Closed: 2018-10-30 10:13:48 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Github 389ds 389-ds-base issues 2810 0 None closed passwordMustChange attribute is not honored by a RO consumer if "Chain on Update" is implemented on the RO consumer 2021-02-09 10:05:56 UTC
Red Hat Product Errata RHSA-2018:3127 0 None None None 2018-10-30 10:14:32 UTC

Description Ming Davies 2018-05-24 08:14:36 UTC
Description of problem:
passwordMustChange attribute is not honored by a RO consumer if "Chain on Update" is implemented on the RO consumer

Topology: a read-write master, a dedicated read-only consumer with "Chain on Update", and a dedicated read-only consumer with NO "Chain on Update".

All three instances have the same global password policy:
passwordCheckSyntax: on
passwordExp: on
passwordHistory: on
passwordInHistory: 8
passwordIsGlobalPolicy: on
passwordLegacyPolicy: off
passwordLockout: on
passwordLockoutDuration: 900
passwordMaxAge: 7776000
passwordMaxFailure: 5
passwordMaxRepeats: 5
passwordMinAge: 60
passwordMinAlphas: 1
passwordMinCategories: 3
passwordMinDigits: 1
passwordMinLength: 12
passwordMinLowers: 1
passwordMinSpecials: 1
passwordMinTokenLength: 3
passwordMinUppers: 1
passwordResetFailureCount: 600
passwordStorageScheme: SSHA512
passwordTrackUpdateTime: on
passwordUnlock: on
nsslapd-pwpolicy-local: on
passwordWarning: 1209600
passwordMustChange: on
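The global policy above lives on the `cn=config` entry, so it can be applied or inspected with ldapmodify/ldapsearch. A minimal, hedged sketch applying just the two attributes at the heart of this bug (host, port, and credentials are placeholders matching the transcripts below):

```ldif
dn: cn=config
changetype: modify
replace: passwordMustChange
passwordMustChange: on
-
replace: passwordIsGlobalPolicy
passwordIsGlobalPolicy: on
```

Applied with, e.g., `ldapmodify -h localhost -p 4389 -D "cn=directory manager" -w password -f pwpolicy.ldif` on each of the three instances.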

1. Reset the user's password as the "cn=Directory manager" on the read-write master:
# ldapmodify -h localhost -p 4389  -D "cn=directory manager" -w password 
dn: uid=asmith,ou=people,dc=mytestrealm,dc=com
changetype: modify
replace: userpassword
userpassword: password

modifying entry "uid=asmith,ou=people,dc=mytestrealm,dc=com"

^C

2. ldapsearch against the read-write master as the user himself:
# ldapsearch -h localhost -p 4389  -D  "uid=asmith,ou=people,dc=mytestrealm,dc=com" -w password -b "uid=asmith,ou=people,dc=mytestrealm,dc=com"
result: 53 Server is unwilling to perform
control: 2.16.840.1.113730.3.4.4 false MA==

3. ldapsearch against the RO consumer with "chain on update" as the user himself:
# ldapsearch -h localhost -p 5389  -D  "uid=asmith,ou=people,dc=mytestrealm,dc=com" -w password -b "uid=asmith,ou=people,dc=mytestrealm,dc=com"
dn: uid=ASmith,ou=People,dc=mytestrealm,dc=com
uid: ASmith
givenName: Alan
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetorgperson
sn: Smith
cn: Alan Smith

4. ldapsearch against the RO consumer with NO "chain on update" as the user himself:
# ldapsearch -h localhost -p 6389  -D  "uid=asmith,ou=people,dc=mytestrealm,dc=com" -w password -b "uid=asmith,ou=people,dc=mytestrealm,dc=com"
result: 53 Server is unwilling to perform
control: 2.16.840.1.113730.3.4.4 false MA==
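Both rejections above carry response control OID 2.16.840.1.113730.3.4.4, which is (to the best of my knowledge) the Netscape password-expired response control; its base64 value `MA==` decodes to the ASCII string "0", i.e. the server is telling the client the password must be changed now. A quick sketch of the decode:

```python
import base64

# Response control attached to the err=53 results above (assumed to be
# the Netscape "password expired" response control).
oid = "2.16.840.1.113730.3.4.4"
value = base64.b64decode("MA==")  # value exactly as printed by ldapsearch

print(oid, value)  # b'0': zero time left, password must be changed now
```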



Corresponding access log:

From the read-write master:
[21/May/2018:16:17:30.383597773 +0100] conn=25 fd=66 slot=66 connection from ::1 to ::1
[21/May/2018:16:17:30.383801732 +0100] conn=25 op=0 BIND dn="cn=directory manager" method=128 version=3
[21/May/2018:16:17:30.383896655 +0100] conn=25 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0000253290 dn="cn=directory manager"
[21/May/2018:16:17:38.909443446 +0100] conn=25 op=1 MOD dn="uid=asmith,ou=people,dc=mytestrealm,dc=com"
[21/May/2018:16:17:38.912448674 +0100] conn=25 op=1 RESULT err=0 tag=103 nentries=0 etime=0.0003588629 csn=5b02e312000000640000



From the read-only consumer with "chain on update" with access log level set at 260:
[21/May/2018:16:17:38.919160831 +0100] conn=40 fd=65 slot=65 connection from 10.44.130.162 to 10.44.130.162
[21/May/2018:16:17:38.919250651 +0100] conn=40 op=0 BIND dn="cn=replication manager,cn=config" method=128 version=3
[21/May/2018:16:17:38.919283159 +0100] conn=Internal op=-1 SRCH base="cn=replication manager,cn=config" scope=0 filter="(|(objectclass=*)(objectclass=ldapsubentry))" attrs=ALL
[21/May/2018:16:17:38.919350751 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=1 etime=0.0000077639
[21/May/2018:16:17:38.919372599 +0100] conn=Internal op=-1 SRCH base="cn=replication manager,cn=config" scope=0 filter="(|(objectclass=*)(objectclass=ldapsubentry))" attrs=ALL
[21/May/2018:16:17:38.919410124 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=1 etime=0.0000042291
[21/May/2018:16:17:38.919513307 +0100] conn=40 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0000320207 dn="cn=replication manager,cn=config"
[21/May/2018:16:17:38.919634735 +0100] conn=40 op=1 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[21/May/2018:16:17:38.919962485 +0100] conn=Internal op=-1 SRCH base="cn=config,cn=chainonupdate,cn=chaining database,cn=plugins,cn=config" scope=1 filter="objectclass=vlvsearch" attrs=ALL
[21/May/2018:16:17:38.920097002 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=0 etime=0.0000142499
[21/May/2018:16:17:38.920111764 +0100] conn=Internal op=-1 SRCH base="cn=config,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" scope=1 filter="objectclass=vlvsearch" attrs=ALL
[21/May/2018:16:17:38.920198322 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=0 etime=0.0000090905
[21/May/2018:16:17:38.920263780 +0100] conn=40 op=1 RESULT err=0 tag=101 nentries=1 etime=0.0000683823
[21/May/2018:16:17:38.920423702 +0100] conn=40 op=2 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[21/May/2018:16:17:38.920714620 +0100] conn=Internal op=-1 SRCH base="cn=config,cn=chainonupdate,cn=chaining database,cn=plugins,cn=config" scope=1 filter="objectclass=vlvsearch" attrs=ALL
[21/May/2018:16:17:38.920827087 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=0 etime=0.0000119638
[21/May/2018:16:17:38.920841920 +0100] conn=Internal op=-1 SRCH base="cn=config,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" scope=1 filter="objectclass=vlvsearch" attrs=ALL
[21/May/2018:16:17:38.920929900 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=0 etime=0.0000092706
[21/May/2018:16:17:38.920988460 +0100] conn=40 op=2 RESULT err=0 tag=101 nentries=1 etime=0.0000634399
[21/May/2018:16:17:38.921691641 +0100] conn=40 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[21/May/2018:16:17:38.921851014 +0100] conn=Internal op=-1 SRCH base="cn=dc\3Dmytestrealm\2Cdc\3Dcom,cn=mapping tree,cn=config" scope=0 filter="objectclass=nsMappingTree" attrs="nsslapd-referral"
[21/May/2018:16:17:38.921901431 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=1 etime=0.0000055624
[21/May/2018:16:17:38.921942502 +0100] conn=40 op=3 RESULT err=0 tag=120 nentries=0 etime=0.0000848189
[21/May/2018:16:17:38.922838226 +0100] conn=40 op=4 MOD dn="uid=asmith,ou=people,dc=mytestrealm,dc=com"
[21/May/2018:16:17:38.926511615 +0100] conn=40 op=4 RESULT err=0 tag=103 nentries=0 etime=0.0003716268 csn=5b02e312000000640000
[21/May/2018:16:17:38.928330105 +0100] conn=40 op=5 MOD dn="uid=asmith,ou=people,dc=mytestrealm,dc=com"
[21/May/2018:16:17:38.930459441 +0100] conn=40 op=5 RESULT err=0 tag=103 nentries=0 etime=0.0002173209 csn=5b02e312000100640000
[21/May/2018:16:17:38.931663514 +0100] conn=40 op=6 MOD dn="uid=asmith,ou=people,dc=mytestrealm,dc=com"
[21/May/2018:16:17:38.933516665 +0100] conn=40 op=6 RESULT err=0 tag=103 nentries=0 etime=0.0001879661 csn=5b02e312000200640000
[21/May/2018:16:17:39.024999860 +0100] conn=40 op=7 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[21/May/2018:16:17:39.026263440 +0100] conn=40 op=7 RESULT err=0 tag=120 nentries=0 etime=0.0001312651
[21/May/2018:16:17:39.028202559 +0100] conn=40 op=8 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[21/May/2018:16:17:39.028361257 +0100] conn=Internal op=-1 SRCH base="cn=dc\3Dmytestrealm\2Cdc\3Dcom,cn=mapping tree,cn=config" scope=0 filter="objectclass=nsMappingTree" attrs="nsslapd-referral"
[21/May/2018:16:17:39.028411735 +0100] conn=Internal op=-1 RESULT err=0 tag=48 nentries=1 etime=0.0000055030
[21/May/2018:16:17:39.028448250 +0100] conn=40 op=8 RESULT err=0 tag=120 nentries=0 etime=0.0000285547
[21/May/2018:16:17:39.029302653 +0100] conn=40 op=9 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[21/May/2018:16:17:39.030143911 +0100] conn=40 op=9 RESULT err=0 tag=120 nentries=0 etime=0.0000862700


From the read-only consumer WITHOUT "chain on update":
[21/May/2018:16:17:38.919176906 +0100] conn=13 fd=66 slot=66 connection from 10.44.130.162 to 10.44.130.162
[21/May/2018:16:17:38.919281859 +0100] conn=13 op=0 BIND dn="cn=replication manager,cn=config" method=128 version=3
[21/May/2018:16:17:38.919553528 +0100] conn=13 op=0 RESULT err=0 tag=97 nentries=0 etime=0.0000343231 dn="cn=replication manager,cn=config"
[21/May/2018:16:17:38.919703266 +0100] conn=13 op=1 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[21/May/2018:16:17:38.920253646 +0100] conn=13 op=1 RESULT err=0 tag=101 nentries=1 etime=0.0000623109
[21/May/2018:16:17:38.921138874 +0100] conn=13 op=2 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[21/May/2018:16:17:38.921620180 +0100] conn=13 op=2 RESULT err=0 tag=101 nentries=1 etime=0.0001266407
[21/May/2018:16:17:38.922009899 +0100] conn=13 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[21/May/2018:16:17:38.922291360 +0100] conn=13 op=3 RESULT err=0 tag=120 nentries=0 etime=0.0000325327
[21/May/2018:16:17:38.923249760 +0100] conn=13 op=4 MOD dn="uid=asmith,ou=people,dc=mytestrealm,dc=com"
[21/May/2018:16:17:38.925370120 +0100] conn=13 op=4 RESULT err=0 tag=103 nentries=0 etime=0.0002167514 csn=5b02e312000000640000
[21/May/2018:16:17:38.925487030 +0100] conn=13 op=5 MOD dn="uid=asmith,ou=people,dc=mytestrealm,dc=com"
[21/May/2018:16:17:38.929201930 +0100] conn=13 op=5 RESULT err=0 tag=103 nentries=0 etime=0.0003737963 csn=5b02e312000100640000
[21/May/2018:16:17:38.929449145 +0100] conn=13 op=6 MOD dn="uid=asmith,ou=people,dc=mytestrealm,dc=com"
[21/May/2018:16:17:38.931569940 +0100] conn=13 op=6 RESULT err=0 tag=103 nentries=0 etime=0.0002150008 csn=5b02e312000200640000
[21/May/2018:16:17:39.053424548 +0100] conn=13 op=7 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[21/May/2018:16:17:39.054552142 +0100] conn=13 op=7 RESULT err=0 tag=120 nentries=0 etime=0.0001182587
[21/May/2018:16:17:39.056654894 +0100] conn=13 op=8 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[21/May/2018:16:17:39.056973755 +0100] conn=13 op=8 RESULT err=0 tag=120 nentries=0 etime=0.0000375810
[21/May/2018:16:17:39.057790490 +0100] conn=13 op=9 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[21/May/2018:16:17:39.058564106 +0100] conn=13 op=9 RESULT err=0 tag=120 nentries=0 etime=0.0000817573
[21/May/2018:16:18:39.066061523 +0100] conn=13 op=10 UNBIND
[21/May/2018:16:18:39.066082926 +0100] conn=13 op=10 fd=66 closed - U1


As you can see from the above, the read-write master and the RO consumer with NO "chain on update" responded correctly, yet the RO consumer with "chain on update" still allowed the user to bind as himself!

Version-Release number of selected component (if applicable):
389-ds-base-libs-1.3.7.5-18.el7.x86_64
389-ds-base-1.3.7.5-18.el7.x86_64
redhat-ds-10.1.0-2.el7dsrv.x86_64


How reproducible:
The issue can easily be reproduced


Steps to Reproduce:
1. Set up three separate instances: one RW master and two RO consumers.
2. Apply the password policy listed above on all three instances.
3. Configure "chain on update" on one of the RO consumers, using https://access.redhat.com/solutions/2743411 as a reference.
4. Carry out the tests shown above.
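The "chain on update" configuration referenced above amounts to pointing the suffix's mapping tree at both the local database and a chaining backend, and selecting the replication plug-in's distribution function. A rough, hedged sketch of the mapping-tree change only (the suffix and the `chainonupdate` backend name are taken from the access logs above; creation of the chaining backend itself is omitted, see the linked solution for the full procedure):

```ldif
dn: cn="dc=mytestrealm,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsslapd-state
nsslapd-state: backend
-
replace: nsslapd-backend
nsslapd-backend: userRoot
nsslapd-backend: chainonupdate
-
replace: nsslapd-distribution-plugin
nsslapd-distribution-plugin: libreplication-plugin
-
replace: nsslapd-distribution-funct
nsslapd-distribution-funct: repl_chain_on_update
```

With this in place, writes (such as the user's own password change) arriving at the RO consumer are chained to the master instead of being refused with a referral.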

Actual results:
The RO consumer with "chain on update" allows the user to bind and search successfully, even though the password was just reset by an administrator and passwordMustChange is on.

Expected results:
The bind should be rejected with "result: 53 Server is unwilling to perform" until the user changes the password, as happens on the master and on the consumer without "chain on update".

Additional info:

Comment 3 mreynolds 2018-06-04 20:55:53 UTC
I can reproduce the problem.  Opening upstream ticket...

Comment 4 mreynolds 2018-06-04 20:57:11 UTC
Upstream ticket:
https://pagure.io/389-ds-base/issue/49751

Comment 6 Akshay Adhikari 2018-08-02 13:26:38 UTC
Build tested: 389-ds-base-1.3.8.4-8.el7.x86_64
 
Setup: 1) Replication master and consumer
       2) passwordMustChange is set to on (Both master and consumer)
       3) Create Chain on Update setting as mentioned: https://access.redhat.com/solutions/2743411
       4) Restart the consumer

Master is on port: 39001
Consumer is on port:39201
 
[root@qeos-26 ~]# ldapmodify -h localhost -p 39001 -D "cn=Directory Manager" -w password -x -a << EOF
> dn: uid=adam1,ou=People,dc=example,dc=com
> changetype: modify
> replace: userpassword
> userpassword: password
> EOF
modifying entry "uid=adam1,ou=People,dc=example,dc=com"
 
[root@qeos-26 ~]# ldapsearch -h localhost -p 39001 -D  "uid=adam1,ou=People,dc=example,dc=com" -b "uid=adam1,ou=People,dc=example,dc=com" -w password
# extended LDIF
#
# LDAPv3
# base <uid=adam1,ou=People,dc=example,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
 
# search result
search: 2
result: 53 Server is unwilling to perform
control: 2.16.840.1.113730.3.4.4 false MA==
 
[root@qeos-26 ~]# ldapsearch -h localhost -p 39201 -D  "uid=adam1,ou=People,dc=example,dc=com" -b "uid=adam1,ou=People,dc=example,dc=com" -w password
# extended LDIF
#
# LDAPv3
# base <uid=adam1,ou=People,dc=example,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
 
# search result
search: 2
result: 53 Server is unwilling to perform
control: 2.16.840.1.113730.3.4.4 false MA==

Comment 11 errata-xmlrpc 2018-10-30 10:13:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3127

