Bug 1483681 - Crash while binding to a server during replication online init
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.4
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: mreynolds
QA Contact: Viktor Ashirov
Docs Contact: Marc Muehlfeld
Keywords: ZStream
Depends On:
Blocks: 1483865
Reported: 2017-08-21 13:26 EDT by mreynolds
Modified: 2018-04-10 10:20 EDT
CC: 4 users

See Also:
Fixed In Version: 389-ds-base-1.3.7.5-4.el7
Doc Type: Bug Fix
Doc Text:
Directory Server now handles binds during an online initialization correctly.
During an online initialization from one Directory Server master to another, the master receiving the changes is temporarily set into referral mode. While in this mode, the server only returns referrals. Previously, Directory Server generated these bind referrals incorrectly, and as a consequence the server could terminate unexpectedly in this scenario. With this update, the server generates bind referrals correctly and therefore handles binds during an online initialization as expected.
Story Points: ---
Clone Of:
Clones: 1483865
Environment:
Last Closed: 2018-04-10 10:19:40 EDT
Type: Bug
Regression: ---




External Trackers
Red Hat Product Errata RHBA-2018:0811 — Last Updated: 2018-04-10 10:20 EDT

Description mreynolds 2017-08-21 13:26:00 EDT
Description of problem:

A crash can occur when master A is initializing master B and, at the same time, a user binds against master B.

Two faults were found in the handling of the 389 Directory Server mapping
tree. First, the tree-free check was not performed atomically, which could
cause a spurious operations error to be returned. Second, during a total
init the referral did not lock the backend (be), but the pw_verify code
assumed a backend was locked. This caused a segfault.

The fix makes the freed check atomic and changes pw_verify to assert that
be is NULL (which is correct: while the referral is in place there is no backend).

Version-Release number of selected component (if applicable):

389-ds-base-1.3.6


How reproducible:

CI testcase: 

dirsrvtests/tests/suites/mapping_tree/referral_during_tot_init.py
Comment 2 mreynolds 2017-08-21 13:27:23 EDT
Upstream ticket:

https://pagure.io/389-ds-base/issue/49356
Comment 5 Simon Pichugin 2017-11-13 09:29:18 EST
======================= test session starts =======================
platform linux -- Python 3.5.1, pytest-3.2.3, py-1.4.34, pluggy-0.4.0 -- /opt/rh/rh-python35/root/usr/bin/python3
cachedir: .cache
metadata: {'Packages': {'pluggy': '0.4.0', 'pytest': '3.2.3', 'py': '1.4.34'}, 'Plugins': {'metadata': '1.5.0', 'html': '1.16.0'}, 'Python': '3.5.1', 'Platform': 'Linux-3.10.0-768.el7.x86_64-x86_64-with-redhat-7.5-Maipo'}
389-ds-base: 1.3.7.5-9.el7
nss: 3.34.0-0.1.beta1.el7
nspr: 4.17.0-1.el7
openldap: 2.4.44-9.el7
svrcore: 4.1.3-2.el7

rootdir: /mnt/tests/rhds/tests/upstream/ds, inifile:
plugins: metadata-1.5.0, html-1.16.0
collected 1 item

dirsrvtests/tests/suites/mapping_tree/referral_during_tot_init.py::test_referral_during_tot PASSED

======================= 1 passed in 39.96 seconds =======================

Marking as VERIFIED.
Comment 12 errata-xmlrpc 2018-04-10 10:19:40 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0811
