Bug 1113022 - 389-ds production segfault: __memcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S:144
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Fedora
Classification: Fedora
Component: 389-ds-base
Version: 20
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Rich Megginson
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-06-25 09:55 UTC by William Brown
Modified: 2020-09-13 21:08 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-07-09 18:23:48 UTC
Type: Bug
Embargoed:


Attachments
thread apply all bt full stacktrace extract (18.96 KB, text/plain)
2014-06-25 09:55 UTC, William Brown
Valgrind trace of slapd during sigsegv (16.00 KB, text/plain)
2014-06-26 03:45 UTC, William Brown
Valgrind trace of slapd during sigsegv (16.00 KB, text/plain)
2014-06-27 00:58 UTC, William Brown
test repo file (215 bytes, text/plain)
2014-07-06 01:10 UTC, Noriko Hosoi


Links
System ID Private Priority Status Summary Last Updated
Github 389ds 389-ds-base issues 1170 0 None None None 2020-09-13 21:08:34 UTC

Description William Brown 2014-06-25 09:55:18 UTC
Created attachment 912004 [details]
thread apply all bt full stacktrace extract

Description of problem:
389-ds-base in a FreeIPA multi-master replica is segfaulting. The host with the issue is attempting to replicate from the other master. The backtrace is:

Program terminated with signal SIGSEGV, Segmentation fault.
#0  __memcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S:144
144             movzbl  (%rsi), %eax


#0  __memcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S:144
#1  0x00007fbf94e74a1e in memcpy (__len=1, __src=<optimized out>, __dest=<optimized out>) at /usr/include/bits/string3.h:51
#2  ber_bvcpy (bvd=bvd@entry=0x7fbf54012cc0, bvs=bvs@entry=0x7fbf66ff3b20) at ldap/servers/slapd/value.c:75
#3  0x00007fbf94e74c31 in ber_bvcpy (bvs=0x7fbf66ff3b20, bvd=0x7fbf54012cc0) at ldap/servers/slapd/value.c:359
#4  slapi_value_set_berval (value=value@entry=0x7fbf54012cc0, bval=bval@entry=0x7fbf66ff3b20) at ldap/servers/slapd/value.c:361
#5  0x00007fbf94e74c86 in value_init (v=v@entry=0x7fbf54012cc0, bval=bval@entry=0x7fbf66ff3b20, t=t@entry=1 '\001', csn=csn@entry=0x0) at ldap/servers/slapd/value.c:207
#6  0x00007fbf94e74ce2 in value_new (bval=0x7fbf66ff3b20, t=t@entry=1 '\001', csn=csn@entry=0x0) at ldap/servers/slapd/value.c:184
#7  0x00007fbf94e74d2c in slapi_value_new_berval (bval=<optimized out>) at ldap/servers/slapd/value.c:116
#8  0x00007fbf94e758f5 in valuearray_init_bervalarray (bvals=bvals@entry=0x7fbf66ff3b30, cvals=cvals@entry=0x7fbf66ff39d0) at ldap/servers/slapd/valueset.c:251
#9  0x00007fbf94e0d6f1 in slapi_entry_add_values (e=e@entry=0x7fbf54012ba0, type=0x7fbf87b9371d "changes", vals=vals@entry=0x7fbf66ff3b30) at ldap/servers/slapd/entry.c:3660
#10 0x00007fbf87b91c48 in modrdn2reple (newsuperior=0x0, ldm=0x7fbf5400b520, deloldrdn=0, newrdn=0x7fbf5401c9c0 "idnsName=maddy+nsuniqueid=773ba311-e91c11e3-9cf8e4c9-853fe36c", e=0x7fbf54012ba0) at ldap/servers/plugins/retrocl/retrocl_po.c:546
#11 write_replog_db (newsuperior=0x0, modrdn_mods=0x7fbf5400b520, newrdn=0x7fbf5401c9c0 "idnsName=maddy+nsuniqueid=773ba311-e91c11e3-9cf8e4c9-853fe36c", log_e=0x0, curtime=1403656344, flag=0, log_m=0x0, dn=0x7fbf54005c70 "idnsName=maddy,idnsname=ipa.blackhats.net.au,cn=dns,dc=ipa,dc=blackhats,dc=net,dc=au", optype=<optimized out>, pb=<optimized out>) at ldap/servers/plugins/retrocl/retrocl_po.c:339
#12 retrocl_postob (pb=<optimized out>, optype=<optimized out>) at ldap/servers/plugins/retrocl/retrocl_po.c:668
#13 0x00007fbf94e47095 in plugin_call_func (list=0x7fbf96092d10, operation=operation@entry=562, pb=pb@entry=0x7fbf54012840, call_one=call_one@entry=0) at ldap/servers/slapd/plugin.c:1489
#14 0x00007fbf94e47248 in plugin_call_list (pb=0x7fbf54012840, operation=562, list=<optimized out>) at ldap/servers/slapd/plugin.c:1451
#15 plugin_call_plugins (pb=pb@entry=0x7fbf54012840, whichfunction=whichfunction@entry=562) at ldap/servers/slapd/plugin.c:413
#16 0x00007fbf89314f4d in ldbm_back_modrdn (pb=<optimized out>) at ldap/servers/slapd/back-ldbm/ldbm_modrdn.c:1117
#17 0x00007fbf94e393e7 in op_shared_rename (pb=pb@entry=0x7fbf54012840, passin_args=0) at ldap/servers/slapd/modrdn.c:652
#18 0x00007fbf94e395c5 in rename_internal_pb (pb=pb@entry=0x7fbf54012840) at ldap/servers/slapd/modrdn.c:392
#19 0x00007fbf94e39d1a in slapi_modrdn_internal_pb (pb=pb@entry=0x7fbf54012840) at ldap/servers/slapd/modrdn.c:330
#20 0x00007fbf8906b7c9 in urp_fixup_rename_entry (entry=entry@entry=0x7fbf5402a780, newrdn=0x7fbf5401c190 "idnsName=maddy+nsuniqueid=773ba311-e91c11e3-9cf8e4c9-853fe36c", opflags=opflags@entry=0) at ldap/servers/plugins/replication/urp.c:843
#21 0x00007fbf8906bbba in urp_annotate_dn (sessionid=sessionid@entry=0x7fbf66ff6410 "conn=7 op=6 csn=53783f00000000040000", entry=0x7fbf5402a780, opcsn=0x7fbf54004ec0, optype=optype@entry=0x7fbf89096083 "ADD") at ldap/servers/plugins/replication/urp.c:1041
#22 0x00007fbf8906c6fb in urp_add_operation (pb=pb@entry=0x7fbf66ffcae0) at ldap/servers/plugins/replication/urp.c:253
#23 0x00007fbf890548e8 in multimaster_bepreop_add (pb=0x7fbf66ffcae0) at ldap/servers/plugins/replication/repl5_plugins.c:755
#24 0x00007fbf94e47095 in plugin_call_func (list=0x7fbf9607a030, operation=operation@entry=450, pb=pb@entry=0x7fbf66ffcae0, call_one=call_one@entry=0) at ldap/servers/slapd/plugin.c:1489
#25 0x00007fbf94e47248 in plugin_call_list (pb=0x7fbf66ffcae0, operation=450, list=<optimized out>) at ldap/servers/slapd/plugin.c:1451
#26 plugin_call_plugins (pb=pb@entry=0x7fbf66ffcae0, whichfunction=whichfunction@entry=450) at ldap/servers/slapd/plugin.c:413
#27 0x00007fbf892f8aaf in ldbm_back_add (pb=0x7fbf66ffcae0) at ldap/servers/slapd/back-ldbm/ldbm_add.c:336
#28 0x00007fbf94df241a in op_shared_add (pb=pb@entry=0x7fbf66ffcae0) at ldap/servers/slapd/add.c:735
#29 0x00007fbf94df3750 in do_add (pb=pb@entry=0x7fbf66ffcae0) at ldap/servers/slapd/add.c:258
#30 0x00007fbf9530aca4 in connection_dispatch_operation (pb=0x7fbf66ffcae0, op=0x7fbf961fece0, conn=0x7fbf800cb410) at ldap/servers/slapd/connection.c:645
#31 connection_threadmain () at ldap/servers/slapd/connection.c:2534
#32 0x00007fbf9321be5b in _pt_root (arg=0x7fbf961f74d0) at ../../../nspr/pr/src/pthreads/ptthread.c:212
#33 0x00007fbf92bbbf33 in start_thread (arg=0x7fbf66ffd700) at pthread_create.c:309
#34 0x00007fbf928e9ded in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

As per http://directory.fedoraproject.org/wiki/FAQ#Debugging_Crashes I have attached a full stacktrace relating to this thread. If needed I can provide the coredump as well.

Version-Release number of selected component (if applicable):
1.3.2.16-1.fc20.x86_64

How reproducible:
Always (with my dataset)

Steps to Reproduce:
1. Start up 389-ds
2. Trigger a replication
3. Wait

Comment 1 Noriko Hosoi 2014-06-25 18:20:17 UTC
> How reproducible:
> Always (with my dataset)

It looks like you are facing some memory corruption.  Is it possible to run the server via valgrind, reproduce the crash, and share the output with us?

> Steps to Reproduce:
> 1. Start up 389-ds
> 2. Trigger a replication
> 3. Wait

Comment 2 William Brown 2014-06-26 00:48:45 UTC
I'm happy to do this for you, but I would like some advice on the best approach. My initial thought was to add valgrind to the systemd unit file and then capture the unit's stdout to send you. Does this seem like a reasonable approach?
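For reference, the systemd approach William describes could be sketched as a drop-in override that wraps ns-slapd in valgrind. This is a hypothetical fragment, not what was actually used in this bug; the file path, instance specifier usage, and log location are assumptions:

```ini
# /etc/systemd/system/dirsrv@.service.d/valgrind.conf (hypothetical drop-in)
[Service]
# valgrind makes startup much slower, so disable the start timeout
TimeoutStartSec=0
# clear the packaged ExecStart=, then re-add it wrapped in valgrind
ExecStart=
ExecStart=/usr/bin/valgrind --tool=memcheck --leak-check=full \
    --num-callers=32 --log-file=/var/log/dirsrv/valgrind.%i.log \
    /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-%i \
    -i /var/run/dirsrv/slapd-%i.pid -w /var/run/dirsrv/slapd-%i.startpid
```

After placing the drop-in, `systemctl daemon-reload` followed by `systemctl restart dirsrv@<ID>.service` would pick it up. The comment below takes a different, script-based route.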

Comment 3 Noriko Hosoi 2014-06-26 01:18:26 UTC
(In reply to William Brown from comment #2)
> I'm happy to do this for you, but would like some advice on the best way to
> do this. My initial thought was to add valgrind to the systemd unit file,
> and to then capture the units stdout to send you? Does this seem like a
> reasonable approach?

Sorry, I run valgrind in this less elegant way...

I create a /usr/sbin/start-dirsrv.val file as follows:
$ diff -twU4 /usr/sbin/start-dirsrv /usr/sbin/start-dirsrv.val
--- /usr/sbin/start-dirsrv	2014-06-25 15:36:50.000000000 -0700
+++ /usr/sbin/start-dirsrv.val	2014-02-11 15:14:02.733924900 -0800
@@ -63,19 +63,13 @@
     #
     # Use systemctl if available and running as root, 
     # otherwise start the instance the old way.
     #
-    if [ -d "/usr/lib/systemd/system" ] && [ "$(id -u)" == "0" ];then
-        /usr/bin/systemctl start dirsrv@$SERV_ID.service
+    export USE_VALGRIND=1
+    cd $SERVERBIN_DIR; valgrind --log-file=/tmp/slapd.$$.out --num-callers=32 --tool=memcheck --leak-check=full --show-reachable=yes --leak-resolution=high ./ns-slapd -D $CONFIG_DIR -i $PIDFILE -w $STARTPIDFILE "$@" -d 0
         if [ $? -ne 0 ]; then
             return 1
         fi
-    else
-        cd $SERVERBIN_DIR; ./ns-slapd -D $CONFIG_DIR -i $PIDFILE -w $STARTPIDFILE "$@"
-        if [ $? -ne 0 ]; then
-            return 1
-        fi
-    fi
     loop_counter=1
     # wait for 10 seconds for the start pid file to appear
     max_count=${STARTPID_TIME:-10}
     while test $loop_counter -le $max_count; do
==================================================
Stop the server, then restart the server as follows:
# /usr/sbin/start-dirsrv.val -d /etc/sysconfig <ID>
where <ID> is the <ID> part of your slapd-<ID>.

It generates /tmp/slapd.<PID>.out to store the valgrind output.

Please do this part.
2. Trigger a replication
3. Wait

Thanks!
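Once the run produces /tmp/slapd.&lt;PID&gt;.out, the interesting error blocks can be pulled out with grep. A small self-contained sketch; the sample log written here is fabricated for illustration (it mirrors the format valgrind actually emits), and in practice LOG would point at the real output file:

```shell
# Sketch: extract the "Invalid read" error blocks from a valgrind log so
# they can be attached to the bug. The sample log below is fabricated;
# on a real run, set LOG to the /tmp/slapd.<PID>.out file instead.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
==10123== Invalid read of size 4
==10123==    at 0x15177FA4: ??? (in schemacompat-plugin.so)
==10123==    by 0x4CAA094: plugin_call_func (plugin.c:1489)
EOF

# print each "Invalid read" line plus the two frames that follow it
grep -A 2 'Invalid read' "$LOG"
```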

Comment 4 William Brown 2014-06-26 03:45:06 UTC
Created attachment 912342 [details]
Valgrind trace of slapd during sigsegv

After a bit of fighting with keytabs (to actually allow the replication to proceed), I have captured the valgrind trace as requested. The output is cut off at the tail because the coredump filled /tmp, but the important components are there.

Comment 5 Noriko Hosoi 2014-06-26 16:40:34 UTC
Ah, the problem is happening in the schema compat plugin, which belongs to the NIS plugin package.

Could you please install slapi-nis-debuginfo?  Also, installing 389-ds-base-debuginfo should help our debugging.

==10123== Invalid read of size 4
==10123==    at 0x15177FA4: ??? (in /usr/lib64/dirsrv/plugins/schemacompat-plugin.so)
==10123==    by 0x15178424: ??? (in /usr/lib64/dirsrv/plugins/schemacompat-plugin.so)
==10123==    by 0x1518617E: ??? (in /usr/lib64/dirsrv/plugins/schemacompat-plugin.so)
==10123==    by 0x15176A2A: ??? (in /usr/lib64/dirsrv/plugins/schemacompat-plugin.so)
==10123==    by 0x15177B85: ??? (in /usr/lib64/dirsrv/plugins/schemacompat-plugin.so)
==10123==    by 0x15177CE0: ??? (in /usr/lib64/dirsrv/plugins/schemacompat-plugin.so)
==10123==    by 0x4CAA094: plugin_call_func (plugin.c:1489)

So, could you please run:
# yum install slapi-nis-debuginfo 389-ds-base-debuginfo

And repeat the test?  Thank you so much!!

Comment 6 William Brown 2014-06-27 00:58:01 UTC
Created attachment 912640 [details]
Valgrind trace of slapd during sigsegv

With extra debug info installed as per request.

Comment 7 Noriko Hosoi 2014-07-02 21:11:42 UTC
Upstream ticket:
https://fedorahosted.org/389/ticket/47839

Comment 8 Noriko Hosoi 2014-07-04 00:37:18 UTC
Hello William,

We think we have fixed the crash bug you ran into.  We haven't officially announced the 389-ds-base-1.3.2.19-1.fc20 release yet; it is still being tested internally.  But would it be possible for you to check whether the new build fixes your problem?

The build is found here:
http://koji.fedoraproject.org/koji/buildinfo?buildID=541873

For instance, if your system's architecture is x86_64, please download these 3 rpms from the download link:
	389-ds-base-1.3.2.19-1.fc20.x86_64.rpm
	389-ds-base-libs-1.3.2.19-1.fc20.x86_64.rpm
	389-ds-base-debuginfo-1.3.2.19-1.fc20.x86_64.rpm
Then, install them with "rpm -U <rpm files>".

Thank you for your help, in advance,
--noriko

Comment 9 William Brown 2014-07-04 06:32:26 UTC
Not sure if it is related to this change, but on server startup I now get:

n=config".
[04/Jul/2014:15:56:19 +091800] NSMMReplicationPlugin - agmtlist_config_init: found 2 replication agreements in DIT
[04/Jul/2014:15:56:19 +091800] - Entry "cn=replSchema,cn=config" has unknown object class "nsSchemaPolicy"
[04/Jul/2014:15:56:19 +091800] NSMMReplicationPlugin - Warning: unable to create configuration entry cn=replSchema, cn=config: Object class violation
[04/Jul/2014:15:56:19 +091800] - Failed to start object plugin Multimaster Replication Plugin
[04/Jul/2014:15:56:19 +091800] - Error: Failed to resolve plugin dependencies
[04/Jul/2014:15:56:19 +091800] - Error: preoperation plugin IPA Version Replication is not started
[04/Jul/2014:15:56:19 +091800] - Error: object plugin Legacy Replication Plugin is not started
[04/Jul/2014:15:56:19 +091800] - Error: object plugin Multimaster Replication Plugin is not started


Which means I can't actually test this fix. After a yum downgrade, the server starts correctly as expected (but of course it can't replicate).

Comment 10 William Brown 2014-07-04 06:36:56 UTC
For now I have downgraded and set nsds5ReplicaEnabled: off on the agreement that causes the segfault.
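Disabling an agreement this way is typically an ldapmodify against the agreement entry. The DN below is hypothetical, since the real agreement name and suffix are instance-specific:

```
dn: cn=exampleAgreement,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5ReplicaEnabled
nsds5ReplicaEnabled: off
```

Fed to something like `ldapmodify -D "cn=directory manager" -W`, this pauses the agreement without deleting it, so it can be re-enabled later by setting the attribute back to `on`.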

Comment 11 Noriko Hosoi 2014-07-04 23:44:56 UTC
Sorry about the inconvenience.  I'm curious why you got this error.

[04/Jul/2014:15:56:19 +091800] - Entry "cn=replSchema,cn=config" has unknown object class "nsSchemaPolicy"

Unfortunately, you downgraded the server, so there is no snapshot, but does the 01core389.ldif in your /etc/dirsrv/slapd-ID/schema have the following line?
objectClasses: ( 2.16.840.1.113730.3.2.328 NAME 'nsSchemaPolicy' DESC 'Netscape defined objectclass' SUP top  MAY ( cn $ schemaUpdateObjectclassAccept $ schemaUpdateObjectclassReject $ schemaUpdateAttributeAccept $ schemaUpdateAttributeReject) X-ORIGIN 'Netscape Directory Server' )

If not, we have to figure out why the schema file was not replaced...  "rpm -U" is supposed to run an upgrade script which lists the schema file 01core389.ldif to be upgraded...

Comment 12 William Brown 2014-07-05 00:32:15 UTC
I still have the rpms though:

New RPM (extracted):
cat etc/dirsrv/schema/01core389.ldif | grep -i nsSchemaPolicy
objectClasses: ( 2.16.840.1.113730.3.2.328 NAME 'nsSchemaPolicy' DESC 'Netscape defined objectclass' SUP top  MAY ( cn $ schemaUpdateObjectclassAccept $ schemaUpdateObjectclassReject $ schemaUpdateAttributeAccept $ schemaUpdateAttributeReject) X-ORIGIN 'Netscape Directory Server' )

Existing 389:
cat /etc/dirsrv/slapd-IPA-BLACKHATS-NET-AU/schema/01core389.ldif | grep -i nsSchemaPolicy
cat /etc/dirsrv/slapd-IPA-BLACKHATS-NET-AU/schema/* | grep -i nsSchemaPolicy

After I run the upgrade, I get:

cat /etc/dirsrv/slapd-IPA-BLACKHATS-NET-AU/schema/01core389.ldif | grep -i nsSchemaPolicy
cat /etc/dirsrv/slapd-IPA-BLACKHATS-NET-AU/schema/* | grep -i nsSchemaPolicy

If I copy the new 01core389.ldif into my instance, it starts correctly and shows the nsSchemaPolicy.

So I assume there is an issue upgrading 01core389.ldif into the instances that already exist on the server ...
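The manual workaround (copying the upgraded template schema into each instance's schema directory) can be sketched like this. The demo builds a throwaway layout with a hypothetical instance name so it is safe to run anywhere; on a real server the root would be /etc/dirsrv:

```shell
# Sketch: copy the upgraded template 01core389.ldif into every slapd
# instance's schema directory under a given dirsrv root.
copy_core_schema() {
    base=$1
    for inst in "$base"/slapd-*/; do
        # each instance keeps its own copy of the template schema
        [ -d "${inst}schema" ] && cp "$base/schema/01core389.ldif" "${inst}schema/"
    done
}

# Demo on a temporary layout (instance name "EXAMPLE" is hypothetical)
root=$(mktemp -d)
mkdir -p "$root/schema" "$root/slapd-EXAMPLE/schema"
echo "objectClasses: ( 2.16.840.1.113730.3.2.328 NAME 'nsSchemaPolicy' ... )" \
    > "$root/schema/01core389.ldif"
copy_core_schema "$root"
grep -l nsSchemaPolicy "$root/slapd-EXAMPLE/schema/"*.ldif
```

The final grep prints the instance copy's path, confirming the definition landed where the server will read it.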

Comment 13 Noriko Hosoi 2014-07-05 02:17:10 UTC
I tested the upgrade on my VM as follows.

1) Install 1.3.2.16 rpm packages.
# rpm -ivh 389-ds-base-1.3.2.16-1.fc20.x86_64.rpm 389-ds-base-libs-1.3.2.16-1.fc20.x86_64.rpm
2) Set up DS and configure 2-way MMR
The config files do not have a replSchema entry.
# egrep replSchema /etc/dirsrv/slapd-vm-042M*/dse.ldif
#
3) Upgrade to 1.3.2.19
# rpm -Uvh 389-ds-base-1.3.2.19-1.fc20.x86_64.rpm 389-ds-base-libs-1.3.2.19-1.fc20.x86_64.rpm 389-ds-base-debuginfo-1.3.2.19-1.fc20.x86_64.rpm
The config files have a replSchema entry.
# egrep replSchema /etc/dirsrv/slapd-vm-042M*/dse.ldif
/etc/dirsrv/slapd-vm-042M0/dse.ldif:dn: cn=replSchema,cn=config
[...]
/etc/dirsrv/slapd-vm-042M1/dse.ldif:dn: cn=replSchema,cn=config
[...]
And there was no problem to restart the servers.

Do you see any differences from your upgrade scenario?

Comment 14 William Brown 2014-07-05 06:13:32 UTC
I did:

* Stop dirsrv instance
* yum upgrade 389-base 389-base-libs (from koji build)
* Start dirsrv instance

Causes the error above if I don't copy 01core389.ldif into the slapd-instance schema directory.

After the upgrade (and moving the 01core389.ldif) into place:

egrep replSchema /etc/dirsrv/slapd-IPA-BLACKHATS-NET-AU/dse.ldif
dn: cn=replSchema,cn=config
cn: replSchema
dn: cn=consumerUpdatePolicy,cn=replSchema,cn=config
dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config

An old dse.ldif from before this procedure shows no output from the egrep line. I still have another system which has *not* been upgraded to the new 389-base package if you want me to test this ... 

Back to the original bug: I have now enabled the replication agreement once more, and am not seeing the crash, so this has fixed the original issue.

Comment 15 Noriko Hosoi 2014-07-06 01:10:21 UTC
Created attachment 914894 [details]
test repo file

I still cannot reproduce your problem...  The attached file "nhosoi-f20.repo" is a repo file pointing at the test repo that contains my local build, built from the same src rpm I asked you to test.

I installed 1.3.2.16, then upgraded to 1.3.2.19 from the test repo:
http://copr-be.cloud.fedoraproject.org/results/nhosoi/389-ds-f20/fedora-$releasever-$basearch/

Once upgrade was done, nsSchemaPolicy was placed in 01core389.ldif and restarting the server was successful for me...

Could you please try "yum upgrade" once again, with nhosoi-f20.repo in your /etc/yum.repos.d directory?

Please note that when our test is done, the repo is going to be deleted...

Thanks for your help.

Comment 16 William Brown 2014-07-06 06:05:58 UTC
Here is an *exact* transcript of my actions on the second IPA domain controller (the one that did not exhibit the segfault and had not yet been upgraded). This was to try to eliminate other variables from this issue.

[root@petunia ~]# systemctl stop dirsrv 
[root@petunia ~]# vim /etc/yum.repos.d/nhosoi.repo
[root@petunia ~]# yum makecache
(1/7): nhosoi-389-ds-f20/20/x86_64/filelists_db                                                                                                                        |  12 kB  00:00:00     
(2/7): nhosoi-389-ds-f20/20/x86_64/primary_db                                                                                                                          |  10 kB  00:00:00     
(3/7): nhosoi-389-ds-f20/20/x86_64/other_db    
[root@petunia ~]# cp -a /etc/dirsrv /root/
[root@petunia ~]# yum upgrade "389-ds*"
Loaded plugins: langpacks, refresh-packagekit
Resolving Dependencies
--> Running transaction check
---> Package 389-ds-base.x86_64 0:1.3.2.16-1.fc20 will be updated
---> Package 389-ds-base.x86_64 0:1.3.2.19-1.fc20 will be an update
---> Package 389-ds-base-libs.x86_64 0:1.3.2.16-1.fc20 will be updated
---> Package 389-ds-base-libs.x86_64 0:1.3.2.19-1.fc20 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================================================
 Package                                         Arch                                  Version                                         Repository                                        Size
==============================================================================================================================================================================================
Updating:
 389-ds-base                                     x86_64                                1.3.2.19-1.fc20                                 nhosoi-389-ds-f20                                1.6 M
 389-ds-base-libs                                x86_64                                1.3.2.19-1.fc20                                 nhosoi-389-ds-f20                                562 k

Transaction Summary
==============================================================================================================================================================================================
Upgrade  2 Packages

Total download size: 2.1 M
Is this ok [y/d/N]: y
....


[root@petunia ~]# diff -uNrp /root/dirsrv /etc/dirsrv | grep replSchema
[root@petunia ~]# 

[root@petunia ~]# systemctl status dirsrv 
dirsrv - 389 Directory Server IPA-BLACKHATS-NET-AU.
   Loaded: loaded (/usr/lib/systemd/system/dirsrv@.service; enabled)
   Active: failed (Result: exit-code) since Sun 2014-07-06 15:30:09 CST; 7s ago
  Process: 21278 ExecStopPost=/bin/rm -f /var/run/dirsrv/slapd-%i.pid (code=exited, status=0/SUCCESS)
  Process: 21256 ExecStart=/usr/sbin/ns-slapd -D /etc/dirsrv/slapd-%i -i /var/run/dirsrv/slapd-%i.pid -w /var/run/dirsrv/slapd-%i.startpid (code=exited, status=0/SUCCESS)
 Main PID: 21257 (code=exited, status=1/FAILURE)

...

Jul 06 15:30:09 petunia.ipa.blackhats.net.au systemd[1]: dirsrv: main process exited, code=exited, status=1/FAILURE
Jul 06 15:30:09 petunia.ipa.blackhats.net.au systemd[1]: Unit dirsrv entered failed state.
[root@petunia ~]# mv /root/dirsrv /root/dirsrv.preupgrade
[root@petunia ~]# cp -a /etc/dirsrv /root/dirsrv.postupgrade
[root@petunia ~]# cd /etc/dirsrv/
[root@petunia dirsrv]# cp schema/01core389.ldif slapd-IPA-BLACKHATS-NET-AU/schema/
cp: overwrite ‘slapd-IPA-BLACKHATS-NET-AU/schema/01core389.ldif’? y
[root@petunia dirsrv]# systemctl start dirsrv
[root@petunia dirsrv]# systemctl status dirsrv 
dirsrv - 389 Directory Server IPA-BLACKHATS-NET-AU.
   Loaded: loaded (/usr/lib/systemd/system/dirsrv@.service; enabled)
   Active: active (running) since Sun 2014-07-06 15:31:11 CST; 21s ago

[root@petunia dirsrv]# diff -uNrp /root/dirsrv.preupgrade /etc/dirsrv | grep replSchema
+dn: cn=replSchema,cn=config
+cn: replSchema
+dn: cn=consumerUpdatePolicy,cn=replSchema,cn=config
+dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config
+dn: cn=replSchema,cn=config
+cn: replSchema
+dn: cn=consumerUpdatePolicy,cn=replSchema,cn=config
+dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config



As you can see, this was the first time I had ever upgraded to this new version of the package; I took pre- and post-upgrade copies of the dirsrv directory for further analysis, and the issue was recreated. To solve it, you can see that I again needed to copy 01core389.ldif by hand into my slapd instance directory.

Comment 17 Noriko Hosoi 2014-07-07 00:46:05 UTC
Puzzled...  I still cannot reproduce your upgrade problem...  I ran the commands you listed in the previous comment.

# systemctl stop dirsrv <== I stopped just one server of the 2-way MMR
# yum makecache
# yum update 389-ds*

# ps -ef | egrep ns-slapd <== both masters are up
nhosoi    7762     1  0 07:58 ?        00:00:00 /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-vm-042M1 -i /var/run/dirsrv/slapd-vm-042M1.pid -w /var/run/dirsrv/slapd-vm-042M1.startpid
nhosoi    7763     1  0 07:58 ?        00:00:00 /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-vm-042M0 -i /var/run/dirsrv/slapd-vm-042M0.pid -w /var/run/dirsrv/slapd-vm-042M0.startpid

And nsSchemaPolicy is found in all the 01core389.ldif files.
# egrep nsSchemaPolicy /etc/dirsrv/schema/01core389.ldif /etc/dirsrv/slapd-vm-042M*/schema/01core389.ldif
/etc/dirsrv/schema/01core389.ldif:objectClasses: ( 2.16.840.1.113730.3.2.328 NAME 'nsSchemaPolicy' DESC 'Netscape defined objectclass' SUP top  MAY ( cn $ schemaUpdateObjectclassAccept $ schemaUpdateObjectclassReject $ schemaUpdateAttributeAccept $ schemaUpdateAttributeReject) X-ORIGIN 'Netscape Directory Server' )
[...]

Comment 18 thierry bordaz 2014-07-07 07:23:14 UTC
I will try to reproduce with IPA installation.
I noticed a slight difference betwee Noriko and William test case.
Noriko did 'yum update' while ' William did 'yum upgrade'.

Comment 19 thierry bordaz 2014-07-07 08:56:52 UTC
I am also unable to reproduce the problem (I used 'yum upgrade'):

On F20
------
uname -a
Linux vm-061.idm.lab.bos.redhat.com 3.13.10-200.fc20.x86_64 #1 SMP Mon Apr 14 20:34:16 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Install freeipa 3.3.5 (DS 1.3.2.16)
---------------------
yum install freeipa-server
	...
	Install  1 Package (+5 Dependent packages)

	Total download size: 4.4 M
	Installed size: 16 M
	Is this ok [y/d/N]: y
	Downloading packages:
	(1/6): 389-ds-base-libs-1.3.2.16-1.fc20.x86_64.rpm                                                           | 551 kB  00:00:00     
	(2/6): 389-ds-base-1.3.2.16-1.fc20.x86_64.rpm                                                                | 1.6 MB  00:00:00     
	(3/6): freeipa-admintools-3.3.5-1.fc20.x86_64.rpm                                                            |  47 kB  00:00:00     
	(4/6): freeipa-client-3.3.5-1.fc20.x86_64.rpm                                                                | 135 kB  00:00:00     
	(5/6): freeipa-python-3.3.5-1.fc20.x86_64.rpm                                                                | 942 kB  00:00:00     
	(6/6): freeipa-server-3.3.5-1.fc20.x86_64.rpm                                                                | 1.2 MB  00:00:00  
	...


Configure IPA
-------------
ipa-server-install -p Secret123 -a Secret123

Check replSchema in (DS 1.3.2.16 schema)
--------------------
grep -i nsSchemaPolicy /etc/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/schema/*
<empty>

add repos
---------
	[nhosoi-389-ds-f20]
	name=Copr repo for 389-ds-f20 owned by nhosoi
	baseurl=http://copr-be.cloud.fedoraproject.org/results/nhosoi/389-ds-f20/fedora-$releasever-$basearch/
	skip_if_unavailable=True
	gpgcheck=0
	enabled=1

Upgrade 389-ds to 1.3.2.19
--------------------------
systemctl stop dirsrv
vi /etc/yum.repos.d/nhosoi.repo
cp -a /etc/dirsrv /root/
yum makecache
yum upgrade "389-ds*"
Loaded plugins: auto-update-debuginfo, versionlock
Resolving Dependencies
--> Running transaction check
---> Package 389-ds-base.x86_64 0:1.3.2.16-1.fc20 will be updated
---> Package 389-ds-base.x86_64 0:1.3.2.19-1.fc20 will be an update
---> Package 389-ds-base-libs.x86_64 0:1.3.2.16-1.fc20 will be updated
---> Package 389-ds-base-libs.x86_64 0:1.3.2.19-1.fc20 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                          Arch                   Version                            Repository                         Size
====================================================================================================================================
Updating:
 389-ds-base                      x86_64                 1.3.2.19-1.fc20                    nhosoi-389-ds-f20                 1.6 M
 389-ds-base-libs                 x86_64                 1.3.2.19-1.fc20                    nhosoi-389-ds-f20                 562 k

Transaction Summary
====================================================================================================================================
Upgrade  2 Packages

Total download size: 2.1 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): 389-ds-base-libs-1.3.2.19-1.fc20.x86_64.rpm                                                           | 562 kB  00:00:01     
(2/2): 389-ds-base-1.3.2.19-1.fc20.x86_64.rpm                                                                | 1.6 MB  00:00:01     
------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                               1.3 MB/s | 2.1 MB  00:00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 389-ds-base-libs-1.3.2.19-1.fc20.x86_64                                                                          1/4 
  Updating   : 389-ds-base-1.3.2.19-1.fc20.x86_64                                                                               2/4 
  Cleanup    : 389-ds-base-1.3.2.16-1.fc20.x86_64                                                                               3/4 
  Cleanup    : 389-ds-base-libs-1.3.2.16-1.fc20.x86_64                                                                          4/4 
  Verifying  : 389-ds-base-libs-1.3.2.19-1.fc20.x86_64                                                                          1/4 
  Verifying  : 389-ds-base-1.3.2.19-1.fc20.x86_64                                                                               2/4 
  Verifying  : 389-ds-base-libs-1.3.2.16-1.fc20.x86_64                                                                          3/4 
  Verifying  : 389-ds-base-1.3.2.16-1.fc20.x86_64                                                                               4/4 

Updated:
  389-ds-base.x86_64 0:1.3.2.19-1.fc20                           389-ds-base-libs.x86_64 0:1.3.2.19-1.fc20                          

Complete!


New schema files have been copied
---------------------------------
diff -uNrp /root/dirsrv /etc/dirsrv | grep -i nsSchemaPolicy
+objectClasses: ( 2.16.840.1.113730.3.2.328 NAME 'nsSchemaPolicy' DESC 'Netscape defined objectclass' SUP top  MAY ( cn $ schemaUpdateObjectclassAccept $ schemaUpdateObjectclassReject $ schemaUpdateAttributeAccept $ schemaUpdateAttributeReject) X-ORIGIN 'Netscape Directory Server' )
+objectClasses: ( 2.16.840.1.113730.3.2.328 NAME 'nsSchemaPolicy' DESC 'Netscape defined objectclass' SUP top  MAY ( cn $ schemaUpdateObjectclassAccept $ schemaUpdateObjectclassReject $ schemaUpdateAttributeAccept $ schemaUpdateAttributeReject) X-ORIGIN 'Netscape Directory Server' )


diff /root/dirsrv/schema/01core389.ldif /etc/dirsrv/schema/01core389.ldif
156a157,160
> attributeTypes: ( 2.16.840.1.113730.3.1.2165 NAME 'schemaUpdateObjectclassAccept' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
> attributeTypes: ( 2.16.840.1.113730.3.1.2166 NAME 'schemaUpdateObjectclassReject' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
> attributeTypes: ( 2.16.840.1.113730.3.1.2167 NAME 'schemaUpdateAttributeAccept' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
> attributeTypes: ( 2.16.840.1.113730.3.1.2168 NAME 'schemaUpdateAttributeReject' DESC 'Netscape defined attribute type' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Netscape Directory Server' )
174a179
> objectClasses: ( 2.16.840.1.113730.3.2.328 NAME 'nsSchemaPolicy' DESC 'Netscape defined objectclass' SUP top  MAY ( cn $ schemaUpdateObjectclassAccept $ schemaUpdateObjectclassReject $ schemaUpdateAttributeAccept $ schemaUpdateAttributeReject) X-ORIGIN 'Netscape Directory Server' )


389-ds can start
----------------
systemctl status dirsrv
dirsrv - 389 Directory Server IDM-LAB-BOS-REDHAT-COM.
   Loaded: loaded (/usr/lib/systemd/system/dirsrv@.service; enabled)
   Active: inactive (dead)

Jul 07 04:15:02 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 2
Jul 07 04:15:02 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 3
Jul 07 04:15:02 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 1
Jul 07 04:15:02 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 2
Jul 07 04:15:02 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 3
Jul 07 04:15:05 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 1
Jul 07 04:15:06 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 2
Jul 07 04:15:06 vm-061.idm.lab.bos.redhat.com ns-slapd[4369]: GSSAPI server step 3
Jul 07 04:26:41 vm-061.idm.lab.bos.redhat.com systemd[1]: Stopping 389 Directory Server IDM-LAB-BOS-REDHAT-COM....
Jul 07 04:26:42 vm-061.idm.lab.bos.redhat.com systemd[1]: Stopped 389 Directory Server IDM-LAB-BOS-REDHAT-COM..

systemctl start dirsrv

Schema policies entries have been created
-----------------------------------------
ldapsearch -h localhost -p 389 -D "cn=directory manager" -w Secret123  -b "cn=replSchema,cn=config" -LLL
dn: cn=replSchema,cn=config
objectClass: top
objectClass: nsSchemaPolicy
cn: replSchema

dn: cn=consumerUpdatePolicy,cn=replSchema,cn=config
objectClass: top
objectClass: nsSchemaPolicy
cn: consumerUpdatePolicy
schemaUpdateObjectclassAccept: printer-uri-oid
schemaUpdateAttributeAccept: 2.16.840.1.113730.3.1.2110

dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config
objectClass: top
objectClass: nsSchemaPolicy
cn: supplierUpdatePolicy
schemaUpdateObjectclassAccept: printer-uri-oid
schemaUpdateAttributeAccept: 2.16.840.1.113730.3.1.2110


mv /root/dirsrv /root/dirsrv.preupgrade
cp -a /etc/dirsrv /root/dirsrv.postupgrade
diff -uNrp /root/dirsrv.preupgrade /etc/dirsrv | grep replSchema
+dn: cn=replSchema,cn=config
+cn: replSchema
+dn: cn=consumerUpdatePolicy,cn=replSchema,cn=config
+dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config
+dn: cn=replSchema,cn=config
+cn: replSchema
+dn: cn=supplierUpdatePolicy,cn=replSchema,cn=config
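
The snapshot-and-diff check above (cp -a the config tree aside, upgrade, then diff for replSchema entries) can be wrapped in a couple of helpers. This is a sketch: snapshot_tree and diff_for_replschema are hypothetical names, and the paths in the example comment are the ones used in this report.

```shell
#!/bin/sh
# Snapshot a directory tree, and later diff it against the live tree to
# see exactly which replSchema-related lines an upgrade introduced.

snapshot_tree() {           # usage: snapshot_tree SRC DEST
    cp -a "$1" "$2"
}

diff_for_replschema() {     # usage: diff_for_replschema OLD NEW
    # diff exits 1 when the trees differ; that is the expected case here,
    # so swallow the nonzero status and keep only the replSchema lines.
    diff -uNr "$1" "$2" | grep replSchema || true
}

# Example against the real paths from this report (run as root):
#   snapshot_tree /etc/dirsrv /root/dirsrv.preupgrade
#   ... yum upgrade 389-ds-base ...
#   diff_for_replschema /root/dirsrv.preupgrade /etc/dirsrv
```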

Comment 20 thierry bordaz 2014-07-07 12:53:07 UTC
I succeeded in reproducing a similar problem if, prior to the upgrade, I modify the schema file /etc/dirsrv/schema/01core389.ldif.

In that case, upgrade creates a 'rpmnew' file:
[root@vm-061 schema]# ls -l 01core389.ldif*
-r--r-----. 1 dirsrv dirsrv 28865 Jul  7 08:45 01core389.ldif
-r--r-----. 1 dirsrv dirsrv 29741 Jul  7 08:45 01core389.ldif.rpmnew
[root@vm-061 schema]# ls -l /etc/dirsrv/schema/01core389.ldif*
-rw-r--r--. 1 root root 28865 Jul  7 08:45 /etc/dirsrv/schema/01core389.ldif
-rw-r--r--. 1 root root 29741 Jul  5 20:54 /etc/dirsrv/schema/01core389.ldif.rpmnew


and starting it creates:

[07/Jul/2014:08:47:26 -0400] - Entry "cn=replSchema,cn=config" has unknown object class "nsSchemaPolicy"
[07/Jul/2014:08:47:26 -0400] NSMMReplicationPlugin - Warning: unable to create configuration entry cn=replSchema, cn=config: Object class violation
[07/Jul/2014:08:47:26 -0400] - Failed to start object plugin Multimaster Replication Plugin
[07/Jul/2014:08:47:26 -0400] - Entry "cn=replSchema,cn=config" has unknown object class "nsSchemaPolicy"
[07/Jul/2014:08:47:26 -0400] NSMMReplicationPlugin - Warning: unable to create configuration entry cn=replSchema, cn=config: Object class violation
[07/Jul/2014:08:47:26 -0400] - Failed to start object plugin Multimaster Replication Plugin
[07/Jul/2014:08:47:26 -0400] - Entry "cn=replSchema,cn=config" has unknown object class "nsSchemaPolicy"
[07/Jul/2014:08:47:26 -0400] NSMMReplicationPlugin - Warning: unable to create configuration entry cn=replSchema, cn=config: Object class violation
[07/Jul/2014:08:47:26 -0400] - Failed to start object plugin Multimaster Replication Plugin
[07/Jul/2014:08:47:26 -0400] - Error: Failed to resolve plugin dependencies
[07/Jul/2014:08:47:26 -0400] - Error: object plugin Legacy Replication Plugin is not started
[07/Jul/2014:08:47:26 -0400] - Error: object plugin Multimaster Replication Plugin is not started


It looks like this is the normal behaviour of an upgrade, so I do not think it is a bug.
A workaround is to revert the schema file before the upgrade, or to copy the file into place after the upgrade.
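
The "unknown object class nsSchemaPolicy" errors above boil down to the instance's on-disk 01core389.ldif predating the new schema. A quick on-disk check can be sketched like this (has_nsschemapolicy is a hypothetical helper name; the path layout follows this report):

```shell
#!/bin/sh
# Check whether a 01core389.ldif actually defines the nsSchemaPolicy
# objectclass that the Multimaster Replication Plugin now requires.

has_nsschemapolicy() {      # usage: has_nsschemapolicy SCHEMA_FILE
    grep -q "NAME 'nsSchemaPolicy'" "$1"
}

# Example (instance name is a placeholder):
#   has_nsschemapolicy /etc/dirsrv/slapd-YOURID/schema/01core389.ldif ||
#       echo "stale instance schema; copy in the upgraded 01core389.ldif"
```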

Comment 21 Noriko Hosoi 2014-07-07 16:08:21 UTC
Thierry, thank you soooo much for your investigation and analysis!!

William, did Thierry's finding match your case?  Do you see the rpmnew file /etc/dirsrv/schema/01core389.ldif.rpmnew?

If yes, could you merge your changes into 01core389.ldif.rpmnew and replace 01core389.ldif with 01core389.ldif.rpmnew?  Then, please copy it to each instance schema directory, /etc/dirsrv/slapd-YOURID/schema.

Now, does your server start?  Once it starts, could you run the consumer replica initialization, which was the original issue? ;)
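
The workaround in this comment (promote the .rpmnew file, then copy it into each instance) can be sketched as a small helper. promote_rpmnew_schema is a hypothetical name, and the slapd-* layout follows the paths shown in this report; run any manual merge of local edits into the .rpmnew file first.

```shell
#!/bin/sh
# Promote 01core389.ldif.rpmnew to the canonical schema file, then copy
# it into every instance schema directory so ns-slapd picks it up on the
# next start.

promote_rpmnew_schema() {   # usage: promote_rpmnew_schema DIRSRV_ROOT
    root="$1"
    schema="$root/schema/01core389.ldif"
    mv "$schema.rpmnew" "$schema"
    for inst in "$root"/slapd-*/schema; do
        [ -d "$inst" ] || continue
        cp -p "$schema" "$inst/"
    done
}

# On a real box (as root):
#   promote_rpmnew_schema /etc/dirsrv
```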

Comment 22 William Brown 2014-07-08 02:23:20 UTC
Thanks for testing this, but this is not my case at all.

My 01core389.ldif is UNMODIFIED both in dirsrv/schema AND the slapd instance schema. Post upgrade there are no rpmnew files:

[root@petunia ~]# cd /etc/dirsrv/
[root@petunia dirsrv]# find . | grep -i rpmnew
[root@petunia dirsrv]# 

Additionally, 'upgrade' versus 'update' only changes obsoletes handling in yum; this wouldn't affect or cause this issue.

Did you setup a replica pair in your environment? Perhaps that's related ... 

Anyway, I have a third ipa dev system I can test this on if needed .... However from my end, it seems quite reproducible. 

Where is the script that does the move / deploy of the core ldif file? Is it part of the rpm?

Comment 23 Noriko Hosoi 2014-07-08 02:35:00 UTC
Yes, it is part of the rpm.  You can find it here:

$ rpm -qf /usr/share/dirsrv/updates/60upgradeschemafiles.pl
389-ds-base-1.3.2.19-1.fc20.x86_64

Comment 24 thierry bordaz 2014-07-08 09:48:42 UTC
Regarding the schema issue

I tested with a single replica. I will test two replicas.
How did you do the upgrade?  Did you stop both instances, then upgrade both, then restart both?  Or did you upgrade (stop/upgrade/start) one server and then the second?

Would you mind redoing the upgrade process and attaching the schema files from before and after the upgrade?

thanks

Comment 25 William Brown 2014-07-08 13:04:54 UTC
> I tested with a single replica. I will test two replicas.
> How did you do the upgrade. Did you stop both instances, then upgraded both,
> then restart both ? or did you upgrade (stop/upgrade/start) one server then
> the second ?

I have two systems, A and B. The standard IPA upgrade is to upgrade a single node at a time. Since the original issue was that site B was unable to start, I was running A and not B, so I upgraded B and noticed the schema issue. Once B was upgraded and replicating, I then stopped and upgraded A (again causing the issue).

> 
> Do you mind to redo the upgrade process and attach the schema files before
> and after the upgrade.

I don't need to redo it: I had the presence of mind to snapshot these directories before the upgrade, after the upgrade, and after the workaround. What specific files do you want? Just 01core389.ldif? Here is a list of the file checksums at least ...

PRE
3ac3013e14d6ab6036329127bbf60fae5c3a6ae4f04e9a363b127564f4698ff1  dirsrv.preupgrade/schema/01core389.ldif
3ac3013e14d6ab6036329127bbf60fae5c3a6ae4f04e9a363b127564f4698ff1  dirsrv.preupgrade/slapd-IPA-BLACKHATS-NET-AU/schema/01core389.ldif
POST
b522db1d347f8b1decc5f7b047cb94c2f84dbb59843fa30b17c6a0c580b90cee  dirsrv.postupgrade/schema/01core389.ldif
3ac3013e14d6ab6036329127bbf60fae5c3a6ae4f04e9a363b127564f4698ff1  dirsrv.postupgrade/slapd-IPA-BLACKHATS-NET-AU/schema/01core389.ldif
CURRENT (IE WITH FIX)
b522db1d347f8b1decc5f7b047cb94c2f84dbb59843fa30b17c6a0c580b90cee  /etc/dirsrv/schema/01core389.ldif
b522db1d347f8b1decc5f7b047cb94c2f84dbb59843fa30b17c6a0c580b90cee  /etc/dirsrv/slapd-IPA-BLACKHATS-NET-AU/schema/01core389.ldif

Please note:
* I have not altered 01core389.ldif before the upgrade.
* It is not upgraded or altered post upgrade.
* It matches the updated file once I complete the workaround (i.e. copying it to the slapd instance).
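
The checksum comparison in this comment can be automated. A minimal sketch (same_schema is a hypothetical name; sha256sum matches the tool used here, and the example paths are the ones from this report):

```shell
#!/bin/sh
# Compare two copies of a schema file by SHA-256 to tell whether the
# upgrade actually reached the instance copy.

same_schema() {             # usage: same_schema FILE_A FILE_B
    a=$(sha256sum "$1" | cut -d' ' -f1)
    b=$(sha256sum "$2" | cut -d' ' -f1)
    [ "$a" = "$b" ]         # exit 0 if the contents are identical
}

# Example (instance name is a placeholder):
#   same_schema /etc/dirsrv/schema/01core389.ldif \
#               /etc/dirsrv/slapd-YOURID/schema/01core389.ldif ||
#       echo "instance schema differs from the global copy"
```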

Comment 26 thierry bordaz 2014-07-08 14:23:30 UTC
Thanks, William, for this info.
So far I have failed to reproduce this with a server-replica topology, but I will retry, as I missed one step during configuration.

I wanted to confirm the checksum (md5sum) on my box; before the upgrade I have:

md5sum /etc/dirsrv/slapd-xxx/schema/01core389.ldif /etc/dirsrv/schema/01core389.ldif 
3572a7a36dc2b12a55683673cfc2c9ba  /etc/dirsrv/slapd-xxx/schema/01core389.ldif
3572a7a36dc2b12a55683673cfc2c9ba  /etc/dirsrv/schema/01core389.ldif

rpm -qa | grep 389-ds
389-ds-base-libs-1.3.2.16-1.fc20.x86_64
389-ds-base-devel-1.3.2.16-1.fc20.x86_64
389-ds-base-1.3.2.16-1.fc20.x86_64

Whereas on your platform you have the checksum:
3ac3013e14d6ab6036329127bbf60fae5c3a6ae4f04e9a363b127564f4698ff1  

Are you using 'md5sum' ?

Comment 27 William Brown 2014-07-08 14:46:53 UTC
> md5sum /etc/dirsrv/slapd-xxx/schema/01core389.ldif
> /etc/dirsrv/schema/01core389.ldif 
> 3572a7a36dc2b12a55683673cfc2c9ba  /etc/dirsrv/slapd-xxx/schema/01core389.ldif
> 3572a7a36dc2b12a55683673cfc2c9ba  /etc/dirsrv/schema/01core389.ldif

> 
> Where on your platform you have checksum:
> 3ac3013e14d6ab6036329127bbf60fae5c3a6ae4f04e9a363b127564f4698ff1  
> 
> Are you using 'md5sum' ?

sha256sum dirsrv.postupgrade/schema/01core389.ldif

Comment 28 thierry bordaz 2014-07-08 16:11:41 UTC
Right, I also had the same checksum in my pre-upgrade state, so before the upgrade 01core389.ldif was not modified.

I still fail to reproduce. Here are my steps:
Host A - F20 - Freeipa 3.3.5.1 - 389-DS 1.3.2.16
Host B - F20 - Freeipa 3.3.5.1 - 389-DS 1.3.2.16

On Host A
 - ipa-server-install
 - ipa-replica-prepare <host_B>

On Host B
 - ipa-replica-install --setup-ca <host_B>

On Host A
 - stop dirsrv
 - enable 1.3.2.19 repos
 - yum upgrade "389-ds" 
 - 01core389.ldif is updated (/etc/dirsrv/schema and under the instance)
 - Start dirsrv
 - Creation of the replSchema policy entries

Would you check whether anything differs from what you did?

Comment 29 William Brown 2014-07-08 23:10:52 UTC
I have a full transcript of what I did above, but from reading what you say I can't see any difference.

What strikes me is that it wasn't a single server but multiple servers that I could reproduce this on. I'll investigate further and raise a new bug if I can isolate it.

Otherwise, the original bug is resolved. If you have a bodhi build I'm happy to review it.

Comment 30 Noriko Hosoi 2014-07-09 18:23:48 UTC
Hi William,

I've sent out a 1.3.2.19 release announcement.  If you could review:
https://admin.fedoraproject.org/updates/389-ds-base-1.3.2.19-1.fc20
I'd greatly appreciate it.

Let me close this bug for now.  If you run into any problem, please feel free to reopen it.

Thank you so much for your help.
--noriko

Comment 31 Tomas Babej 2014-07-21 11:18:38 UTC
I also ran into upgrade problem on my home LDAP server (pure 389, no IPA). The 01core389.ldif was not modified before the upgrade, there is no .rpmnew file. 

However, the 01core389.ldif is not upgraded, and replacing it by the copy from the rpm does resolve the issue.

In my case, it was upgrade from 1.3.2-16 to 1.3.2-19 version.

Comment 32 Rich Megginson 2014-07-21 17:16:54 UTC
(In reply to Tomas Babej from comment #31)
> I also ran into upgrade problem on my home LDAP server (pure 389, no IPA).
> The 01core389.ldif was not modified before the upgrade, there is no .rpmnew
> file. 
> 
> However, the 01core389.ldif is not upgraded, and replacing it by the copy
> from the rpm does resolve the issue.
> 
> In my case, it was upgrade from 1.3.2-16 to 1.3.2-19 version.

So do we have a reproducer now?  Should we reopen this bug?

Comment 33 Noriko Hosoi 2014-07-21 17:53:49 UTC
(In reply to Rich Megginson from comment #32)
> (In reply to Tomas Babej from comment #31)
> > I also ran into upgrade problem on my home LDAP server (pure 389, no IPA).
> > The 01core389.ldif was not modified before the upgrade, there is no .rpmnew
> > file. 
> > 
> > However, the 01core389.ldif is not upgraded, and replacing it by the copy
> > from the rpm does resolve the issue.
> > 
> > In my case, it was upgrade from 1.3.2-16 to 1.3.2-19 version.
> 
> So do we have a reproducer now?  Should we reopen this bug?

Well, this bug is for the segfault, and the failure to upgrade the schema files was found during bug verification.  I'd prefer to have a new bug for the schema upgrade...

And it'd be great to have a reproducer (some conditions that make it happen).  So far neither dev nor QE has successfully reproduced it...

Comment 34 Rich Megginson 2014-07-21 18:15:17 UTC
(In reply to Noriko Hosoi from comment #33)
> (In reply to Rich Megginson from comment #32)
> > (In reply to Tomas Babej from comment #31)
> > > I also ran into upgrade problem on my home LDAP server (pure 389, no IPA).
> > > The 01core389.ldif was not modified before the upgrade, there is no .rpmnew
> > > file. 
> > > 
> > > However, the 01core389.ldif is not upgraded, and replacing it by the copy
> > > from the rpm does resolve the issue.
> > > 
> > > In my case, it was upgrade from 1.3.2-16 to 1.3.2-19 version.
> > 
> > So do we have a reproducer now?  Should we reopen this bug?
> 
> Well, this bug is for segfault.  And the failure of upgrading schema files
> was found in the bug verification.  I'd prefer to have a new bug for the
> schema upgrade...
> 
> And it'd be great to have a reproducer (some conditiona to make it happen). 
> So far dev nor qe hasn't successfully reproduced it yet...

Ok.  Tomas, if you can reproduce the schema bug, please open a new bug for that issue.
