Bug 1267035

Summary: Multiple ceph-osd daemons get started
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RADOS
Version: 1.2.3
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Reporter: Kyle Squizzato <ksquizza>
Assignee: Kefu Chai <kchai>
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
CC: ceph-eng-bugs, dzafman, flucifre, kchai, kdreyer, sjust
Target Milestone: rc
Target Release: 1.3.3
Hardware: All
OS: Linux
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-02-01 05:15:22 UTC

Description Kyle Squizzato 2015-09-28 21:21:13 UTC
Description of problem:
This may be a race condition in the starting of the ceph-osd daemons.  From what I can see:

 * When we start an OSD daemon, it creates a pid file under /var/run/ceph.
 * The init script uses the pid file to prevent second starts of the daemon, and removes the pid file during the stop sequence.
 * The script then performs a mounting step to mount the data store; during this phase, if the pid file was not created quickly enough in the step above, the daemon may actually get started a second time (a rough simulation of this window is sketched below).
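
The following is a minimal Python sketch, not the actual /etc/init.d/ceph logic, of the suspected window: the "already running?" check relies only on the pid file, and the daemon only writes that file some time after it is launched, so a second start attempt that arrives inside that window launches another daemon. The path, delay, and helper names here are made up for illustration.

import os
import time
import multiprocessing

PIDFILE = "/tmp/demo-ceph-osd.pid"    # hypothetical stand-in for a pid file under /var/run/ceph

def fake_osd(pidfile_delay):
    # Stand-in for ceph-osd: the pid file only appears pidfile_delay seconds after launch.
    time.sleep(pidfile_delay)
    with open(PIDFILE, "w") as f:
        f.write(str(os.getpid()))
    time.sleep(2)                      # pretend to keep running

def init_script_start(label, pidfile_delay):
    # Stand-in for the init script's start action: the pid file is the only guard.
    if os.path.exists(PIDFILE):
        print(label, "pid file present, refusing second start")
        return None
    proc = multiprocessing.Process(target=fake_osd, args=(pidfile_delay,))
    proc.start()
    print(label, "started daemon, pid", proc.pid)
    return proc

if __name__ == "__main__":
    if os.path.exists(PIDFILE):
        os.unlink(PIDFILE)
    a = init_script_start("start #1:", pidfile_delay=1.0)
    b = init_script_start("start #2:", pidfile_delay=1.0)   # arrives before the pid file exists
    for p in (a, b):
        if p:
            p.join()
    # Both starts succeed here; on a real node the second ceph-osd would then fail to
    # take the fsid lock and log "lock_fsid failed to lock".
    if os.path.exists(PIDFILE):
        os.unlink(PIDFILE)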

Version-Release number of selected component (if applicable):
ceph 0.80.9 

How reproducible:
Rare

Steps to Reproduce:
I haven't been able to reproduce, but http://tracker.ceph.com/issues/13238 states "this problem can be reproduced easily if we add some sleep, like a second or so, before pidfile_write in global_init_postfork_start."

Actual results:
Duplicate ceph-osd daemons get started, resulting in "lock_fsid failed to lock" messages in the logs.
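
For context on the error message: my understanding is that ceph-osd takes an advisory lock on a file in its data directory, so a second daemon started against the same data directory cannot acquire it and logs the failure. The Python sketch below shows the same advisory-locking pattern; it uses flock on a made-up path and is not the actual lock_fsid code.

import fcntl
import os

LOCKFILE = "/tmp/demo-osd-fsid"        # hypothetical stand-in for the fsid file in the osd data dir

def try_lock(path):
    # Open the file and try to take a non-blocking exclusive advisory lock.
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd                      # keep the fd open to hold the lock
    except BlockingIOError:
        os.close(fd)
        return None

if __name__ == "__main__":
    first = try_lock(LOCKFILE)         # "first daemon" takes the lock
    second = try_lock(LOCKFILE)        # "second daemon" fails, analogous to lock_fsid failing
    print("first got lock:", first is not None)
    print("second got lock:", second is not None)
    if first is not None:
        os.close(first)
    os.unlink(LOCKFILE)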

Expected results:
Only one ceph-osd should get started

Comment 1 Kyle Squizzato 2015-10-08 21:02:17 UTC
It appears that this behavior does not exist in Hammer; it seems to have been fixed there.

Comment 2 Ken Dreyer (Red Hat) 2015-10-19 15:53:47 UTC
From a recent comment in the upstream ticket, this might not be fixed in Hammer after all. (It's not clear to me what a proper fix in /etc/init.d/ceph would look like)

Comment 5 Kefu Chai 2016-02-01 05:15:22 UTC
Closing this as a duplicate of BZ#1299409.

*** This bug has been marked as a duplicate of bug 1299409 ***