Bug 1262974 - upstart: make config less generous about restarts
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RADOS
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Version: 1.2.4
Assigned To: Ken Dreyer (Red Hat)
Depends On:
Reported: 2015-09-14 15:24 EDT by Samuel Just
Modified: 2017-07-30 11:07 EDT (History)
CC: 6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Prior to this update, the upstart init system would restart Ceph's daemons too frequently: up to five times in 30 seconds. This could lead to startup respawn loops that mask other issues, such as disk state problems. This update adjusts Ceph's upstart settings to restart daemons less aggressively, at most three times in 30 minutes.
Story Points: ---
Clone Of:
: 1262976
Last Closed: 2015-10-01 17:01:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

External Trackers
Tracker: Ceph Project Bug Tracker | ID: 11798 | Priority: None | Status: None | Summary: None | Last Updated: Never

Description Samuel Just 2015-09-14 15:24:35 EDT
Comment 2 Samuel Just 2015-09-14 15:27:54 EDT
Description of problem:

upstart is too generous about restarting broken daemons

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. start the cluster
2. mount the /var/log/ceph directory read-only on a mon node
3. watch ceph-mon restart repeatedly

Actual results:

ceph-mon repeatedly restarts

Expected results:

ceph-mon restarts a few times and then remains down

Additional info:
Comment 4 Ken Dreyer (Red Hat) 2015-09-15 17:52:33 EDT
Fix will be in non-RHEL Ceph v0.80.8.5
Comment 6 Tamil 2015-09-18 17:45:54 EDT
Verified; the fix works as expected.

1. sudo pkill -9 -f 'ceph -i 0'  (kills osd.0)
2. wait 30 seconds
3. check that upstart restarts the daemon

Repeat the above steps two more times; after that, upstart stops restarting the daemon.

Later, to bring osd.0 back up, run "sudo start ceph-osd id=0".

upstart should not restart a daemon that has died more than 3 times within a 30-minute window.
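The limit being verified here corresponds to upstart's "respawn limit" stanza. A minimal sketch of the changed job configuration, assuming the file path and the old values quoted in the doc text (the exact stanza shipped in the fix is not shown in this report):

```
# /etc/init/ceph-osd.conf (hypothetical excerpt; the same change
# would apply to the ceph-mon and ceph-mds job files)

respawn

# Old, overly generous policy: up to 5 respawns within 30 seconds
#respawn limit 5 30

# New policy: give up after 3 respawns within 30 minutes (1800 s)
respawn limit 3 1800
```

Once the limit is hit, upstart leaves the job stopped, which is why a manual "sudo start ceph-osd id=0" is needed to bring the daemon back.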
Comment 7 Tamil 2015-09-18 17:46:55 EDT
If the fix does not take effect after upgrading from RH Ceph 1.2.3 to 1.2.3-2 (or from 1.2.3 to 1.2.3-1 to 1.2.3-2), reboot the cluster once and retry.
Comment 9 errata-xmlrpc 2015-10-01 17:01:06 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

