Cause:
The application server used by luci has an unfortunate design: it
starts initializing the application only while the original
initscript-launched process is terminating, which makes it hard to
account for later errors in the final status of the initscript
execution.
Consequence:
Application configuration errors are not reflected in the
initscript's exit status. More specifically, "service luci start"
indicates that luci is running, when in fact it was running only
for a moment before ultimately failing.
Fix:
The initscript is granted two explicit grace periods of 1 second
each (first for the PID file to be created, then for it not to
disappear) during which the real outcome is decided if it is not
known already. This timeout is configured via the PID_FILE_WAIT
item in /etc/sysconfig/luci, and the script will complain whenever
the configured value is found to be insufficient.
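The two grace periods described above can be sketched as follows. This is an illustrative sketch only, not the shipped initscript; the function names and the /tmp path are hypothetical, and only PID_FILE_WAIT (from /etc/sysconfig/luci) is taken from the fix description.

```shell
#!/bin/sh
# Sketch of the deferred-outcome check; pidfile path and function
# names are illustrative, not the actual luci initscript code.
pidfile=/tmp/demo-luci.pid
PID_FILE_WAIT=${PID_FILE_WAIT:-1}   # seconds, per /etc/sysconfig/luci

wait_pidfile_created() {
    # Grace period 1: wait up to PID_FILE_WAIT seconds for the PID
    # file to appear after the launcher process has already exited.
    i=0
    while [ ! -f "$pidfile" ]; do
        [ "$i" -ge "$PID_FILE_WAIT" ] && return 1
        sleep 1
        i=$((i + 1))
    done
    return 0
}

wait_pidfile_persists() {
    # Grace period 2: after the wait, the PID file must still exist
    # and the process it names must still be alive.
    sleep "$PID_FILE_WAIT"
    [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
}
```

If either check fails, the initscript would report failure instead of the premature [ OK ].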
Result:
A failed start of the luci service should no longer be marked as a
success. At worst, the user is warned that the grace period is
likely too short for the particular deployment, which can be
resolved easily.
Description of problem:
With an invalid config file (/var/lib/luci/etc/luci.ini), running luci with
"service luci start" reports that it has started when it has not.
Version-Release number of selected component (if applicable):
luci-0.26.0-13.el6.x86_64
How reproducible: always
Steps to Reproduce:
1. edit /var/lib/luci/etc/luci.ini and introduce a syntax error
(change an uppercase letter to lowercase, for example) and save the file
2. service luci start
3. check that luci is not running
Actual results:
the init script reports that luci has started correctly, but it has not
started and is not running
Expected results:
report that there has been an error trying to run luci
Additional info:
The luci log file shows that luci was not started because of a parsing
error in luci.ini, which probably means that the init script is not properly
checking the return value, or that this value is incorrectly reported by
python itself.
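The masked failure can be demonstrated in isolation. This is a minimal illustration, not luci code: a launcher that forks a worker into the background and returns sees only its own (successful) status, even though the worker fails right afterwards, much like luci hitting a luci.ini parse error after its launcher has already exited.

```shell
#!/bin/sh
# Illustration (hypothetical names, not the luci initscript): the
# launcher's exit status cannot reflect a failure that happens in the
# detached worker after the launcher has already returned.
start_daemon() {
    # The "worker" detaches and fails shortly afterwards.
    ( sleep 1; exit 1 ) &
    return 0   # the launcher itself saw no error
}

start_daemon
echo "launcher exit status: $?"   # prints 0 -- the failure is invisible
```

Any check based solely on the launcher's exit status is therefore blind to this class of error.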
1) luci is not running.
$ service luci restart
Stop luci... [FAILED]
Start luci... [ OK ]
Point your web browser to https://rhel63-02:8084 (or equivalent) to access luci
$ lsof -i :8084
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python 18581 luci 5u IPv4 9943462 0t0 TCP *:8084 (LISTEN)
2) breaking luci config file
$ vim /var/lib/luci/etc/luci.ini
$ service luci restart
Stop luci... [ OK ]
Start luci... [ OK ]
Point your web browser to https://rhel63-02:8084 (or equivalent) to access luci
$ lsof -i :8084
$
Comment 2 Jan Pokorný [poki]
2012-11-19 12:54:06 UTC
This is caused by the fact that the "serve" command of Paste (the WSGI
server used by luci) in daemon mode spawns a child process and exits
successfully (so the initscript observes success), and it is this detached
child that actually uses the base configuration file (via the loadserver
and loadapp methods).
Cf. /usr/lib/python*/site-packages/paste/script/serve.py
But agreed, this is suboptimal, and there could be an additional check that
luci did not bail out due to a problem like this.
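Such an additional check could be sketched as below: after the launcher exits, wait briefly and then verify that the detached child recorded in the PID file actually survived its configuration loading. The function name and the wait of 1 second are assumptions for illustration, not the shipped code.

```shell
#!/bin/sh
# Hypothetical post-start check: confirm the detached child is still
# alive once it has had a moment to run loadserver/loadapp.
check_started() {
    pidfile="$1"
    sleep 1                        # let the child parse its config
    pid=$(cat "$pidfile" 2>/dev/null) || return 1   # PID file gone?
    kill -0 "$pid" 2>/dev/null     # process still alive?
}
```

A failed check here would let the initscript print [FAILED] even though the launcher itself exited successfully.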
Please be aware that the base config file (/var/lib/luci/etc/luci.ini)
is not publicly exposed (hence those warnings) and, additionally, it is
generated by the initscript on the fly when missing. Hence, the solution
is easy: just remove that file and restart luci.
Comment 3 Jan Pokorný [poki]
2012-11-19 13:25:27 UTC
Comment 4 Jan Pokorný [poki]
2012-11-19 14:00:18 UTC
Re [comment 3]:
TODO:
- refactor the success/failure functions and print the final verdict only
after it is known that startup has indeed succeeded
(the only effect of the patch so far is to return the correct exit code,
not to make a proper announcement to the user)
- the restart action should also include the deferred check of whether
luci is running
Comment 14 Jan Pokorný [poki]
2013-08-13 13:20:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2017-0766.html