Description by Gabriele Cerami, 2016-11-04 14:34:58 UTC
Description of problem:
In some cases, even when haproxy fails to start, the systemd wrapper reports an exit code that is interpreted as success.
Version-Release number of selected component (if applicable):
Version : 1.5.14
Release : 3.el7
How reproducible:
100%
Steps to Reproduce:
1. Configure haproxy to listen on a port.
2. Launch a dummy listener on the same port.
3. Start haproxy and wait for it to fail.
4. Look at the systemd logs and note that haproxy is reported as started successfully (see the reproduction sketch below this list).
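As an illustration only, here is a minimal by-hand reproduction of the steps above. The port number 8080, the use of nc as the dummy listener, and the exact nc syntax are assumptions on my part; the wrapper command itself is the same one used in the verification later in this bug:
# nc -l 8080 &    (or "nc -l -p 8080", depending on the netcat variant; haproxy is assumed to bind this port)
# /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
# echo $?
0    (unpatched wrapper: the bind failure is still reported as success)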
Actual results:
Haproxy is reported as started, but no process is running.
Expected results:
Haproxy is reported as failed to start.
Additional info:
Log snippet where it is clear that the wrapper is reporting 256 as the RC. 256 is truncated to 0 by the time systemd sees it, because exit codes are generally only 8-bit integers (a short demonstration follows the log snippet).
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: [WARNING] 291/062528 (29398) : config : missing timeouts for proxy 'rabbitmq'.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: | While not properly invalid, you will certainly encounter various problems
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: | with such a configuration. To fix this, please ensure that all following
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: [WARNING] 291/062528 (29398) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it yo
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy aodh started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy glance_api started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy glance_registry started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy haproxy.stats started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy heat_api started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy ironic started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy ironic-inspector started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy keystone_admin started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy keystone_public started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy mistral started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy neutron started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy nova_metadata started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy nova_osapi started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy rabbitmq started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy swift_proxy_server started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy zaqar_api started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy[29398]: Proxy zaqar_ws started.
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: [ALERT] 291/062528 (29398) : Starting proxy ceilometer: cannot bind socket [192.0.2.3:8777]
Oct 18 09:25:28 tripleo-centos-7-tripleo-test-cloud-rh1-4970270 haproxy-systemd-wrapper[29397]: haproxy-systemd-wrapper: exit, haproxy RC=256
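For reference, a short shell demonstration of the truncation described above (illustrative values, not taken from the log): only the low 8 bits of an exit status survive, so 256 becomes 0.
# bash -c 'exit 256'; echo $?
0
# bash -c 'exit 1'; echo $?
1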
(In reply to Gabriele Cerami from comment #1)
> Bug was already discussed with upstream here:
>
> https://www.mail-archive.com/haproxy@formilux.org/msg23896.html
>
> Upstream released a patch here:
>
> http://git.haproxy.org/?p=haproxy.git;a=commit;h=f7659cb10cb0420c7ca06fad1067207021d2a078
>
> That should also be backported to 1.5 and 1.6 versions
We don't distribute version 1.6 in RHEL7.
Did you try the latest version (1.5.18) in brew? That rebase will be released in RHEL7.3. I will check to see if that backport is in 1.5.18 on Monday. Otherwise this will have to wait for RHEL7.4, unless you can get the acks needed to get this into RHEL7.3.
> > That should also be backported to 1.5 and 1.6 versions
>
> We don't distribute version 1.6 in RHEL7.
Sorry, what I meant was that the fix is in place for upstream release 1.7 and should be backported to 1.6 and 1.5 upstream soon, so it may take a while to actually land in upstream 1.5.
> Did you try the latest version (1.5.18) in brew? That rebase will be release
> in RHEL7.3. I will check to see if that backport is in 1.5.18 on Monday.
> Else this will have to wait for RHEL7.4 unless you can get the acks needed
> to get this into RHEL7.3.
1.5.18 was tagged 5 months ago, and the last changelog entry is from 21 Oct. So there is no patch for this yet, and as I said above, it may take a while to get into 1.5.(maybe)19 at this point. We'll probably have to wait for 7.4.
Patch backported and applied to source. With the patch in place and haproxy configured with a frontend listening on port 80 (bind *:80 in this case), start httpd so that haproxy will fail to start because the port is already in use:
# systemctl start httpd
# /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
<7>haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
[ALERT] 320/221723 (63474) : Starting frontend mesa_vip: cannot bind socket [0.0.0.0:80]
<5>haproxy-systemd-wrapper: exit, haproxy RC=1
# echo $?
1
Previously, the "echo $?" would return 0 in this case.
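As an additional check when haproxy is started through systemd rather than by hand, the recorded main-process exit status can be read back from the unit. ExecMainStatus is a standard systemd service property, but the values shown here are an assumption based on the behavior described above, not output from this verification:
# systemctl start haproxy
# systemctl show haproxy.service --property=ExecMainStatus
ExecMainStatus=1    (patched wrapper; the unpatched wrapper leaves this at 0)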
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2017:2170
Comment 13 by Red Hat Bugzilla, 2023-09-14 03:33:53 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days