Description of problem:
After the HAProxy container (from the HAProxy static pod) is recreated (for example due to a liveness probe failure), haproxy events are no longer logged in the container logs. In the current code, after the HAProxy container is restarted (e.g. due to the liveness probe), we hit error [1]:

[1]
+ socat UNIX-RECV:/var/run/haproxy/haproxy-log.sock STDOUT
2019/08/14 07:21:03 socat[8] E "/var/run/haproxy/haproxy-log.sock" exists

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Delete the haproxy container on one of the master nodes
2. Check the logs of the new container

Actual results:
From the container log:
+ socat UNIX-RECV:/var/run/haproxy/haproxy-log.sock STDOUT
2019/08/14 07:21:03 socat[8] E "/var/run/haproxy/haproxy-log.sock" exists

Expected results:
All haproxy events appear in the container logs

Additional info:
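The error happens because the UNIX socket file survives the container restart on the shared volume, so socat refuses to bind it again. A minimal sketch of the fix, hedged: the socket paths and the socat invocation come from the error message above, while the surrounding script is simplified and illustrative, not the actual pod startup script.

```shell
#!/bin/bash
# Hedged sketch: before socat binds its UNIX sockets, remove any stale
# socket files left behind by the previous container instance.
haproxy_sock=/var/run/haproxy/haproxy-master.sock
haproxy_log_sock=/var/run/haproxy/haproxy-log.sock

# Without this cleanup, a restarted container fails with:
#   socat[8] E "/var/run/haproxy/haproxy-log.sock" exists
rm -f "$haproxy_sock" "$haproxy_log_sock"

# socat can now create the sockets fresh (run in the background,
# as in the pod's startup trace):
# socat UNIX-RECV:"$haproxy_log_sock" STDOUT &
```

`rm -f` is idempotent: it succeeds whether or not a stale socket exists, so the same startup path works for both the first start and every restart.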
Checked with 4.2.0-0.nightly-2019-09-25-214437, and it's fixed, so moving to VERIFIED.

sh-4.4# crictl ps
CONTAINER ID        IMAGE                                                              CREATED          STATE     NAME      ATTEMPT   POD ID
0cdfe9f3e45b2       71ecefbf68f5aa34388084e3f534d748cec16e8c060b1f3447c8786e67f78e79   10 seconds ago   Running   haproxy   6         1730c18231ae6

sh-4.4# crictl logs 0cdfe9f3e45b2
+ declare -r haproxy_sock=/var/run/haproxy/haproxy-master.sock
+ declare -r haproxy_log_sock=/var/run/haproxy/haproxy-log.sock
+ export -f msg_handler
+ export -f reload_haproxy
+ rm -f /var/run/haproxy/haproxy-master.sock /var/run/haproxy/haproxy-log.sock
+ '[' -s /etc/haproxy/haproxy.cfg ']'
+ socat UNIX-RECV:/var/run/haproxy/haproxy-log.sock STDOUT
+ socat UNIX-LISTEN:/var/run/haproxy/haproxy-master.sock,fork 'system:bash -c msg_handler'
+ /usr/sbin/haproxy -W -db -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/run/haproxy.pid
<133>Sep 26 09:01:51 haproxy[9]: Proxy main started.
<133>Sep 26 09:01:51 haproxy[9]: Proxy health_check_http_url started.
<133>Sep 26 09:01:51 haproxy[9]: Proxy stats started.
<133>Sep 26 09:01:51 haproxy[9]: Proxy masters started.
<133>Sep 26 09:01:51 haproxy[10]: Health check for server masters/etcd-0.share-0919a.qe.rhcloud.com. succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 6ms, status: 2/2 UP.
<133>Sep 26 09:01:52 haproxy[10]: Health check for server masters/etcd-2.share-0919a.qe.rhcloud.com. succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 6ms, status: 2/2 UP.
<134>Sep 26 09:01:52 haproxy[10]: Connect from 127.0.0.1:59878 to 127.0.0.1:7443 (main/TCP)
<133>Sep 26 09:01:53 haproxy[10]: Health check for server masters/etcd-1.share-0919a.qe.rhcloud.com. succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 7ms, status: 2/2 UP.
<134>Sep 26 09:02:04 haproxy[10]: Connect from 192.168.0.20:50506 to 192.168.0.20:50936 (health_check_http_url/HTTP)
The client send: reload
<134>Sep 26 09:02:08 haproxy[10]: Connect from 127.0.0.1:60116 to 127.0.0.1:7443 (main/TCP)
[WARNING] 268/090208 (16) : Failed to connect to the old process socket '/var/lib/haproxy/run/haproxy.sock'
[ALERT] 268/090208 (16) : Failed to get the sockets from the old process!
<133>Sep 26 09:02:08 haproxy[16]: Proxy main started.
<133>Sep 26 09:02:08 haproxy[16]: Proxy health_check_http_url started.
<133>Sep 26 09:02:08 haproxy[16]: Proxy stats started.
<133>Sep 26 09:02:08 haproxy[16]: Proxy masters started.
<132>Sep 26 09:02:08 haproxy[10]: Stopping frontend main in 0 ms.
<132>Sep 26 09:02:08 haproxy[10]: Stopping proxy health_check_http_url in 0 ms.
[WARNING] 268/090151 (9) : Exiting Master process...
<132>Sep 26 09:02:08 haproxy[10]: Stopping proxy stats in 0 ms.
<132>Sep 26 09:02:08 haproxy[10]: Stopping backend masters in 0 ms.
<132>Sep 26 09:02:08 haproxy[10]: Proxy main stopped (FE: 2 conns, BE: 0 conns).
<132>Sep 26 09:02:08 haproxy[10]: Proxy health_check_http_url stopped (FE: 1 conns, BE: 0 conns).
<132>Sep 26 09:02:08 haproxy[10]: Proxy stats stopped (FE: 0 conns, BE: 0 conns).
<132>Sep 26 09:02:08 haproxy[10]: Proxy masters stopped (FE: 0 conns, BE: 2 conns).
<133>Sep 26 09:02:08 haproxy[17]: Health check for server masters/etcd-2.share-0919a.qe.rhcloud.com. succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 5ms, status: 2/2 UP.
<133>Sep 26 09:02:09 haproxy[17]: Health check for server masters/etcd-1.share-0919a.qe.rhcloud.com. succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 7ms, status: 2/2 UP.
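The reload sequence in the log above is driven by the msg_handler function that the startup trace exports: socat forks a bash running it for each client connection to haproxy-master.sock, and a client writing "reload" triggers a haproxy reload. The real function body is not shown in the logs, so the following is a hypothetical minimal shape; reload_haproxy is likewise an assumed helper, not code taken from the pod.

```shell
#!/bin/bash
# Hypothetical sketch of msg_handler (the actual implementation is not
# visible in the logs above). socat UNIX-LISTEN:...,fork runs one instance
# of this per connection; stdin is the client side of the socket.
msg_handler() {
  local msg
  read -r msg
  echo "The client send: $msg"
  if [ "$msg" = "reload" ]; then
    reload_haproxy   # assumed helper that re-execs haproxy for the reload
  fi
}
```

Reading exactly one line per connection matches the fork-per-connection model: each client sends a single command and disconnects, and unknown messages are simply logged and ignored.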
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922