Description of problem:
When installing UPI on vSphere with the upstream Terraform scripts, the bootstrap server is removed after bootstrapping completes. The bootstrap IP is then deleted from haproxy.conf on the lb server and the haproxy service is restarted, but the restart fails with the error below:

[root@lb-0 ~]# systemctl status haproxy.service
● haproxy.service - haproxy
   Loaded: loaded (/etc/systemd/system/haproxy.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2020-08-19 07:56:17 UTC; 10s ago
  Process: 2867 ExecStart=/bin/podman run --name haproxy --net=host --privileged --entrypoint=/usr/sbin/haproxy -v /etc/haproxy/haproxy.conf:/var/lib/haproxy/conf/haproxy.conf:Z quay.io/ope>
  Process: 2826 ExecStartPre=/bin/podman pull quay.io/openshift/origin-haproxy-router (code=exited, status=0/SUCCESS)
  Process: 2783 ExecStartPre=/bin/podman rm haproxy (code=exited, status=2)
  Process: 2736 ExecStartPre=/bin/podman kill haproxy (code=exited, status=125)
 Main PID: 2867 (code=exited, status=125)

Aug 19 07:56:17 lb-0 podman[2826]: Copying blob sha256:2386bb240ba67844adddaf64a980429491335a872ae8028a9eb65d5130077199
Aug 19 07:56:17 lb-0 podman[2826]: Copying blob sha256:fc5b206e9329a1674dd9e8efbee45c9be28d0d0dcbabba3c6bb67a2f22cfcf2a
Aug 19 07:56:17 lb-0 podman[2826]: Copying config sha256:b6075710f0ddd6810f6391d6e5d13f7af218ab7343e321b90511e5efe95bcc11
Aug 19 07:56:17 lb-0 podman[2826]: Writing manifest to image destination
Aug 19 07:56:17 lb-0 podman[2826]: Storing signatures
Aug 19 07:56:17 lb-0 podman[2826]: b6075710f0ddd6810f6391d6e5d13f7af218ab7343e321b90511e5efe95bcc11
Aug 19 07:56:17 lb-0 systemd[1]: Started haproxy.
Aug 19 07:56:17 lb-0 podman[2867]: Error: error creating container storage: the container name "haproxy" is already in use by "fd24b162c52941676c02cf3830f35b017ccb2d0b71fea1f21764dfb70624d9>
Aug 19 07:56:17 lb-0 systemd[1]: haproxy.service: Main process exited, code=exited, status=125/n/a
Aug 19 07:56:17 lb-0 systemd[1]: haproxy.service: Failed with result 'exit-code'.

The issue reproduces starting from 4.4, where the lb server was introduced in the upstream Terraform scripts.

As a workaround, adding the line below to haproxy.service allows the service to restart successfully:

ExecStop=/bin/podman rm -f haproxy

Version-Release number of the following components:
latest 4.6 nightly build

How reproducible:
Always

Steps to Reproduce:
1. Install UPI OCP on vSphere

Actual results:
The haproxy service fails to restart.

Expected results:
The haproxy service restarts successfully.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag
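For reference, a sketch of what the amended unit might look like, reconstructed from the ExecStart/ExecStartPre lines in the status output above. The ExecStart line is truncated in the log, so the arguments after the image name are omitted here; the "[Unit]"/"[Install]" sections and the "-" prefixes on the cleanup commands (which let the unit start even when no old container exists) are assumptions, not copied from the actual script:

```ini
[Unit]
Description=haproxy
After=network-online.target
Wants=network-online.target

[Service]
# Best-effort cleanup of any leftover container before starting; the "-"
# prefix tells systemd to ignore a non-zero exit (e.g. no such container).
ExecStartPre=-/bin/podman kill haproxy
ExecStartPre=-/bin/podman rm haproxy
ExecStartPre=/bin/podman pull quay.io/openshift/origin-haproxy-router
# Run arguments after the image name are truncated in the journal output
# and omitted here.
ExecStart=/bin/podman run --name haproxy --net=host --privileged \
    --entrypoint=/usr/sbin/haproxy \
    -v /etc/haproxy/haproxy.conf:/var/lib/haproxy/conf/haproxy.conf:Z \
    quay.io/openshift/origin-haproxy-router
# Workaround: force-remove the container on stop so a later restart does
# not fail with 'container name "haproxy" is already in use'.
ExecStop=/bin/podman rm -f haproxy

[Install]
WantedBy=multi-user.target
```

With ExecStop in place, systemd removes the stale container record when the service stops, so the next start's `podman run --name haproxy` can create the container again.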
*** Bug 1867120 has been marked as a duplicate of this bug. ***
The modification has already been added to the QE script, and OCP installs successfully with it, so moving the bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196
*** Bug 1867868 has been marked as a duplicate of this bug. ***