Description of problem:
While installing the self-hosted engine on RHVH 4.3, deployment fails with the error below:

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "internal error: Network is already in use by interface eth0"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
. . .
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20190516122924-eys1ej.log

Version-Release number of selected component (if applicable):
RHVH 4.3
RHV-M Appliance 4.3

How reproducible:
Always

Steps to Reproduce:
1. Install RHVH 4.3.
2. Deploy the self-hosted engine using the deployment script (hosted-engine --deploy).
3. Enter the requested details, such as the OVA file, data center name, cluster name, etc.
4. Follow the steps given in the product documentation:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/index#Installing_the_Red_Hat_Virtualization_Manager_SHE_cli_deploy

Actual results:
Deployment fails with the error below:

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "internal error: Network is already in use by interface eth0"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

Expected results:
The self-hosted engine deploys successfully.

Additional info:
Tried to reproduce this error on the previous version by installing RHVH 4.2 and the RHV-M Appliance 4.2; there the self-hosted engine deploys successfully.
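For context, libvirt raises "Network is already in use by interface eth0" when it is asked to bring up a network whose subnet already contains the address assigned to a host interface. A minimal sketch of that condition, assuming eth0 got a lease from libvirt's stock default network (192.168.122.0/24; the 192.168.122.57 address below is purely illustrative):

```python
import ipaddress

# libvirt's well-known "default" NAT network
LIBVIRT_DEFAULT = ipaddress.ip_network("192.168.122.0/24")

def address_clashes(iface_addr: str) -> bool:
    """True if the interface address already sits inside the subnet
    libvirt is being asked to use -- the condition behind the
    'Network is already in use by interface eth0' error."""
    return ipaddress.ip_address(iface_addr) in LIBVIRT_DEFAULT

# eth0 of the nested RHVH VM got an address from the outer default
# network, so the inner deployment's attempt to use the same subnet
# is refused by libvirt.
print(address_clashes("192.168.122.57"))  # inside 192.168.122.0/24: clash
print(address_clashes("10.0.0.57"))       # outside the subnet: no clash
```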
To reproduce the issue with RHV 4.3:
Downloaded the RHVH and RHV-M Appliance files from https://access.redhat.com/downloads/content/415/ver=4.3/rhel---7/4.3/x86_64/product-software

OVA file names: RHV-M Appliance for RHV 4.3 Async & RHV-M Appliance for RHV 4.3
RHVH file names: Hypervisor Image for RHV 4.3 Async & Hypervisor Image for RHV 4.3
Created attachment 1569442 [details] sosreport
It's RHVH-specific; I don't see this happening on RHEL 7.7 with 4.3.5 or on RHEL 7.6 with 4.3.4. Moving to the RHVH team.
QE will verify it once a new build is available.
According to https://bugzilla.redhat.com/show_bug.cgi?id=1698643#c3, does this need to be tested on a VM node rather than a bare-metal node?
Yes, you have to test it over a VM (with nested virtualization support) created with virt-manager or similar and attached to the default libvirt network so that the addresses of the two virtual networks are going to clash.
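The clash described above can be sketched as a plain subnet-overlap check: the outer VM is attached to libvirt's default network (192.168.122.0/24), and the nested deployment tries to define a network on the same subnet. Any non-default subnet for the outer VM avoids the collision (the 192.168.124.0/24 alternative below is just illustrative):

```python
import ipaddress

DEFAULT_NET = ipaddress.ip_network("192.168.122.0/24")  # libvirt default

def clashes_with_default(outer_cidr: str) -> bool:
    """True if a VM attached to a network with this subnet will collide
    with the default network the nested deployment tries to bring up."""
    return ipaddress.ip_network(outer_cidr).overlaps(DEFAULT_NET)

print(clashes_with_default("192.168.122.0/24"))  # outer VM on "default": clash
print(clashes_with_default("192.168.124.0/24"))  # custom outer network: no clash
```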
(In reply to Simone Tiraboschi from comment #9)
> Yes, you have to test it over a VM (with nested virtualization support)
> created with virt-manager or similar and attached to the default libvirt
> network so that the addresses of the two virtual networks are going to clash.

Thanks for replying during your vacation, Simone. It sounds a little complex; I will try following your comment. We have synchronized this information in our meeting today.
Test version (downloaded from https://access.redhat.com/downloads/content/415/ver=4.3/rhel---7/4.3/x86_64/product-software):
RHVH-4.3-20190418.4-RHVH-x86_64-dvd1.iso
rhvm-appliance-4.3-20190429.0.el7.ova

Test steps (according to comment 0 and comment 9):
1. Enable nested virtualization on a physical host (Fedora 28).
2. Create a virtual network with the 192.168.122.1/24 IP space.
3. Create a VM installed from RHVH-4.3-20190418.4-RHVH-x86_64-dvd1.iso with virt-manager.
4. Configure the VM details to enable VT-x on the VM CPU.
5. Deploy the hosted engine with the CLI.

Result:
QE reproduced this issue.
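The clashing virtual network in step 2 matches libvirt's stock default network; if a default network already exists on the host it can simply be used, otherwise one can be defined from an XML like the sketch below (the network and bridge names are illustrative, not from the original report) with `virsh net-define` and `virsh net-start`:

```xml
<network>
  <name>clash-net</name>
  <forward mode='nat'/>
  <bridge name='virbr-clash' stp='on'/>
  <!-- same IP space as libvirt's default network, so the nested
       deployment's network will collide with it -->
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```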
Test version:
RHVH-4.3-20190620.7-RHVH-x86_64-dvd1.iso
cockpit-system-195-1.el7.noarch
cockpit-195-1.el7.x86_64
cockpit-bridge-195-1.el7.x86_64
cockpit-ws-195-1.el7.x86_64
cockpit-machines-ovirt-195-1.el7.noarch
cockpit-dashboard-195-1.el7.x86_64
cockpit-storaged-195-1.el7.noarch
cockpit-ovirt-dashboard-0.13.2-2.el7ev.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
ovirt-ansible-hosted-engine-setup-1.0.21-1.el7ev.noarch
rhvm-appliance-4.3-20190620.0.el7.rpm

Test steps:
According to comment 11.

Result:
The error message "internal error: Network is already in use by interface ens3" still appears (result picture and logs have been attached). The bug can be reproduced; changing status to "ASSIGNED".
Created attachment 1585137 [details] picture
Created attachment 1585138 [details] log files
Tested with RHVH-4.2-20190618.1-RHVH-x86_64-dvd1.iso and rhvm-appliance-4.3-20190710.2.el7.rpm; the bug can still be reproduced. Moving to "ASSIGNED".
Moving to 4.3.6: this is definitely not a blocker, hitting just a specific corner case.
I will verify it ASAP
Tested RHVH-4.3-20190801.2-RHVH-x86_64-dvd1.iso; the error message "internal error: Network is already in use by interface ens3" no longer appears. The bug is fixed; changing bug status to "VERIFIED".
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:3027