Created attachment 1698439 [details]
hosted_engine_setup

Description of problem:
Deploying HE fails when the provided MAC address contains uppercase characters (ex. 00:1A:4A:16:23:01).
It works as expected with lowercase (ex. 00:1a:4a:16:23:01).

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-setup-2.4.4-1.el8ev

How reproducible:
100%

Steps to Reproduce:
1. Run the HE installation and enter a MAC address with uppercase characters (ex. 00:1A:4A:16:23:01).

Actual results:
Deployment fails in the 'Mask cloud-init services to speed up future boot' task. The HE VM is up (without the expected MAC) and the cloud-init logs show:

2020-06-22 11:28:43,579 - util.py[DEBUG]: failed stage init-local
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 653, in status_wrapper
    ret = functor(name, args)
  File "/usr/lib/python3.6/site-packages/cloudinit/cmd/main.py", line 362, in main_init
    init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  File "/usr/lib/python3.6/site-packages/cloudinit/stages.py", line 702, in apply_network_config
    net.wait_for_physdevs(netcfg)
  File "/usr/lib/python3.6/site-packages/cloudinit/net/__init__.py", line 513, in wait_for_physdevs
    raise RuntimeError(msg)
RuntimeError: Not all expected physical devices present: {'00:1A:4A:16:23:01'}

Expected results:
The MAC address can be provided with either lowercase or uppercase characters.

Additional info:
The issue can be handled by converting the Ansible variable to lowercase, but that is only a workaround - the real issue is probably in cloud-init. Logs attached.
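For reference, the traceback points at a case-sensitive set comparison: wait_for_physdevs() checks the MACs listed in the network config against the MACs the kernel reports, and sysfs always reports MACs in lowercase, so an uppercase address from the config can never match. A minimal sketch of the mismatch in Python (names simplified; this is an illustration, not the actual cloud-init source):

# Sketch of the case-sensitive comparison behind the traceback above;
# this is not the real cloud-init code, just the shape of the check.
def wait_for_physdevs(expected_macs, present_macs):
    """Fail if any MAC expected by the network config is not reported."""
    missing = set(expected_macs) - set(present_macs)
    if missing:
        raise RuntimeError(
            'Not all expected physical devices present: %s' % missing)

# sysfs (and therefore cloud-init's view of the host) reports MACs in
# lowercase, while the expected set comes verbatim from the config:
present = {'00:1a:4a:16:23:01'}

wait_for_physdevs({'00:1a:4a:16:23:01'}, present)      # passes
try:
    wait_for_physdevs({'00:1A:4A:16:23:01'}, present)  # same NIC, fails
except RuntimeError as err:
    print(err)

# Lowercasing both sides before comparing makes the check case-insensitive:
wait_for_physdevs({m.lower() for m in {'00:1A:4A:16:23:01'}},
                  {m.lower() for m in present})        # passes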
Created attachment 1698440 [details]
hosted_engine_setup_cloud_init
Created attachment 1698441 [details]
hosted_engine_setup_output
Wasn't this happening on 4.3.10 as well?
I'm not able to reproduce it; it works for me when using uppercase characters in the MAC address.

ovirt-ansible-hosted-engine-setup-1.1.6-1.el8.noarch
ovirt-hosted-engine-setup-2.4.4-1.el8.noarch
cloud-init-18.5-12.el8_2.2.noarch (from the HE VM)

Closing as WORKSFORME.
Reopening, since we encountered this issue again with:

cloud-init-19.4-1.el8.7.noarch (from the HE VM)
ovirt-ansible-hosted-engine-setup-1.1.7-1.el8ev.noarch

There is an open bug against cloud-init: https://bugs.launchpad.net/cloud-init/+bug/1876941
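Until that is fixed, the workaround from comment 0 is to lowercase the MAC before it reaches cloud-init. A minimal sketch of such setup-side normalization in Python (the function name and the extra validity checks are illustrative, not the actual ovirt-hosted-engine-setup code):

import re

# Colon-separated MAC, six octets, either case.
MAC_RE = re.compile(r'^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$')

def normalize_unicast_mac(mac):
    """Validate a MAC address and return it lowercased.

    Lowercasing before handing the address to cloud-init sidesteps the
    case-sensitive comparison described above. Rejects malformed input
    and multicast addresses (lowest bit of the first octet set).
    """
    if not MAC_RE.match(mac):
        raise ValueError('malformed MAC address: %r' % mac)
    if int(mac[:2], 16) & 1:
        raise ValueError('multicast MAC not allowed: %r' % mac)
    return mac.lower()

print(normalize_unicast_mac('00:1A:4A:16:23:01'))  # -> 00:1a:4a:16:23:01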
Works fine on rhvm-4.4.2.6-0.2.el8ev.noarch.

Running the deployment with 00:16:3E:7B:B8:53 (uppercase):

You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:41:c1:1d]: 00:16:3E:7B:B8:53
How should the engine VM network be configured (DHCP, Static)[DHCP]?

nsednev-he-1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.167  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::216:3eff:fe7b:b853  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:7b:b8:53  txqueuelen 1000  (Ethernet)
        RX packets 3066  bytes 3490940 (3.3 MiB)
        RX errors 0  dropped 394  overruns 0  frame 0
        TX packets 2102  bytes 395461 (386.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

less /var/log/cloud-init.log
2020-10-27 09:59:51,786 - stages.py[INFO]: Applying network configuration from ds bringup=False: {'version': 1, 'config': [{'type': 'physical', 'name': 'eth0', 'mac_address': '00:16:3e:7b:b8:53', 'subnets': [{'type': 'dhcp'}]}]}

nsednev-he-1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=dhcp
DEVICE=eth0
HWADDR=00:16:3e:7b:b8:53
ONBOOT=yes
STARTMODE=auto
TYPE=Ethernet
USERCTL=no

Tested on:
rhvm-4.4.2.6-0.2.el8ev.noarch
ovirt-ansible-collection-1.2.0-0.3.el8ev.noarch
ovirt-hosted-engine-setup-2.4.7-3.el8ev.noarch
ovirt-hosted-engine-ha-2.4.5-1.el8ev.noarch
Linux 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Fri Oct 16 13:36:46 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 8.3 (Ootpa)

Deployment worked just fine over NFS using capital letters for the MAC via the otopi CLI deployment procedure.
This bugzilla is included in oVirt 4.4.3 release, published on November 10th 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.