Description of problem:

The HostedEngine deployment process fails on a host with an EPYC processor with the following error:

  the CPU is incompatible with host CPU: Host CPU does not provide required features: virt-ssbd

The HostedEngineLocal VM runs fine, but uses a different CPU definition:

HostedEngineLocal: -cpu EPYC-IBPB,x2apic=on,tsc-deadline=on,hypervisor=on,tsc_adjust=on,clwb=on,umip=on,cmp_legacy=on,monitor=off,svm=off,+kvmclock
HostedEngine:      -cpu EPYC,ibpb=on

Version-Release number of selected component (if applicable):

Host is RHEL 7.7:
kernel-3.10.0-1062.9.1.el7.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.4.x86_64
libvirt-4.5.0-23.el7_7.3.x86_64
vdsm-4.30.33-1.el7ev.x86_64
rhvm-appliance-4.3-20191010.0.el7.x86_64

How reproducible:

Always on the customer's hardware.

Steps to Reproduce:
1. Hardware description:
   System: GIGABYTE R282-Z90-00
   Motherboard: GIGABYTE MZ92-FS0-00
   BIOS revision 5.14 (UEFI)
   Processor: AMD EPYC 7742 64-Core Processor
2. Deploy HostedEngine.
3. During the deployment, select the storage (FC).

Actual results:

The playbook fails when bringing up the VM on the storage domain.
~~~
[ INFO ] TASK [ovirt.hosted_engine_setup : Start ovirt-ha-broker service on the host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Initialize lockspace volume]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Start ovirt-ha-agent service on the host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Exit HE maintenance mode]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED!
=> {"attempts": 120, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.153569", "end": "2019-12-10 14:37:02.333353", "rc": 0, "start": "2019-12-10 14:37:02.179784", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2796 (Tue Dec 10 14:36:57 2019)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=2796 (Tue Dec 10 14:36:57 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Thu Jan 1 01:48:06 1970\\n\", \"hostname\": \"jsfos01.jusuf\", \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\": \"5548e669\", \"local_conf_timestamp\": 2796, \"host-ts\": 2796}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=2796 (Tue Dec 10 14:36:57 2019)\\nhost-id=1\\nscore=0\\nvm_conf_refresh_time=2796 (Tue Dec 10 14:36:57 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineUnexpectedlyDown\\nstopped=False\\ntimeout=Thu Jan 1 01:48:06 1970\\n\", \"hostname\": \"jsfos01.jusuf\", \"host-id\": 1, \"engine-status\": {\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"down_unexpected\", \"detail\": \"Down\"}, \"score\": 0, \"stopped\": false, \"maintenance\": false, \"crc32\": \"5548e669\", \"local_conf_timestamp\": 2796, \"host-ts\": 2796}, \"global_maintenance\": false}"]}
[ INFO ] TASK [ovirt.hosted_engine_setup : Check VM status at virt level]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail if engine VM is not running]
[ ERROR ] fatal: [localhost]: FAILED!
=> {"changed": false, "msg": "Engine VM is not running, please check vdsm logs"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
~~~

Expected results:

No problem.

Additional info:

$ cat sos_commands/processor/lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    1
Core(s) per socket:    64
Socket(s):             2
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            23
Model:                 49
Model name:            AMD EPYC 7742 64-Core Processor
Stepping:              0
CPU MHz:               2250.000
CPU max MHz:           2250.0000
CPU min MHz:           1500.0000
BogoMIPS:              4499.93
Virtualization:        AMD-V
L1d cache:             32K
L1i cache:             32K
L2 cache:              512K
L3 cache:              16384K
NUMA node0 CPU(s):     0-63
NUMA node1 CPU(s):     64-127
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca

vdsm.log:

2019-12-11 12:09:32,708+0100 INFO (jsonrpc/0) [api.virt] START
create(vmParams={u'xml': u'<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm"><name>HostedEngine</name><uuid>7186c24a-ed88-469c-8111-79853e428378</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory slots="16">67108864</maxMemory><vcpu current="4">64</vcpu><sysinfo type="smbios"><system><entry name="manufacturer">Red Hat</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entry><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">7186c24a-ed88-469c-8111-79853e428378</entry></system></sysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"/><timer name="pit" tickpolicy="delay"/><timer name="hpet" present="no"/></clock><features><acpi/><vmcoreinfo/></features><cpu match="exact"><model>EPYC</model><feature name="ibpb" policy="require"/><feature name="virt-ssbd" policy="require"/><topology cores="4" threads="1" sockets="16"/><numa><cell id="0" cpus="0,1,2,3" memory="16777216"/></numa></cpu><cputune/><devices><input type="mouse" bus="ps2"/><channel type="unix"><target type="virtio" name="ovirt-guest-agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/7186c24a-ed88-469c-8111-79853e428378.ovirt-guest-agent.0"/></channel><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/7186c24a-ed88-469c-8111-79853e428378.org.qemu.guest_agent.0"/></channel><graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main" mode="secure"/><channel name="inputs" mode="secure"/><channel name="cursor" mode="secure"/><channel name="playback" mode="secure"/><channel name="record" mode="secure"/><channel name="display" mode="secure"/><channel name="smartcard" mode="secure"/><channel name="usbredir" mode="secure"/><listen 
type="network" network="vdsm-ovirtmgmt"/></graphics><video><model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/><alias name="ua-3647db03-428d-4107-ab75-5abff66540d7"/></video><console type="unix"><source path="/var/run/ovirt-vmconsole-console/7186c24a-ed88-469c-8111-79853e428378.sock" mode="bind"/><target type="serial" port="0"/><alias name="ua-4f0f0f09-4e11-48b5-aeba-f49895db4f8a"/></console><memballoon model="virtio"><stats period="5"/><alias name="ua-66ca032f-b6b7-4f77-8dd1-d83474e6a417"/></memballoon><sound model="ich6"><alias name="ua-80d95fce-a0a1-4182-8c28-167abaacd78a"/></sound><controller type="usb" model="piix3-uhci" index="0"/><controller type="scsi" model="virtio-scsi" index="0"><driver iothread="1"/><alias name="ua-91ed56a4-ef19-4a04-a3e1-01ac68b18ae5"/></controller><graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="network" network="vdsm-ovirtmgmt"/></graphics><rng model="virtio"><backend model="random">/dev/urandom</backend><alias name="ua-bcb1e545-9329-4346-b23d-5cf415447519"/></rng><controller type="virtio-serial" index="0" ports="16"><alias name="ua-e6747506-6c14-4afa-b7fb-a46b8acfb612"/></controller><serial type="unix"><source path="/var/run/ovirt-vmconsole-console/7186c24a-ed88-469c-8111-79853e428378.sock" mode="bind"/><target port="0"/></serial><channel type="spicevmc"><target type="virtio" name="com.redhat.spice.0"/></channel><interface type="bridge"><model type="virtio"/><link state="up"/><source bridge="ovirtmgmt"/><driver queues="4" name="vhost"/><alias name="ua-2ec46a19-3531-40e8-a523-a0fe6f25eb24"/><mac address="00:16:3e:12:fc:2a"/><mtu size="1500"/><filterref filter="vdsm-no-mac-spoofing"/><bandwidth/></interface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"/><source file="" startupPolicy="optional"><seclabel model="dac" type="none" relabel="no"/></source><target dev="hdc" 
bus="ide"/><readonly/><alias name="ua-71167ce7-5d34-4a6a-a6da-4ad42c8a0ce2"/></disk><disk snapshot="no" type="block" device="disk"><target dev="vda" bus="virtio"/><source dev="/rhev/data-center/mnt/blockSD/459bbdcd-1e66-4088-b431-9a0c13741e12/images/ee407bc8-d809-4b4c-90b4-7c3fe107ece5/1fb73fc4-502d-4194-bae0-74fd461927aa"><seclabel model="dac" type="none" relabel="no"/></source><driver name="qemu" iothread="1" io="native" type="raw" error_policy="stop" cache="none"/><alias name="ua-ee407bc8-d809-4b4c-90b4-7c3fe107ece5"/><serial>ee407bc8-d809-4b4c-90b4-7c3fe107ece5</serial></disk><lease><key>1fb73fc4-502d-4194-bae0-74fd461927aa</key><lockspace>459bbdcd-1e66-4088-b431-9a0c13741e12</lockspace><target offset="LEASE-OFFSET:1fb73fc4-502d-4194-bae0-74fd461927aa:459bbdcd-1e66-4088-b431-9a0c13741e12" path="LEASE-PATH:1fb73fc4-502d-4194-bae0-74fd461927aa:459bbdcd-1e66-4088-b431-9a0c13741e12"/></lease></devices><pm><suspend-to-disk enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"/><bios useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom/><ovirt-vm:device mac_address="00:16:3e:12:fc:2a"><ovirt-vm:custom/></ovirt-vm:device><ovirt-vm:device devtype="disk" name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>1fb73fc4-502d-4194-bae0-74fd461927aa</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>ee407bc8-d809-4b4c-90b4-7c3fe107ece5</ovirt-vm:imageID><ovirt-vm:domainID>459bbdcd-1e66-4088-b431-9a0c13741e12</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>'}) from=::1,59910, vmId=7186c24a-ed88-469c-8111-79853e428378 (api:48) 
2019-12-11 12:09:32,719+0100 INFO (jsonrpc/0) [api.virt] FINISH create return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'WaitForLaunch', 'maxMemSize': 65536, 'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.6.0', 'numOfIoThreads': '1', 'vmId': '7186c24a-ed88-469c-8111-79853e428378', 'memGuaranteedSize': 1024, 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'EPYC', 'guestDiskMapping': {}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory': '16384'}], u'xml': u'<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm"><name>HostedEngine</name><uuid>7186c24a-ed88-469c-8111-79853e428378</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory slots="16">67108864</maxMemory><vcpu current="4">64</vcpu><sysinfo type="smbios"><system><entry name="manufacturer">Red Hat</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entry><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">7186c24a-ed88-469c-8111-79853e428378</entry></system></sysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"/><timer name="pit" tickpolicy="delay"/><timer name="hpet" present="no"/></clock><features><acpi/><vmcoreinfo/></features><cpu match="exact"><model>EPYC</model><feature name="ibpb" policy="require"/><feature name="virt-ssbd" policy="require"/><topology cores="4" threads="1" sockets="16"/><numa><cell id="0" cpus="0,1,2,3" memory="16777216"/></numa></cpu><cputune/><devices><input type="mouse" bus="ps2"/><channel type="unix"><target type="virtio" name="ovirt-guest-agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/7186c24a-ed88-469c-8111-79853e428378.ovirt-guest-agent.0"/></channel><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source mode="bind" 
path="/var/lib/libvirt/qemu/channels/7186c24a-ed88-469c-8111-79853e428378.org.qemu.guest_agent.0"/></channel><graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main" mode="secure"/><channel name="inputs" mode="secure"/><channel name="cursor" mode="secure"/><channel name="playback" mode="secure"/><channel name="record" mode="secure"/><channel name="display" mode="secure"/><channel name="smartcard" mode="secure"/><channel name="usbredir" mode="secure"/><listen type="network" network="vdsm-ovirtmgmt"/></graphics><video><model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/><alias name="ua-3647db03-428d-4107-ab75-5abff66540d7"/></video><console type="unix"><source path="/var/run/ovirt-vmconsole-console/7186c24a-ed88-469c-8111-79853e428378.sock" mode="bind"/><target type="serial" port="0"/><alias name="ua-4f0f0f09-4e11-48b5-aeba-f49895db4f8a"/></console><memballoon model="virtio"><stats period="5"/><alias name="ua-66ca032f-b6b7-4f77-8dd1-d83474e6a417"/></memballoon><sound model="ich6"><alias name="ua-80d95fce-a0a1-4182-8c28-167abaacd78a"/></sound><controller type="usb" model="piix3-uhci" index="0"/><controller type="scsi" model="virtio-scsi" index="0"><driver iothread="1"/><alias name="ua-91ed56a4-ef19-4a04-a3e1-01ac68b18ae5"/></controller><graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="network" network="vdsm-ovirtmgmt"/></graphics><rng model="virtio"><backend model="random">/dev/urandom</backend><alias name="ua-bcb1e545-9329-4346-b23d-5cf415447519"/></rng><controller type="virtio-serial" index="0" ports="16"><alias name="ua-e6747506-6c14-4afa-b7fb-a46b8acfb612"/></controller><serial type="unix"><source path="/var/run/ovirt-vmconsole-console/7186c24a-ed88-469c-8111-79853e428378.sock" mode="bind"/><target port="0"/></serial><channel type="spicevmc"><target type="virtio" 
name="com.redhat.spice.0"/></channel><interface type="bridge"><model type="virtio"/><link state="up"/><source bridge="ovirtmgmt"/><driver queues="4" name="vhost"/><alias name="ua-2ec46a19-3531-40e8-a523-a0fe6f25eb24"/><mac address="00:16:3e:12:fc:2a"/><mtu size="1500"/><filterref filter="vdsm-no-mac-spoofing"/><bandwidth/></interface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"/><source file="" startupPolicy="optional"><seclabel model="dac" type="none" relabel="no"/></source><target dev="hdc" bus="ide"/><readonly/><alias name="ua-71167ce7-5d34-4a6a-a6da-4ad42c8a0ce2"/></disk><disk snapshot="no" type="block" device="disk"><target dev="vda" bus="virtio"/><source dev="/rhev/data-center/mnt/blockSD/459bbdcd-1e66-4088-b431-9a0c13741e12/images/ee407bc8-d809-4b4c-90b4-7c3fe107ece5/1fb73fc4-502d-4194-bae0-74fd461927aa"><seclabel model="dac" type="none" relabel="no"/></source><driver name="qemu" iothread="1" io="native" type="raw" error_policy="stop" cache="none"/><alias name="ua-ee407bc8-d809-4b4c-90b4-7c3fe107ece5"/><serial>ee407bc8-d809-4b4c-90b4-7c3fe107ece5</serial></disk><lease><key>1fb73fc4-502d-4194-bae0-74fd461927aa</key><lockspace>459bbdcd-1e66-4088-b431-9a0c13741e12</lockspace><target offset="LEASE-OFFSET:1fb73fc4-502d-4194-bae0-74fd461927aa:459bbdcd-1e66-4088-b431-9a0c13741e12" path="LEASE-PATH:1fb73fc4-502d-4194-bae0-74fd461927aa:459bbdcd-1e66-4088-b431-9a0c13741e12"/></lease></devices><pm><suspend-to-disk enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"/><bios useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom/><ovirt-vm:device mac_address="00:16:3e:12:fc:2a"><ovirt-vm:custom/></ovirt-vm:device><ovirt-vm:device devtype="disk" 
name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>1fb73fc4-502d-4194-bae0-74fd461927aa</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>ee407bc8-d809-4b4c-90b4-7c3fe107ece5</ovirt-vm:imageID><ovirt-vm:domainID>459bbdcd-1e66-4088-b431-9a0c13741e12</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>', 'smpCoresPerSocket': '4', 'kvmEnable': 'true', 'bootMenuEnable': 'false', 'devices': [], 'custom': {}, 'maxVCpus': '64', 'statusTime': '4375019810', 'vmName': 'HostedEngine', 'maxMemSlots': 16}} from=::1,59910, vmId=7186c24a-ed88-469c-8111-79853e428378 (api:54)

2019-12-11 12:09:34,477+0100 ERROR (vm/7186c24a) [virt.vm] (vmId='7186c24a-ed88-469c-8111-79853e428378') The vm start process failed (vm:933)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 867, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2880, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: virt-ssbd
2019-12-11 12:09:34,478+0100 INFO (vm/7186c24a) [virt.vm] (vmId='7186c24a-ed88-469c-8111-79853e428378') Changed state to Down: the CPU is incompatible with host CPU: Host CPU does not provide required features: virt-ssbd (code=1) (vm:1690)
2019-12-11 12:09:34,480+0100 INFO (vm/7186c24a) [virt.vm] (vmId='7186c24a-ed88-469c-8111-79853e428378') Stopping connection (guestagent:455)
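The two CPU definitions in the log differ in which SSBD variant they require: 'virt-ssbd' (SSBD exposed to the guest via a paravirtual MSR) versus 'amd-ssbd' (the native AMD CPUID bit). A quick way to see which variant libvirt can actually satisfy on a given host is to grep 'virsh domcapabilities'. The snippet below sketches that check against a canned one-line sample (an assumption standing in for real 'virsh' output, which varies per host and kernel):

```shell
# Sketch: decide which SSBD feature flag this host can satisfy.
# 'caps' is a hypothetical stand-in for the real output of:
#   virsh domcapabilities | grep ssbd
caps='<feature policy="require" name="amd-ssbd"/>'

if printf '%s\n' "$caps" | grep -q 'virt-ssbd'; then
    echo "host exposes virt-ssbd"
elif printf '%s\n' "$caps" | grep -q 'amd-ssbd'; then
    echo "host exposes amd-ssbd only"
else
    echo "no ssbd variant exposed"
fi
```

On the affected hosts in this report, only 'amd-ssbd' shows up, while the HostedEngine domain XML hard-requires 'virt-ssbd', which is exactly why libvirt refuses to start the VM.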
Possibly a duplicate of bug 1745181?
I am wondering if this RHEL 7.8 bug is going to help resolve this issue: BZ#1744281.
(In reply to Marina Kalinin from comment #6)
> I am wondering if this RHEL 7.8 bug is going to help resolve this issue:
> BZ#1744281.

It should, and it should be testable on RHEL 8 too.
Worked for me on ocelot05:

~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                48
On-line CPU(s) list:   0-47
Thread(s) per core:    2
Core(s) per socket:    24
Socket(s):             1
NUMA node(s):          4
Vendor ID:             AuthenticAMD
CPU family:            23
Model:                 1
Model name:            AMD EPYC 7451 24-Core Processor
Stepping:              2
CPU MHz:               2898.364
CPU max MHz:           2300.0000
CPU min MHz:           1200.0000
BogoMIPS:              4599.43
Virtualization:        AMD-V
L1d cache:             32K
L1i cache:             64K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0-5,24-29
NUMA node1 CPU(s):     6-11,30-35
NUMA node2 CPU(s):     12-17,36-41
NUMA node3 CPU(s):     18-23,42-47

rhvm-appliance.x86_64 2:4.4-20200403.0.el8ev
ovirt-hosted-engine-ha-2.4.2-1.el8ev.noarch
ovirt-hosted-engine-setup-2.4.4-1.el8ev.noarch

Deployed over NFS.
Adding a "me too" comment for oVirt 4.4 hosted engine, new deployment on EPYC CPUs.

The HostedEngineLocal VM runs fine during setup because it requires 'amd-ssbd' instead of 'virt-ssbd'. After copying everything to shared storage, setup tries to bring up the HostedEngine VM with a requirement for 'virt-ssbd', which is unavailable with qemu on CentOS 8.1 for EPYC CPUs according to 'virsh domcapabilities'. So HostedEngine can't start, and you get the incompatible-CPU error from the original report after the deploy finally fails out at "Waiting for VM status".

You can run 'virsh dumpxml HostedEngine | sed 's/virt-ssbd/amd-ssbd/' > /tmp/he.xml ; virsh create /tmp/he.xml' and the HostedEngine VM comes right up in a temporary fashion, allowing you to change the cluster CPU type from 'Secure AMD EPYC' to 'AMD EPYC'. After that change, restart the whole kit to get a working, but hacky, 4.4 cluster running. Probably not the fix you'd want for production.

On CentOS 7.8 with EPYC, virt-ssbd is available, and I can do a fresh oVirt 4.3 hosted engine deploy to the hosts:

centos7 ~]# virsh domcapabilities | grep ssbd
    <feature policy='require' name='virt-ssbd'/>

On CentOS 8.1 with EPYC (same servers, of course), virt-ssbd is not available/usable, but oVirt 4.4 still configures the HE VM to require it, rendering it unstartable:

centos8 ~]# virsh domcapabilities | grep ssbd
    <feature policy='require' name='amd-ssbd'/>

~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            23
Model:                 49
Model name:            AMD EPYC 7302 16-Core Processor
Stepping:              0
CPU MHz:               3287.912
BogoMIPS:              5989.27
Virtualization:        AMD-V
L1d cache:             32K
L1i cache:             32K
L2 cache:              512K
L3 cache:              16384K
NUMA node0 CPU(s):     0-15,32-47
NUMA node1 CPU(s):     16-31,48-63
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
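The dumpxml/sed workaround above can be sketched end-to-end against a trimmed sample of the domain XML (the `<cpu>` element below is a hypothetical stand-in for a full 'virsh dumpxml HostedEngine' dump); it only rewrites the feature name and leaves everything else intact:

```shell
# Sketch of the workaround: rewrite the required SSBD feature in the
# domain XML before handing it back to libvirt. On a real host you would
# start from:  virsh dumpxml HostedEngine > /tmp/he.xml
# The XML here is a trimmed, hypothetical sample.
cat > /tmp/he.xml <<'EOF'
<cpu match="exact">
  <model>EPYC</model>
  <feature name="ibpb" policy="require"/>
  <feature name="virt-ssbd" policy="require"/>
</cpu>
EOF

# Swap virt-ssbd for amd-ssbd in place; on a real host the edited file
# would then be started with:  virsh create /tmp/he.xml  (not run here)
sed -i 's/virt-ssbd/amd-ssbd/' /tmp/he.xml
grep ssbd /tmp/he.xml
```

Note this only keeps the VM up until the next restart; the lasting fix is changing the cluster CPU type (or the engine-side fix tracked in the el8 bug), as described in the surrounding comments.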
4.4 is on el8, and there the relevant bug is bug 1797092.
(In reply to Mark R. from comment #12)
> You can 'virsh dumpxml HostedEngine | sed 's/virt-ssbd/amd-ssbd/'
> >/tmp/he.xml ; virsh create /tmp/he.xml' and the HostedEngine VM comes right
> up in a temporary fashion, allowing you to change the cluster CPU type from
> 'Secure AMD EPYC' to 'AMD EPYC'. After that change, restart the whole kit to
> get a working, but hacky, 4.4 cluster running. Probably not the fix you'd
> want for production.

Great! This saved my day. Being new to all of this, could you give me a hint about what is left to do to complete the "hosted-engine --deploy" invocation that was left unfinished due to the problem?
(In reply to Michael Lipp from comment #15)
> (In reply to Mark R. from comment #12)
> > You can 'virsh dumpxml HostedEngine | sed 's/virt-ssbd/amd-ssbd/'...
>
> Great! This saved my day.

As there is still some activity around this bug report, I should mention that Mark R.'s comment put me on the right track (it made me understand the problem). The eventual solution in my case, however, was the workaround described in https://bugzilla.redhat.com/show_bug.cgi?id=1798004#c16.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3246