Bug 1103672 - hosted-engine --deploy does not check for NX flag
Summary: hosted-engine --deploy does not check for NX flag
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.5.0
Assignee: Simone Tiraboschi
QA Contact: Nikolai Sednev
URL:
Whiteboard: integration
Depends On:
Blocks: 1119145 rhev3.5beta 1156165
 
Reported: 2014-06-02 10:33 UTC by Evgheni Dereveanchin
Modified: 2019-04-28 09:32 UTC
14 users

Fixed In Version: ovirt-3.5.0-beta1.1
Doc Type: Bug Fix
Doc Text:
Previously, if the NX flag was not checked in the BIOS of certain Intel CPU types, which require NX as well as VMX to support virtualization, the deployment script for the hosted engine would not accurately detect the CPU type and the deployment would stall. Now, the user is prompted to check the NX flag in the system BIOS if the CPU type is not accurately detected, and the deployment exits gracefully.
Clone Of:
: 1119145 (view as bug list)
Environment:
Last Closed: 2015-02-11 20:39:56 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1100326 0 medium CLOSED caps._getCompatibleCpuModels() returns an empty set for Intel(R) Xeon(R) CPU X5450 @ 3.00GHz 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2015:0161 0 normal SHIPPED_LIVE ovirt-hosted-engine-setup bug fix and enhancement update 2015-12-07 21:35:11 UTC
oVirt gerrit 29739 0 master MERGED packaging: setup: adding an error message if no compatible CPUs are detected Never
oVirt gerrit 29934 0 ovirt-hosted-engine-setup-1.2 MERGED packaging: setup: adding an error message if no compatible CPUs are detected Never

Internal Links: 1100326

Description Evgheni Dereveanchin 2014-06-02 10:33:37 UTC
Description of problem:
when using a CPU with the VMX flag present but the NX flag absent from the CPU features, the deployment script's CPU check passes and the installation later stalls.

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch

How reproducible:
always

Steps to Reproduce:
1. install RHEL 6.5 on a server with VMX but without NX flag
2. install ovirt-hosted-engine-setup
3. run "hosted-engine --deploy"

Actual results:
4. CPU check succeeds
5. libvirt does not detect the CPU family and installation is stuck on CPU type selection for the cluster.

Expected results:
4. CPU check fails, installation is aborted

Additional info:
we advertise NX support in docs, but the script does not check for it and lets installation continue.
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.2/html/Hypervisor_Deployment_Guide/chap-Deployment_Guide-Requirements_and_limitations_of_Red_Hat_Enterprise_Virtualization_Hypervisors.html#idp19963536

Comment 1 Sandro Bonazzola 2014-06-03 16:17:42 UTC
Hosted engine just asks libvirt (bug #1100326) for a list of compatible CPUs, and ovirt-host-deploy for virtualization support.
So I think hardware.detect in otopi.ovirt_host_deploy should do that check and report that virtualization is not available.
A hint such as an "NX flag not enabled" exception would be useful for giving the user more accurate information.
Alon, feel free to take the bug if you have time to work on it.

Comment 2 Alon Bar-Lev 2014-06-03 21:39:10 UTC
I am not sure I understand why NX is required for virtualization. The CPU should be detected and be usable anyway; libvirt just detects it as a different CPU.

Comment 3 Sandro Bonazzola 2014-06-04 06:43:57 UTC
caps._getCompatibleCpuModels() returns an empty set if the CPU model is lower than Conroe, and without NX the CPU is not detected as >= Conroe.
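The approach that eventually landed (per the gerrit links above: "adding an error message if no compatible CPUs are detected") treats an empty compatible-model set as "virtualization not supported". A minimal sketch of that logic; the class and function names here are illustrative, not the actual ovirt-hosted-engine-setup code:

```python
# Sketch: abort deployment when the compatible-CPU-model query comes back
# empty (e.g. the CPU is not detected as >= Conroe because NX is off).
# Names are illustrative; the real setup plugin queries VDSM/libvirt.

class HardwareNotSupported(RuntimeError):
    pass

def check_compatible_cpu_models(models):
    """Raise if libvirt/VDSM reported no compatible CPU models."""
    if not models:
        raise HardwareNotSupported(
            "Hardware virtualization support is not available: "
            "please check BIOS settings and turn on NX support if available"
        )
    return models
```

With a non-empty list (as in the successful run below, Penryn and Conroe) the check is a no-op; an empty list aborts setup with an actionable message instead of stalling at CPU type selection.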

Comment 4 Alon Bar-Lev 2014-06-04 06:48:06 UTC
(In reply to Sandro Bonazzola from comment #3)
> caps._getCompatibleCpuModels() returns an empty set if cpu model is lower
> than Conroe. And without NX it's not detected as >= Conroe.

So what is the difference between this and having a different check? When you get an empty set, you state that virtualization is not supported.

Comment 5 Alon Bar-Lev 2014-06-04 06:50:29 UTC
(In reply to Alon Bar-Lev from comment #4)
> (In reply to Sandro Bonazzola from comment #3)
> > caps._getCompatibleCpuModels() returns an empty set if cpu model is lower
> > than Conroe. And without NX it's not detected as >= Conroe.
> 
> So what is the difference between this and having a different check? when
> you get empty, you state virtualization is not supported.

I mean that virtualization is supported without NX on selected models; in the hardware check at host-deploy we do not check the CPU type, nor do we interact with libvirt...

Comment 6 Sandro Bonazzola 2014-06-04 06:54:47 UTC
(In reply to Alon Bar-Lev from comment #4)
> (In reply to Sandro Bonazzola from comment #3)
> > caps._getCompatibleCpuModels() returns an empty set if cpu model is lower
> > than Conroe. And without NX it's not detected as >= Conroe.
> 
> So what is the difference between this and having a different check? when
> you get empty, you state virtualization is not supported.

I can do it like that, but I would have preferred an explicit check on the NX flag when the VMX flag is available, in order to tell the user why virtualization is not available.
But I guess that saying "Hardware virtualization support is not available: please check BIOS settings and turn on NX support if available" may be enough.

Comment 8 Nikolai Sednev 2014-12-28 16:20:06 UTC
Failed:
I've tested it on two Intel(R) Xeon(R) CPU E5420 @ 2.50GHz RHEL 7.0 hosts and on one Intel(R) Xeon(R) CPU E5-2603 v2 @ 1.80GHz RHEL 7.0 host with these components:
vdsm-4.16.8.1-4.el7ev.x86_64
qemu-kvm-rhev-1.5.3-60.el7_0.11.x86_64
libvirt-client-1.2.8-10.el7.x86_64
sanlock-3.2.2-2.el7.x86_64
mom-0.4.1-4.el7ev.noarch
Linux version 3.10.0-217.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-7) (GCC) ) #1 SMP Fri Dec 12 14:52:08 EST 2014


I disabled the NX flag on my Intel hosts running RHEL 7.0, and the hosted-engine deployment continued without detecting that virtualization was actually turned off/not supported:

[root@black-vdsb ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing              
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup            
          Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
          Are you sure you want to continue? (Yes, No)[Yes]:                                                                              
          Configuration files: []                                                                                                         
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141228163322-m5anhj.log                                
          Version: otopi-1.3.0 (otopi-1.3.0-2.el7ev)                                                                                      
          It has been detected that this program is executed through an SSH connection without using screen.                              
          Continuing with the installation may lead to broken installation if the network connection fails.                               
          It is highly recommended to abort the installation and run it inside a screen session using command "screen".                   
          Do you want to continue anyway? (Yes, No)[No]: yes                                                                              
[ INFO  ] Hardware supports virtualization                                                                                                
[ INFO  ] Stage: Environment packages setup                                                                                               
[ INFO  ] Stage: Programs detection                                                                                                       
[ INFO  ] Stage: Environment setup                                                                                                        
[ INFO  ] Waiting for VDSM hardware info                                                                                                  
[ INFO  ] Waiting for VDSM hardware info                                                                                                  
[ INFO  ] Generating libvirt-spice certificates                                                                                           
[ INFO  ] Stage: Environment customization                                                                                                
                                                                                                                                          
          --== STORAGE CONFIGURATION ==--                                                                                                 
                                                                                                                                          
          During customization use CTRL-D to abort.                                                                                       
          Please specify the storage you would like to use (iscsi, nfs3, nfs4)[nfs3]: nfs3                                                
          Please specify the full shared storage connection path to use (example: host:/path): 10.35.160.108:/RHEV/nsednev_ovirt_3_5_HE_for_disks
[ INFO  ] Installing on first host                                                                                                               
          Please provide storage domain name. [hosted_storage]:                                                                                  
          Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.Please enter local datacenter name [hosted_datacenter]:                                                                                                                                              
                                                                                                                                                           
          --== SYSTEM CONFIGURATION ==--                                                                                                                   
                                                                                                                                                           
                                                                                                                                                           
          --== NETWORK CONFIGURATION ==--                                                                                                                  
                                                                                                                                                           
          Please indicate a nic to set rhevm bridge on: (enp4s0, enp6s0) [enp4s0]: enp4s0                                                                  
          iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:                                                       
          Please indicate a pingable gateway IP address [10.35.64.254]:                                                                                    
                                                                                                                                                           
          --== VM CONFIGURATION ==--                                                                                                                       
                                                                                                                                                           
          Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]: pxe                                                                    
          Please specify an alias for the Hosted Engine image [hosted_engine]:                                                                             
          The following CPU types are supported by this host:                                                                                              
                 - model_Penryn: Intel Penryn Family                                                                                                       
                 - model_Conroe: Intel Conroe Family                                                                                                       
          Please specify the CPU type to be used by the VM [model_Penryn]:                                                                                 
          Please specify the number of virtual CPUs for the VM [Defaults to minimum requirement: 2]:                                                       
          Please specify the disk size of the VM in GB [Defaults to minimum requirement: 25]:                                                              
          You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:59:69:97]: 00:16:3E:7B:B8:53                   
          Please specify the memory size of the VM in MB [Defaults to minimum requirement: 4096]:                                                          
          Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:                                                   
                                                                                                                                                           
          --== HOSTED ENGINE CONFIGURATION ==--                                                                                                            
                                                                                                                                                           
          Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]:                                       
          Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:                                                   
          Confirm 'admin@internal' user password:                                                                                                          
          Please provide the FQDN for the engine you would like to use.                                                                                    
          This needs to match the FQDN that you will use for the engine installation within the VM.                                                        
          Note: This will be the FQDN of the VM you are now going to create,                                                                               
          it should not point to the base host or to any other existing machine.                                                                           
          Engine FQDN: nsednev-he-1.qa.lab.tlv.redhat.com                                                                                                  
          Please provide the name of the SMTP server through which we will send notifications [localhost]:                                                 
          Please provide the TCP port number of the SMTP server [25]:                                                                                      
          Please provide the email address from which notifications will be sent [root@localhost]:                                                         
          Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:                                          
[ INFO  ] Stage: Setup validation                                                                                                                          
                                                                                                                                                           
          --== CONFIGURATION PREVIEW ==--                                                                                                                  
                                                                                                                                                           
          Bridge interface                   : enp4s0                                                                                                      
          Engine FQDN                        : nsednev-he-1.qa.lab.tlv.redhat.com                                                                          
          Bridge name                        : rhevm                                                                                                       
          SSH daemon port                    : 22                                                                                                          
          Firewall manager                   : iptables                                                                                                    
          Gateway address                    : 10.35.64.254                                                                                                
          Host name for web application      : hosted_engine_1                                                                                             
          Host ID                            : 1                                                                                                           
          Image alias                        : hosted_engine                                                                                               
          Image size GB                      : 25                                                                                                          
          Storage connection                 : 10.35.160.108:/RHEV/nsednev_ovirt_3_5_HE_for_disks                                                          
          Console type                       : vnc                                                                                                         
          Memory size MB                     : 4096                                                                                                        
          MAC address                        : 00:16:3E:7B:B8:53                                                                                           
          Boot type                          : pxe                                                                                                         
          Number of CPUs                     : 2                                                                                                           
          CPU Type                           : model_Penryn

          Please confirm installation settings (Yes, No)[Yes]:
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Configuring the management bridge
[ INFO  ] Creating Storage Domain
[ INFO  ] Creating Storage Pool
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ INFO  ] Creating VM Image
[ INFO  ] Disconnecting Storage Pool
[ INFO  ] Start monitoring domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ INFO  ] Creating VM
          You can now connect to the VM with the following command:
                /bin/remote-viewer vnc://localhost:5900
          Use temporary password "0832PUaZ" to connect to vnc console.
          Please note that in order to use remote-viewer you need to be able to run graphical applications.
          This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
          Otherwise you can run the command from a terminal in your preferred desktop environment.
          If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
          virsh -c qemu+tls://Test/system console HostedEngine
          If you need to reboot the VM you will need to start it manually using the command:
          hosted-engine --vm-start
          You can then set a temporary password using the command:
          hosted-engine --add-console-password
          The VM has been started.  Install the OS and shut down or reboot it.  To continue please make a selection:

          (1) Continue setup - VM installation is complete
          (2) Reboot the VM and restart installation
          (3) Abort setup
          (4) Destroy VM and abort setup

          (1, 2, 3, 4)[1]:





For some reason NX is displayed as enabled in the OS, although it was disabled in the BIOS on all three hosts:

processor       : 3
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Xeon(R) CPU           E5420  @ 2.50GHz
stepping        : 10
microcode       : 0xa0b
cpu MHz         : 2490.000
cache size      : 6144 KB
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4
apicid          : 3
initial apicid  : 3
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dtherm tpr_shadow vnmi flexpriority
bogomips        : 4987.44
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:

Comment 9 Yedidyah Bar David 2014-12-29 07:08:11 UTC
(In reply to Nikolai Sednev from comment #8)
> Failed:
> I've tested it on two Intel(R) Xeon(R) CPU E5420  @ 2.50GHz on RedHat 7.0
> hosts and on one Intel(R) Xeon(R) CPU E5-2603 v2 @ 1.80GHz on RedHat 7.0
> host with these components:
> vdsm-4.16.8.1-4.el7ev.x86_64
> qemu-kvm-rhev-1.5.3-60.el7_0.11.x86_64
> libvirt-client-1.2.8-10.el7.x86_64
> sanlock-3.2.2-2.el7.x86_64
> mom-0.4.1-4.el7ev.noarch
> Linux version 3.10.0-217.el7.x86_64
> (mockbuild.eng.bos.redhat.com) (gcc version 4.8.3 20140911
> (Red Hat 4.8.3-7) (GCC) ) #1 SMP Fri Dec 12 14:52:08 EST 2014
> 
> 
> I've disabled NX flag on my Intel hosts and with installed RHEL7.0 on it,
> the hosted-engine deployment continued without detecting that virtualization
> actually turned-off/not supported: 

Not sure that's a problem, based on comment 5 and 6.

(In reply to Evgheni Dereveanchin from comment #0)
> Description of problem:
> when using a CPU with VMX flag present but NX flag absent in CPU features, 
> 
> Version-Release number of selected component (if applicable):
> ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch
> 
> How reproducible:
> always
> 
> Steps to Reproduce:
> 1. install RHEL 6.5 on a server with VMX but without NX flag
> 2. install ovirt-hosted-engine-setup
> 3. run "hosted-engine --deploy"
> 
> Actual results:
> 4. CPU check succeeds
> 5. libvirt does not detect the CPU family and installation is stuck on CPU
> type selection for the cluster.

Was it stuck? Doesn't seem so from your log.

> 
> Expected results:
> 4. CPU check fails, installation is aborted

I guess that's the reason you decided it failed, but as Alon mentioned in comment 5, it might not always fail like that.

> 
> Additional info:
> we advertise NX support in docs, but the script does not check for it and
> lets installation continue.
> https://access.redhat.com/site/documentation/en-US/
> Red_Hat_Enterprise_Virtualization/3.2/html/Hypervisor_Deployment_Guide/chap-
> Deployment_Guide-
> Requirements_and_limitations_of_Red_Hat_Enterprise_Virtualization_Hypervisors
> .html#idp19963536

We might actually want to do report "nx disabled" specifically, perhaps as a warning, because even if virtualization works, the user might prefer to enable nx before continuing. We might want to do that also in host-deploy, for that matter.

Comment 10 Nikolai Sednev 2014-12-29 11:11:11 UTC
(In reply to Yedidyah Bar David from comment #9)
> (In reply to Nikolai Sednev from comment #8)
> > Failed:
> > I've tested it on two Intel(R) Xeon(R) CPU E5420  @ 2.50GHz on RedHat 7.0
> > hosts and on one Intel(R) Xeon(R) CPU E5-2603 v2 @ 1.80GHz on RedHat 7.0
> > host with these components:
> > vdsm-4.16.8.1-4.el7ev.x86_64
> > qemu-kvm-rhev-1.5.3-60.el7_0.11.x86_64
> > libvirt-client-1.2.8-10.el7.x86_64
> > sanlock-3.2.2-2.el7.x86_64
> > mom-0.4.1-4.el7ev.noarch
> > Linux version 3.10.0-217.el7.x86_64
> > (mockbuild.eng.bos.redhat.com) (gcc version 4.8.3 20140911
> > (Red Hat 4.8.3-7) (GCC) ) #1 SMP Fri Dec 12 14:52:08 EST 2014
> > 
> > 
> > I've disabled NX flag on my Intel hosts and with installed RHEL7.0 on it,
> > the hosted-engine deployment continued without detecting that virtualization
> > actually turned-off/not supported: 
> 
> Not sure that's a problem, based on comment 5 and 6.
> 
> (In reply to Evgheni Dereveanchin from comment #0)
> > Description of problem:
> > when using a CPU with VMX flag present but NX flag absent in CPU features, 
> > 
> > Version-Release number of selected component (if applicable):
> > ovirt-hosted-engine-setup-1.1.2-5.el6ev.noarch
> > 
> > How reproducible:
> > always
> > 
> > Steps to Reproduce:
> > 1. install RHEL 6.5 on a server with VMX but without NX flag
> > 2. install ovirt-hosted-engine-setup
> > 3. run "hosted-engine --deploy"
> > 
> > Actual results:
> > 4. CPU check succeeds
> > 5. libvirt does not detect the CPU family and installation is stuck on CPU
> > type selection for the cluster.
> 
> Was it stuck? Doesn't seem so from your log.
> 
> > 
> > Expected results:
> > 4. CPU check fails, installation is aborted
> 
> I guess that's the reason you decided it failed, but as Alon mentioned in
> comment 5, it might not always fail like that.
> 
> > 
> > Additional info:
> > we advertise NX support in docs, but the script does not check for it and
> > lets installation continue.
> > https://access.redhat.com/site/documentation/en-US/
> > Red_Hat_Enterprise_Virtualization/3.2/html/Hypervisor_Deployment_Guide/chap-
> > Deployment_Guide-
> > Requirements_and_limitations_of_Red_Hat_Enterprise_Virtualization_Hypervisors
> > .html#idp19963536
> 
> We might actually want to do report "nx disabled" specifically, perhaps as a
> warning, because even if virtualization works, the user might prefer to
> enable nx before continuing. We might want to do that also in host-deploy,
> for that matter.


I've checked on 3 hosts; on one of them it worked as expected, PSB:
 hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
          Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
          Are you sure you want to continue? (Yes, No)[Yes]:
          Configuration files: []
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141228184700-xrvly0.log
          Version: otopi-1.3.0 (otopi-1.3.0-2.el7ev)
          It has been detected that this program is executed through an SSH connection without using screen.
          Continuing with the installation may lead to broken installation if the network connection fails.
          It is highly recommended to abort the installation and run it inside a screen session using command "screen".
          Do you want to continue anyway? (Yes, No)[No]: yes
[ ERROR ] Failed to execute stage 'Environment setup': Hardware does not support virtualization
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20141228184708.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[root@alma03 ~]# rpm -qa ovirt-hosted-engine-setup
ovirt-hosted-engine-setup-1.2.1-8.el7ev.noarch

But on the two others it didn't quit the installation as expected, although virtualization was turned off. I suspect that really old machines with virtualization support do not properly support the NX flag; in that case I think this should be taken to PM to decide whether it is FAD (functions as designed) or should be investigated more deeply and fixed.

Comment 11 Yedidyah Bar David 2014-12-29 12:39:46 UTC
(In reply to Nikolai Sednev from comment #10)
> 
> But on two others it didn't quit the installation as expected, although
> virtualization was turned-off,

NX off != virtualization off

> I suspect that really old machines with
> virtualization support not properly supporting the NX flag, in that case I
> think that this should be taken with PM to decide if FAD (functions as
> designed) or should be investigated deeper and fixed.

But did it actually work? Did you notice any problems?

As I wrote above, I am not sure about the details, but we *might* be able to detect that nx is available and turned off, and then *might* want to warn/alert the user that it might better be turned on. That's a separate issue, which affects also host-deploy.
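Detecting "NX available but turned off" is possible in principle on Intel hardware: per Intel's SDM, the IA32_MISC_ENABLE MSR (0x1A0) has an "XD Bit Disable" bit (bit 34) which, when set, hides the NX/XD capability from CPUID. A hedged sketch of such a probe; reading MSRs requires root, the msr kernel module, and an Intel CPU, so the read is guarded and this is not something host-deploy actually does:

```python
# Sketch: probe Intel's "XD Bit Disable" (IA32_MISC_ENABLE, bit 34).
# Assumption-heavy: needs root, the msr kernel module, and an Intel CPU.
import os
import struct

IA32_MISC_ENABLE = 0x1A0
XD_DISABLE_BIT = 34

def xd_disabled(msr_value):
    """True if the XD/NX feature is disabled via IA32_MISC_ENABLE."""
    return bool(msr_value >> XD_DISABLE_BIT & 1)

def read_msr(reg, cpu=0):
    """Read a 64-bit MSR via /dev/cpu/<n>/msr."""
    fd = os.open("/dev/cpu/%d/msr" % cpu, os.O_RDONLY)
    try:
        os.lseek(fd, reg, os.SEEK_SET)
        return struct.unpack("<Q", os.read(fd, 8))[0]
    finally:
        os.close(fd)

try:
    print("XD disabled:", xd_disabled(read_msr(IA32_MISC_ENABLE)))
except OSError:
    print("MSR not readable here (need root and the msr module)")
```

A check like this could distinguish "NX missing from the CPU" from "NX present but disabled in the BIOS", which is exactly the warning case discussed above.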

Comment 12 Nikolai Sednev 2014-12-29 18:23:46 UTC
(In reply to Yedidyah Bar David from comment #11)
> (In reply to Nikolai Sednev from comment #10)
> > 
> > But on two others it didn't quit the installation as expected, although
> > virtualization was turned-off,
> 
> NX off != virtualization off
> 
> > I suspect that really old machines with
> > virtualization support not properly supporting the NX flag, in that case I
> > think that this should be taken with PM to decide if FAD (functions as
> > designed) or should be investigated deeper and fixed.
> 
> But did it actually work? Did you notice any problems?
> 
> As I wrote above, I am not sure about the details, but we *might* be able to
> detect that nx is available and turned off, and then *might* want to
> warn/alert the user that it might better be turned on. That's a separate
> issue, which affects also host-deploy.
I didn't manage to complete the installation, as I aborted it at the part where the VM should be installed with the OS for the engine. The problem was that it had to be stopped, and an additional problem is that I see the nx flag on the new host, although virtualization is disabled on it for sure and deployment was interrupted.

Comment 13 Yedidyah Bar David 2014-12-30 09:07:01 UTC
(In reply to Nikolai Sednev from comment #12)
> (In reply to Yedidyah Bar David from comment #11)
> > But did it actually work? Did you notice any problems?
> > 
> > As I wrote above, I am not sure about the details, but we *might* be able to
> > detect that nx is available and turned off, and then *might* want to
> > warn/alert the user that it might better be turned on. That's a separate
> > issue, which affects also host-deploy.
> I didn't managed to complete installation, as I aborted it at the part,

I wouldn't call this "didn't manage", then...

> where VM should be installed with OS for the engine, the problem was that it
> had to be stopped

Why did it have to be stopped? If because you, as user, knew that nx is (supposed to be) disabled and decided you want to stop and enable, fine - that's what I wrote above. But I want to know what would happen if you didn't stop:
1. Disable nx flag in the bios
2. hosted-engine --deploy
3. any problem found?

> and additional problem is that I see nx flag on new host,

Where exactly?

> although virtualization disabled on it for sure

What do you mean here? I thought you said that _you_ disabled it, in the bios setup.

> and deployment interrupted.

Comment 14 Nikolai Sednev 2015-01-15 14:55:24 UTC
(In reply to Yedidyah Bar David from comment #13)
> (In reply to Nikolai Sednev from comment #12)
> > (In reply to Yedidyah Bar David from comment #11)
> > > But did it actually work? Did you notice any problems?
> > > 
> > > As I wrote above, I am not sure about the details, but we *might* be able to
> > > detect that nx is available and turned off, and then *might* want to
> > > warn/alert the user that it might better be turned on. That's a separate
> > > issue, which affects also host-deploy.
> > I didn't manage to complete the installation: I aborted it at the stage,
> 
> I wouldn't call this "didn't manage", then...
> 
> > where VM should be installed with OS for the engine, the problem was that it
> > had to be stopped
> 
> Why did it have to be stopped? If because you, as user, knew that nx is
> (supposed to be) disabled and decided you want to stop and enable, fine -
> that's what I wrote above. But I want to know what would happen if you
> didn't stop:
> 1. Disable nx flag in the bios
> 2. hosted-engine --deploy
> 3. any problem found?
> 
> > and additional problem is that I see nx flag on new host,
> 
> Where exactly?
> 
> > although virtualization disabled on it for sure
> 
> What do you mean here? I thought you said that _you_ disabled it, in the
> bios setup.
> 
> > and deployment interrupted.

During deployment the NX flag should be detected; if it is turned off, then the deployment has to be aborted.

I still see the nx flag when running "cat /proc/cpuinfo" on a host with NX disabled in the BIOS!

The deployment was interrupted only on the newer hosts; maybe they support NX properly, while the old hosts have NX disabled in the BIOS but with no effect on the hosted-engine deployment, as if the disabled nx flag is not being recognized at all.
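As a side note, one quick way to double-check what the kernel actually reports (independent of what the BIOS menu claims) is to parse the flags line of /proc/cpuinfo. This is a generic sanity check, sketched here for illustration; it is not part of hosted-engine itself:

```python
def has_nx(cpuinfo_text):
    """Return True if the kernel reports the nx flag for the first CPU.

    Reads the same data shown by "cat /proc/cpuinfo".
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            flags = line.split(':', 1)[1].split()
            return 'nx' in flags
    return False

# Example usage on a Linux host:
#   with open('/proc/cpuinfo') as f:
#       print('nx reported by kernel:', has_nx(f.read()))
```

If this prints True while the BIOS claims NX is disabled, the BIOS setting is simply not taking effect, which matches what was observed on the two older hosts.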

Comment 15 Simone Tiraboschi 2015-01-19 08:51:36 UTC
Hi,
just to try to make it a bit more clear.

The originating issue was this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1100326

With an Intel(R) Xeon(R) X5450 CPU, and similar ones, having VT enabled and NX disabled meant that no compatible CPU was found.
So hosted-engine prompted:
 The following CPU types are supported by this host:
 
 Please specify the CPU type to be used by the VM []:
allowing no valid entries.

hosted-engine doesn't really detect the CPU features itself; it simply gets that information from VDSM, which gets it from libvirt.

So the solution is just that, if libvirt doesn't find any compatible CPU, hosted-engine shows a hint for the user about checking VT/NX support instead of showing an empty list. Nothing more.
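The guard described above can be sketched roughly as follows. This is illustrative only, not the actual ovirt-hosted-engine-setup code: the function name and exception type are placeholders, and only the error text is taken from the fix itself:

```python
class HardwareError(Exception):
    """Placeholder for the error raised when the host cannot run VMs."""


def choose_cpu_type(compatible_models):
    """Abort with a hint instead of offering an empty CPU list.

    compatible_models: list of CPU model names as reported by VDSM
    (which gets them from libvirt); empty when no compatible CPU
    was detected.
    """
    if not compatible_models:
        raise HardwareError(
            'Hardware virtualization support is not available: '
            'please check BIOS settings and turn on NX support if available'
        )
    # With a non-empty list the user would be prompted to pick one
    # of these models; here we just return the list that would be
    # offered.
    return compatible_models
```

The key point is the order of checks: the empty list is detected and reported before any interactive prompt, so the deployment exits gracefully instead of stalling on a question with no valid answers.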

(In reply to Nikolai Sednev from comment #14)
> During deployment NX should be detected, if turned off, then deployment have
> to be aborted.

Not really: not all CPUs require NX support for VT support.
It should abort if no compatible CPUs are found and continue otherwise.

> I see NX flag while running "cat /proc/cpuinfo" on host with disabled NX in
> bios!

It could be a bug in the BIOS, or the BIOS could be smart enough to keep NX support on when the user requires VT support on a CPU where VT requires NX.

> Deployment interrupted on newer hosts only, might be they support NX as
> should be, while old hosts has the NX in bios as disabled, but still with no
> effect on he-deployment, like nx disabled not being recognized at all.

The proper way to verify it is finding a CPU where VT requires NX and a BIOS where you can really disable it.

With
 # vdsClient -s 0 getVdsCaps
you could check that getCompatibleCpuModels() returns an empty list.
In that situation you could verify this path.

Without the fix, hosted-engine will prompt:
 The following CPU types are supported by this host:

 Please specify the CPU type to be used by the VM []:

With the fix, hosted-engine should show:
 Hardware virtualization support is not available:
 please check BIOS settings and turn on NX support if available

Comment 17 Nikolai Sednev 2015-01-19 13:26:44 UTC
BIOS issue.
A very old BIOS appears to be installed, or NX functionality is not actually being disabled on my two old hosts, although they report it as disabled.

Comment 19 errata-xmlrpc 2015-02-11 20:39:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0161.html

