Created attachment 703888 [details]
engine.log

Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Install 2 virtual machines using the Fedora 18 netinstall ISO; select minimal install.
2. On both systems, add the oVirt repository (sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm).
3. On the first VM, install ovirt-engine.
4. On the second VM, install vdsm and vdsm-hook-faqemu, and enable fake_kvm_support in /etc/vdsm/vdsm.conf.
5. Add the host in the oVirt Engine Web Administration portal.

Actual results:
The events log shows these errors:

Host ovirt-node installation failed. Command returned failure code 1 during SSH session 'root.1.70'.
Failed to install Host ovirt-node. Failed to execute stage 'Setup validation': [Errno 5] Input/output error.
Host status is: Install Failed

Expected results:
The host is added successfully.

Additional info:
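For step 4, the faqemu setting would look roughly like this in /etc/vdsm/vdsm.conf (a sketch; the [vars] section name is an assumption about the vdsm config layout of that era):

```ini
# /etc/vdsm/vdsm.conf -- hedged sketch, not a verified config
[vars]
# Tell vdsm to fake KVM capability so a VM without real hardware
# virtualization can be registered as a host (requires vdsm-hook-faqemu).
fake_kvm_support = true
```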
Created attachment 703889 [details] host-deploy log - ovirt-20130228130710-10.1.1.70.log
Hi, can you please tar up all the /dev/cpu/*/msr files and attach the archive? Thanks!
Created attachment 703901 [details] cpu.tar
(In reply to comment #3)
> Created attachment 703901 [details]
> cpu.tar

Thanks, but this is not good: you archived the device node itself instead of its content.

Please be careful this time; run cat /dev/cpu/0/msr and attach the result. Thanks!
(In reply to comment #4)
> (In reply to comment #3)
> > Created attachment 703901 [details]
> > cpu.tar
>
> Thanks but this is not good, as you archived the actual device instead the
> content.
>
> You need to take caution... perform cat /dev/cpu/0/msr and attach the result.
>
> Thanks!

Hi,
I tried it, but there is no result; it shows nothing. The "cat /dev/cpu/0/msr" command just runs forever. Maybe rdmsr from msr-tools can help.
Created attachment 704423 [details] msr.py OK, let's try this script, can you please run and send me the output?
(In reply to comment #6)
> Created attachment 704423 [details]
> msr.py
>
> OK, let's try this script, can you please run and send me the output?

Hi, here is the output:

[root@ovirt-node ~]# python msr.py
vmx
Traceback (most recent call last):
  File "msr.py", line 52, in <module>
    t._vmx_enabled_by_bios()
  File "msr.py", line 29, in _vmx_enabled_by_bios
    msr = self._prdmsr(0, MSR_IA32_FEATURE_CONTROL)
  File "msr.py", line 17, in _prdmsr
    ret = struct.unpack('L', f.read(8))[0]
IOError: [Errno 5] Input/output error
svm
Traceback (most recent call last):
  File "msr.py", line 57, in <module>
    t._svm_enabled_by_bios()
  File "msr.py", line 44, in _svm_enabled_by_bios
    vm_cr = self._prdmsr(0, MSR_VM_CR)
  File "msr.py", line 17, in _prdmsr
    ret = struct.unpack('L', f.read(8))[0]
IOError: [Errno 5] Input/output error
Thanks!
Before we debug this Python script further, can you please install the msr-tools package and send the output of:

# rdmsr 0x3a
(In reply to comment #8)
> Thanks!
> Before we debug this python farther, can you please install msr-tools
> package and send the output of:
> # rdmsr 0x3a

Hi,

[root@ovirt-node ~]# rdmsr 0x3a
rdmsr:pread: Input/output error
Haim,

Do we have an "Intel Core 2 Duo P9xxx (Penryn Class Core 2)" board?
Hi Alon,

Just to make sure: as I already mentioned, both machines are virtual. With the standard QEMU CPU I was not able to add the host to oVirt (same error). Then I changed the virtual CPU via virt-manager to "Intel Core 2 Duo P9xxx (Penryn Class Core 2)" by clicking "Copy host CPU configuration", but the result is the same.

If you run your msr.py or rdmsr 0x3a on any KVM machine with the "QEMU Virtual CPU version (cpu64-rhel6)" model, you should get the same error as I do.
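As a side note, a guest can tell whether hardware virtualization is exposed at all without touching MSRs, by checking the vmx/svm flags in /proc/cpuinfo (the default QEMU virtual CPU hides them). A small illustrative helper, names are mine and not from vdsm:

```python
def has_virt_flags(cpuinfo_text):
    """Return True if any CPU in a /proc/cpuinfo dump advertises
    hardware virtualization (Intel VT-x 'vmx' or AMD-V 'svm')."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            # e.g. 'flags\t\t: fpu vme ... vmx ...'
            flags.update(line.split(':', 1)[1].split())
    return bool(flags & {'vmx', 'svm'})

# Usage on a live system:
# with open('/proc/cpuinfo') as f:
#     print(has_virt_flags(f.read()))
```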
Oh!!! I did not understand that these machines are virtual! Then this must be a bug in the qemu environment; now it makes more sense.

OK, I will just ignore this error and warn the user.

Thanks!
commit d749db45d759e243e377625679a35ee15efb78d3
Author: Alon Bar-Lev <alonbl>
Date:   Wed Mar 6 15:27:27 2013 +0200

    vdsm: hardware: do not fail if msr cannot be read

    Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=916589
    Change-Id: I96ff8f98b58f34ff2b1af4c0232150fd357ac018
    Signed-off-by: Alon Bar-Lev <alonbl>
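Reading only the commit subject, the fix's approach is to treat an unreadable MSR as "unknown" and warn instead of failing host deploy. A hedged sketch of that idea (not the actual vdsm code; the reader callable and log message are mine, and the bit mask follows the Intel SDM definition of IA32_FEATURE_CONTROL, where bit 2 means VMX enabled outside SMX):

```python
import logging

MSR_IA32_FEATURE_CONTROL = 0x3A
VMX_OUTSIDE_SMX = 1 << 2  # Intel SDM: VMX enabled outside SMX operation

def vmx_enabled_by_bios(read_msr):
    """Sketch of the fix: if the MSR cannot be read (e.g. inside a
    QEMU guest), log a warning and return None instead of raising,
    so setup validation can continue."""
    try:
        value = read_msr(0, MSR_IA32_FEATURE_CONTROL)
    except (IOError, OSError):
        logging.warning("cannot read msr, probably a virtual machine")
        return None  # unknown -- do not fail host deploy
    return bool(value & VMX_OUTSIDE_SMX)
```

The MSR reader is injected as a callable here only to make the behavior easy to demonstrate; real code would read /dev/cpu/0/msr directly.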
I am unsure this is worth an issue in 1.0.1, as this is nested virtualization.
Closing, as this should be in 3.3 (doing so in bulk, so it may be incorrect).