Bug 1049391 - openstack-nova-compute service fails with - libvirtError: internal error: CPU feature `avx' specified more than once
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: libvirt
Version: 20
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 1056144 (view as bug list)
Depends On:
Blocks: 1057251
 
Reported: 2014-01-07 14:18 UTC by Kashyap Chamarthy
Modified: 2014-02-28 18:32 UTC
CC List: 19 users

Fixed In Version: libvirt-1.1.3.4-1.fc20
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned To: 1057251 (view as bug list)
Environment:
Last Closed: 2014-02-10 15:27:22 UTC
Type: Bug
Embargoed:


Attachments (Terms of Use)
/var/log/nova/compute.log (70.14 KB, text/plain)
2014-01-07 14:22 UTC, Kashyap Chamarthy
/var/log/nova/api.log (32.09 KB, text/plain)
2014-01-07 14:23 UTC, Kashyap Chamarthy
nova.conf on Compute and Controller nodes (3.47 KB, text/plain)
2014-01-07 15:21 UTC, Kashyap Chamarthy
libvirt_test.py (758 bytes, text/plain)
2014-01-14 09:12 UTC, Attila Fazekas


Links
Launchpad 1267191

Description Kashyap Chamarthy 2014-01-07 14:18:43 UTC
Description of problem
----------------------

Restarting the openstack-nova-compute service fails with:

  libvirtError: internal error: CPU feature `avx' specified more than once

Version
-------

    $ rpm -q openstack-nova libvirt qemu-system-x86
    openstack-nova-2014.1-0.4.b1.fc21.noarch
    libvirt-1.1.3.2-1.fc20.x86_64
    qemu-system-x86-1.6.1-3.fc20.x86_64


Test env
--------

A two node OpenStack RDO set-up configured manually on two Fedora 20
VMs:

  - Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open
    vSwitch plugin and GRE tunneling).

  - Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


How reproducible: Consistently.


Steps to Reproduce
------------------

    $ systemctl restart openstack-nova-compute

Observe /var/log/nova/compute.log


Actual results 
--------------


$ tail -f /var/log/nova/compute.log
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 187, in doit
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 147, in proxy_call
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     rv = execute(f,*args,**kwargs)
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 76, in tworker
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     rv = meth(*args,**kwargs)
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3622, in baselineCPU
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     if ret is None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup libvirtError: internal error: CPU feature `avx' specified more than once
2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup 


Expected results
----------------

Compute service should start successfully.

Additional info
---------------

Status of openstack-nova-compute service

$ systemctl status openstack-nova-compute
openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: inactive (dead) since Tue 2014-01-07 07:00:07 EST; 13min ago
  Process: 1529 ExecStart=/usr/bin/nova-compute --logfile /var/log/nova/compute.log (code=exited, status=0/SUCCESS)
 Main PID: 1529 (code=exited, status=0/SUCCESS)

Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib/pyth...in doit
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     result = proxy_ca...kwargs)
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib/pyth...xy_call
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     rv = execute(f,*a...kwargs)
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib/pyth...tworker
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     rv = meth(*args,**kwargs)
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup   File "/usr/lib64/py...lineCPU
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup     if ret is None: r...n=self)
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup libvirtError: interna...an once
Jan 07 07:00:07 node2-compute nova-compute[1529]: 2014-01-07 07:00:07.200 1529 TRACE nova.openstack.common.threadgroup

Comment 1 Kashyap Chamarthy 2014-01-07 14:21:17 UTC
More contextual trace from compute.log:

[. . .]
2012-12-10 22:12:38.789 1429 TRACE nova.virt.libvirt.driver 
2012-12-10 22:12:39.319 1429 ERROR nova.openstack.common.threadgroup [-] internal error: CPU feature `avx' specified more than once
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 117, in wait
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     x.wait()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 49, in wait
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 168, in wait
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 116, in wait
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 187, in switch
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 194, in main
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 448, in run_service
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     service.start()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     self.manager.pre_start_hook()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 822, in pre_start_hook
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     self.update_available_resource(nova.context.get_admin_context())
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4971, in update_available_resource
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     nodenames = set(self.driver.get_available_nodes())
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/driver.py", line 980, in get_available_nodes
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     stats = self.get_host_stats(refresh=refresh)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4569, in get_host_stats
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     return self.host_state.get_host_stats(refresh=refresh)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 429, in host_state
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     self._host_state = HostState(self)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4960, in __init__
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     self.update_status()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4999, in update_status
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     self.driver.get_instance_capabilities()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3702, in get_instance_capabilities
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     caps = self.get_host_capabilities()
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2742, in get_host_capabilities
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 187, in doit
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     result = proxy_call(self._autowrap, f, *args, **kwargs)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 147, in proxy_call
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     rv = execute(f,*args,**kwargs)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 76, in tworker
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     rv = meth(*args,**kwargs)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3622, in baselineCPU
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup     if ret is None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup libvirtError: internal error: CPU feature `avx' specified more than once
2012-12-10 22:12:39.319 1429 TRACE nova.openstack.common.threadgroup 
2012-12-10 22:12:58.904 1467 WARNING nova.virt.libvirt.driver [req-88723bba-e677-4956-9b70-1f7fe66dbff7 None None] Cannot update service status on host: node2-compute,due to an unexpected exception.
[. . .]

Comment 2 Kashyap Chamarthy 2014-01-07 14:22:24 UTC
Created attachment 846696 [details]
/var/log/nova/compute.log

Comment 3 Kashyap Chamarthy 2014-01-07 14:23:29 UTC
Created attachment 846697 [details]
/var/log/nova/api.log

Comment 4 Kashyap Chamarthy 2014-01-07 15:21:43 UTC
Created attachment 846731 [details]
nova.conf on Compute and Controller nodes

Comment 5 Kashyap Chamarthy 2014-01-07 15:26:18 UTC
Additional info
---------------
A similar bug fix, but for LXC, pointed out by Pádraig:

https://review.openstack.org/#/c/61310/ -- Change I5e4cb4a8: lxc: Fix a bug of baselineCPU parse failure

Comment 6 Kashyap Chamarthy 2014-01-09 13:08:21 UTC
Relevant libvirt debug log, obtained as follows:

---------
After enabling the settings below in /etc/libvirt/libvirtd.conf:

  log_level=1
  log_outputs="1:file:/var/tmp/libvirtd.log"


Restart libvirtd and the compute service:

  $ systemctl restart libvirtd 
  $ systemctl restart openstack-nova-compute
---------


$ tail /var/tmp/libvirtd.log
---------
[. . .]
2014-01-09 12:52:09.213+0000: 8311: debug : virNetServerProgramDispatch:285 : prog=536903814 ver=1 type=0 status=0 serial=40 proc=162
2014-01-09 12:52:09.214+0000: 8311: debug : virObjectRef:293 : OBJECT_REF: obj=0x7ff668195570
2014-01-09 12:52:09.214+0000: 8311: debug : virObjectRef:293 : OBJECT_REF: obj=0x7ff668195570
2014-01-09 12:52:09.214+0000: 8311: debug : remoteDispatchConnectBaselineCPUHelper:126 : server=0x7ff68921b410 client=0x7ff689237d90 msg=0x7ff689236ca0 rerr=0x7ff678f25c80 args=0x7ff6681f0740 ret=0x7ff6681f1610
2014-01-09 12:52:09.214+0000: 8310: debug : virEventPollCalculateTimeout:340 : Got a timeout scheduled for 1389271934213
2014-01-09 12:52:09.214+0000: 8310: debug : virEventPollCalculateTimeout:353 : Schedule timeout then=1389271934213 now=1389271929214
2014-01-09 12:52:09.214+0000: 8310: debug : virEventPollCalculateTimeout:362 : Timeout at 1389271934213 due in 4999 ms
2014-01-09 12:52:09.214+0000: 8310: debug : virEventPollRunOnce:630 : EVENT_POLL_RUN: nhandles=10 timeout=4999
2014-01-09 12:52:09.214+0000: 8311: debug : virConnectBaselineCPU:18629 : conn=0x7ff658000c50, xmlCPUs=0x7ff6681f1570, ncpus=1, flags=1
2014-01-09 12:52:09.214+0000: 8311: debug : virConnectBaselineCPU:18632 : xmlCPUs[0]=<cpu>
  <arch>x86_64</arch>
  <model>Westmere</model>
  <vendor>Intel</vendor>
  <topology sockets="20" cores="1" threads="1"/>
  <feature name="rdtscp"/>
  <feature name="pdpe1gb"/>
  <feature name="hypervisor"/>
  <feature name="x2apic"/>
  <feature name="pcid"/>
  <feature name="vmx"/>
  <feature name="pclmuldq"/>
  <feature name="ss"/>
  <feature name="vme"/>
</cpu>

2014-01-09 12:52:09.214+0000: 8311: debug : virObjectRef:293 : OBJECT_REF: obj=0x7ff689226b00
2014-01-09 12:52:09.214+0000: 8311: debug : virAccessManagerCheckConnect:215 : manager=0x7ff689226b00(name=stack) driver=QEMU perm=1
2014-01-09 12:52:09.214+0000: 8311: debug : virAccessManagerCheckConnect:215 : manager=0x7ff68921aec0(name=none) driver=QEMU perm=1
2014-01-09 12:52:09.214+0000: 8311: debug : virObjectUnref:256 : OBJECT_UNREF: obj=0x7ff689226b00
2014-01-09 12:52:09.214+0000: 8311: debug : cpuBaselineXML:291 : ncpus=1, nmodels=0
2014-01-09 12:52:09.214+0000: 8311: debug : cpuBaselineXML:294 : xmlCPUs[0]=<cpu>
  <arch>x86_64</arch>
  <model>Westmere</model>
  <vendor>Intel</vendor>
  <topology sockets="20" cores="1" threads="1"/>
  <feature name="rdtscp"/>
  <feature name="pdpe1gb"/>
  <feature name="hypervisor"/>
  <feature name="x2apic"/>
  <feature name="pcid"/>
  <feature name="vmx"/>
  <feature name="pclmuldq"/>
  <feature name="ss"/>
  <feature name="vme"/>
</cpu>

2014-01-09 12:52:09.215+0000: 8311: debug : cpuBaseline:362 : ncpus=1, nmodels=0
2014-01-09 12:52:09.215+0000: 8311: debug : cpuBaseline:365 : cpus[0]=0x7ff6681930b0
2014-01-09 12:52:09.224+0000: 8311: debug : x86Decode:1399 : CPU vendor AMD of model Opteron_G5 differs from Intel; ignoring
2014-01-09 12:52:09.224+0000: 8311: debug : x86Decode:1399 : CPU vendor AMD of model Opteron_G4 differs from Intel; ignoring
2014-01-09 12:52:09.224+0000: 8311: debug : x86Decode:1399 : CPU vendor AMD of model Opteron_G3 differs from Intel; ignoring
2014-01-09 12:52:09.224+0000: 8311: debug : x86Decode:1399 : CPU vendor AMD of model Opteron_G2 differs from Intel; ignoring
2014-01-09 12:52:09.224+0000: 8311: debug : x86Decode:1399 : CPU vendor AMD of model Opteron_G1 differs from Intel; ignoring
2014-01-09 12:52:09.224+0000: 8311: debug : x86Decode:1399 : CPU vendor AMD of model phenom differs from Intel; ignoring
2014-01-09 12:52:09.224+0000: 8311: debug : x86Decode:1399 : CPU vendor AMD of model athlon differs from Intel; ignoring
2014-01-09 12:52:09.225+0000: 8311: error : virCPUDefUpdateFeatureInternal:679 : internal error: CPU feature `avx' specified more than once
2014-01-09 12:52:09.225+0000: 8311: debug : virObjectUnref:256 : OBJECT_UNREF: obj=0x7ff668195570
2014-01-09 12:52:09.225+0000: 8311: debug : virNetServerProgramSendError:151 : prog=536903814 ver=1 proc=162 type=1 serial=40 msg=0x7ff689236ca0 rerr=0x7ff678f25c80
2014-01-09 12:52:09.225+0000: 8311: debug : virNetMessageEncodePayload:373 : Encode length as 208
2014-01-09 12:52:09.225+0000: 8311: debug : virNetServerClientSendMessageLocked:1451 : msg=0x7ff689236ca0 proc=162 len=208 offset=0
2014-01-09 12:52:09.225+0000: 8311: debug : virNetServerClientSendMessageLocked:1459 : RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x7ff689237d90 len=208 prog=536903814 vers=1 proc=162 type=1 status=1 serial=40
[. . .]
---------


Complete libvirt debug log: 

  http://kashyapc.fedorapeople.org/temp/libvirtd.log_bz_1049391.txt

Comment 7 Lars Kellogg-Stedman 2014-01-09 14:35:02 UTC
I have a more recent version of openstack-nova-compute:

# rpm -q openstack-nova-compute libvirt qemu-system-x86
openstack-nova-compute-2014.1-0.5.b1.fc21.noarch
libvirt-1.1.3.2-1.fc20.x86_64
qemu-system-x86-1.6.1-3.fc20.x86_64

On my Fedora 20 system I am able to successfully boot a Cirros instance.

Comment 8 Lars Kellogg-Stedman 2014-01-09 14:39:17 UTC
Kashyap notes on IRC that he is running with nested KVM while I am not.

Comment 9 Attila Fazekas 2014-01-14 09:09:24 UTC
This is the latest upstream nova on a non-nested KVM host:
http://www.fpaste.org/68186/89687471/
In nova.conf I have libvirt_type = qemu; on a nested system it would be kvm.

On my setup it complains about a different flag:
internal error: CPU feature `rdtscp' specified more than once

After applying the following patch:
diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
index 73d47a2..b3736d1 100644
--- a/nova/virt/libvirt/driver.py
+++ b/nova/virt/libvirt/driver.py
@@ -2772,6 +2772,9 @@ class LibvirtDriver(driver.ComputeDriver):
                 except libvirt.VIR_ERR_NO_SUPPORT:
                     # Note(yjiang5): ignore if libvirt has no support
                     pass
+                except libvirt.libvirtError:
+                    LOG.exception('args in use %s' % str(([self._caps.host.cpu.to_xml()],
+                                  libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)))
         return self._caps
 
     def get_host_uuid(self):

I can gather this from the log file:

args in use (['<cpu>\n  <arch>x86_64</arch>\n  <model>Westmere</model>\n  <vendor>Intel</vendor>\n  <topology sockets="4" cores="1" threads="1"/>\n  <feature name="hypervisor"/>\n  <feature name="avx"/>\n  <feature name="osxsave"/>\n  <feature name="xsave"/>\n  <feature name="tsc-deadline"/>\n  <feature name="x2apic"/>\n  <feature name="pcid"/>\n  <feature name="pclmuldq"/>\n  <feature name="ss"/>\n  <feature name="vme"/>\n</cpu>\n'], 1)

Comment 10 Attila Fazekas 2014-01-14 09:12:52 UTC
Created attachment 849816 [details]
libvirt_test.py

With the attached file:
$ python libvirt_test.py
VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES = 1
libvirt: CPU Driver error : internal error: CPU feature `rdtscp' specified more than once
Traceback (most recent call last):
  File "libvirt_test.py", line 29, in <module>
    ret = conn.baselineCPU([cpu], libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3622, in baselineCPU
    if ret is None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
libvirt.libvirtError: internal error: CPU feature `rdtscp' specified more than once
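
Since the attachment is not inlined above, here is a minimal sketch along the same lines as libvirt_test.py (Python 2 era; the qemu:///system URI and the capabilities-XML extraction are assumptions, not necessarily the attachment's exact contents):

import xml.etree.ElementTree as ET
import libvirt

# Connect to the local hypervisor; qemu:///system is assumed here.
conn = libvirt.open('qemu:///system')

# Extract the host <cpu> element from the capabilities XML -- the same
# definition nova feeds back into virConnectBaselineCPU.
caps = ET.fromstring(conn.getCapabilities())
cpu = ET.tostring(caps.find('./host/cpu'))

print('VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES = %d' %
      libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)

# Baselining without flags apparently does not hit the bug; on affected
# libvirt versions the feature-expansion path raises:
#   libvirtError: internal error: CPU feature `...' specified more than once
ret = conn.baselineCPU([cpu], libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
print(ret)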

Is this a libvirt or a nova issue?

Comment 11 Jiri Denemark 2014-01-16 08:39:23 UTC
OK, this is apparently a bug in the libvirtd code that handles VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES.

Comment 12 Kashyap Chamarthy 2014-01-27 04:32:12 UTC
*** Bug 1056144 has been marked as a duplicate of this bug. ***

Comment 13 Jiri Denemark 2014-01-27 23:25:03 UTC
Patches sent upstream for review: https://www.redhat.com/archives/libvir-list/2014-January/msg01314.html

Comment 14 Kashyap Chamarthy 2014-01-28 11:17:20 UTC
Thanks, Jiri.

As a quick initial test with these patches applied, I ran the reproducer
script from comment 10[1] with a scratch build[2]:

  $ python libvirt-test.py
  VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES = 1

Seems to work fine. I still have to test with the OpenStack set-up.


  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1049391#c10
  [2] http://koji.fedoraproject.org/koji/taskinfo?taskID=6462665

Comment 15 Kashyap Chamarthy 2014-01-28 12:43:54 UTC
With the patches from Comment #13 applied, in my (minimal) testing the openstack-nova-compute service restarts and comes up just fine:

1. Stop all OpenStack services (on both Controller and Compute nodes)
  $ openstack-service stop   

2. Update libvirt RPMs with fixes from Comment #13

3. Restart Libvirt
$ systemctl restart libvirtd

4. Restart OpenStack Nova compute and check its status
$ systemctl restart openstack-nova-compute
$ systemctl status openstack-nova-compute -l
openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: active (running) since Tue 2014-01-28 06:50:10 EST; 2s ago
 Main PID: 2482 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─2482 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log

Jan 28 06:50:10 node2-compute systemd[1]: Started OpenStack Nova Compute Server.

Comment 16 Kyle Mestery 2014-01-28 18:00:53 UTC
So, I've tried the updated libvirt RPMs on x86_64, and on my setup I still get the same failure with nova-compute. I installed the RPMs with the commands below; the resulting versions are listed after that:


   $ wget http://people.redhat.com/mikeb/scripts/download-scratch.py
   $ chmod +x download-scratch.py
   $ ./download-scratch.py 6462665
   $ yum localupdate *.rpm


[kmestery@fedora-mac devstack]$ rpm -qa|grep libvirt
libvirt-daemon-1.1.3.3-2.fc20.x86_64
libvirt-daemon-qemu-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-secret-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-nodedev-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-uml-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-nwfilter-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-xen-1.1.3.3-2.fc20.x86_64
libvirt-client-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-storage-1.1.3.3-2.fc20.x86_64
libvirt-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-qemu-1.1.3.3-2.fc20.x86_64
libvirt-daemon-config-nwfilter-1.1.3.3-2.fc20.x86_64
libvirt-python-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-interface-1.1.3.3-2.fc20.x86_64
libvirt-daemon-config-network-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-lxc-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-vbox-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-libxl-1.1.3.3-2.fc20.x86_64
libvirt-daemon-driver-network-1.1.3.3-2.fc20.x86_64
libvirt-daemon-kvm-1.1.3.3-2.fc20.x86_64
[kmestery@fedora-mac devstack]$

Comment 17 Kyle Mestery 2014-01-28 18:17:01 UTC
Ignore comment 16. The RPMs failed to install due to a missing libvirt-python package. @kashyap confirmed this was split into its own repository. To work around this, I removed that package from my Fedora 20 host first, then used pip to install it. With that done, nova-compute now comes up on Fedora 20.

Thanks to @kashyap for providing these RPMs!
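
For reference, the workaround amounts to something like the following (exact commands are assumed from the description above, not quoted from it):

  $ yum remove libvirt-python
  $ pip install libvirt-python

That is, drop the distro package the scratch RPMs could not satisfy and pull the matching binding from PyPI instead.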

Comment 18 Kashyap Chamarthy 2014-01-28 18:53:54 UTC
Thanks for confirming, Kyle.

That's intentional: libvirt-python now lives in a separate git repository[1], is also on PyPI[2], and has its own releases[3].

Background:

  https://www.redhat.com/archives/libvir-list/2013-August/msg01525.html
  -- RFC: Splitting python binding out into a separate repo & adding to PyPi


[1] http://libvirt.org/git/?p=libvirt-python.git
[2] https://pypi.python.org/pypi/libvirt-python
[3] http://www.redhat.com/archives/libvir-list/2014-January/msg00715.html

Comment 19 Kashyap Chamarthy 2014-01-28 19:23:40 UTC
For later reference: to download Koji RPMs via the CLI (instead of arbitrary scripts), you can use Koji's built-in utility to do it more elegantly:


  $ yum install koji -y

  $ koji latest-build rawhide libvirt-python | awk '{print $1}'
  Build
  ----------------------------------------
  libvirt-python-1.2.1-1.fc21

  $ koji download-build --arch=x86_64 libvirt-python-1.2.1-1.fc21

Comment 20 Fedora Update System 2014-01-30 20:10:46 UTC
libvirt-1.1.3.3-4.fc20 has been submitted as an update for Fedora 20.
https://admin.fedoraproject.org/updates/libvirt-1.1.3.3-4.fc20

Comment 21 Fedora Update System 2014-02-01 04:05:00 UTC
Package libvirt-1.1.3.3-4.fc20:
* should fix your issue,
* was pushed to the Fedora 20 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing libvirt-1.1.3.3-4.fc20'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2014-1878/libvirt-1.1.3.3-4.fc20
then log in and leave karma (feedback).

Comment 22 Fedora Update System 2014-02-01 20:41:22 UTC
libvirt-1.1.3.3-5.fc20,openwsman-2.4.3-1.fc20 has been submitted as an update for Fedora 20.
https://admin.fedoraproject.org/updates/libvirt-1.1.3.3-5.fc20,openwsman-2.4.3-1.fc20

Comment 23 Fedora Update System 2014-02-10 03:14:51 UTC
libvirt-1.1.3.3-5.fc20, openwsman-2.4.3-1.fc20 has been pushed to the Fedora 20 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 24 Fedora Update System 2014-02-19 01:00:33 UTC
libvirt-1.1.3.4-1.fc20 has been submitted as an update for Fedora 20.
https://admin.fedoraproject.org/updates/libvirt-1.1.3.4-1.fc20

Comment 25 Fedora Update System 2014-02-28 18:32:06 UTC
libvirt-1.1.3.4-1.fc20 has been pushed to the Fedora 20 stable repository.  If problems still persist, please make note of it in this bug report.

