Description of problem:
Booting an instance on the latest RDO Icehouse release fails: the instance goes from "spawning" to ERROR state because the compute node cannot access the KVM kernel module ("Permission denied").

Version-Release number of selected component (if applicable):
# rpm -qa | grep rdo
rdo-release-icehouse-3.noarch
# rpm -qa | grep openstack-nova
openstack-nova-common-2014.1-2.el7.noarch
openstack-nova-conductor-2014.1-2.el7.noarch
openstack-nova-cert-2014.1-2.el7.noarch
openstack-nova-console-2014.1-2.el7.noarch
openstack-nova-scheduler-2014.1-2.el7.noarch
openstack-nova-novncproxy-2014.1-2.el7.noarch
openstack-nova-api-2014.1-2.el7.noarch

and on the compute node:
# rpm -qa | grep openstack-nova
openstack-nova-common-2014.1-2.el7.noarch
openstack-nova-compute-2014.1-2.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Install the latest RDO release.
2. Boot an instance:
   # nova boot --flavor 2 --key-name mykey --image Fedora19 my_instance
3. The instance goes from "spawning" to ERROR state, and the following error appears in /var/log/nova/nova-scheduler.log:

2014-04-27 15:06:41.994 22055 ERROR nova.scheduler.filter_scheduler [req-7c51e59e-2023-4b15-a0ac-780c577dce89 964bd032c07d49a5b00c8c4e78747d02 19e365be73df428fafbdf8d75415f70d] [instance: f716f638-3679-4e9b-b2ae-7e83f3fec029] Error from last host: server (server):
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in _build_instance
    set_access_ip=set_access_ip)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1723, in _spawn
    LOG.exception(_('Instance failed to spawn'), instance=instance)
  File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn
    block_device_info)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2253, in spawn
    block_device_info)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3644, in _create_domain_and_network
    power_on=power_on)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3547, in _create_domain
    domain.XMLDesc(0))
  File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3542, in _create_domain
    domain.createWithFlags(launch_flags)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 179, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 139, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 77, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 728, in createWithFlags
    if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: process exited while connecting to monitor: Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied

Actual results:
The instance ends up in ERROR state; libvirt fails with "Could not access KVM kernel module: Permission denied".

Expected results:
The instance boots properly.

Additional info:
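When an instance lands in ERROR, the root cause is usually on the compute node rather than in the scheduler log that reports it. As a rough sketch (the log excerpt below is copied from this report; in practice you would grep the real file, e.g. /var/log/nova/nova-compute.log, whose path is the RDO default), you can pull out just the KVM-related lines:

```shell
# Hypothetical excerpt mirroring the error in this report; on a live node
# you would run something like:  grep -i 'kvm' /var/log/nova/nova-compute.log
LOG_SAMPLE='libvirtError: internal error: process exited while connecting to monitor: Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied'

# Keep only the lines mentioning KVM -- these point at the device-permission problem.
echo "$LOG_SAMPLE" | grep -i 'kvm'
```

Filtering on "kvm" (case-insensitive) quickly separates the actionable permission error from the surrounding scheduler retry noise.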
This is not a Nova issue. It's a permission problem: the KVM character device (/dev/kvm) is not accessible. It looks like a problem with an older kernel, and it should be fixed in newer RHEL kernels. Ensure the permissions on your /dev/kvm character device are as below (they should be set this way by default):

$ ls -l /dev/kvm
crw-rw-rw-+ 1 root kvm 10, 232 Nov 26 12:35 /dev/kvm

For future reference: for errors like these, please also provide the kernel version. Please retest with Juno and post the results here.
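The check above can be wrapped in a small script run on the compute node. This is a minimal sketch (the helper name check_kvm is mine; the "kvm" group is the RHEL default, and the exact module names depend on the CPU vendor):

```shell
# check_kvm: report whether the current user can open /dev/kvm.
# Hypothetical helper; adjust group/module names for your distro.
check_kvm() {
    if [ ! -e /dev/kvm ]; then
        echo "missing: /dev/kvm does not exist (are kvm and kvm_intel/kvm_amd loaded?)"
    elif [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
        echo "ok: current user can read/write /dev/kvm"
    else
        echo "denied: inspect 'ls -l /dev/kvm' and the user's group membership (e.g. kvm)"
    fi
}

check_kvm
```

On an affected node this prints the "denied" line for the user nova-compute/libvirt runs as; after the kernel/udev fix described above it should print "ok".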
This doesn't happen anymore with the latest Juno release. It was more than 6 months ago :)
As it's no longer reproducible per comment #2, closing the bug.