Created attachment 807761 [details]
compute log

Description of problem:
We fail to attach a volume on a GlusterFS install of packstack with the following error:

libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk495' could not be initialized

Setting 'setsebool -P virt_use_fusefs=1' works around the issue.

Version-Release number of selected component (if applicable):
openstack-packstack-2013.2.1-0.6.dev763.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Install GlusterFS using packstack
2. Create a volume
3. Attach the volume

Actual results:
The volume attach fails because of SELinux.

Expected results:
Packstack should set the boolean so that the user does not have to add it manually.

Additional info:
2013-10-04 18:57:55.286 2867 DEBUG qpid.messaging.io.raw [-] SENT[45d1320]: '\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x10\xf3' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
2013-10-04 18:06:01.363 2867 ERROR nova.openstack.common.rpc.amqp [req-904331ad-7a95-4ac6-913b-bac58ddf1e41 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Exception during message handling
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     **args)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 90, in wrapped
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     payload)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 73, in wrapped
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 243, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     pass
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 229, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 271, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 258, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3638, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     context, instance, mountpoint)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3633, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     mountpoint, instance)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3679, in _attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     connector)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3669, in _attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     encryption=encryption)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1105, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     disk_dev)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1092, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     result = proxy_call(self._autowrap, f, *args, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     rv = execute(f,*args,**kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     rv = meth(*args,**kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 419, in attachDeviceFlags
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk495' could not be initialized
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp
2013-10-04 18:06:01.366 2867 DEBUG qpid.messaging.io.raw [-] SENT[45d1320]: '\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x0b\x17' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
2013-10-04 18:06:05.060 2867 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:553
2013-10-04 18:06:05.060 2867 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 44f8f8c1dea344f9908260230b781aa0 multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:556
2013-10-04 18:06:05.060 2867 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is b8d2b02277174c80bff1edbda00cbd33. _add_unique_id /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:341
2013-10-04 18:06:05.062 2867 DEBUG qpid.messaging.io.ops [-] SENT[45ef098]: ExchangeQuery(name='nova', id=serial(0), sync=True) write_op /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
2013-10-04 18:06:05.063 2867 DEBUG qpid.messaging.io.ops [-] SENT[45ef098]: QueueQuery(queue='nova', id=serial(1), sync=True) write_op /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
2013-10-04 18:06:05.064 2867 DEBUG qpid.messaging.io.raw [-] SENT[45ef098]: '\x0f\x01\x00\x17\x00\x01\x00\x00\x00\x00\x00\x00\x07\x03\x01\x01\x01\x00\x04nova\x0f\x01\x00\x17\x00\x01\x00\x00\x00\x00\x00\x00\x08\x04\x01\x01\x01\x00\x04nova' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
The boolean should be enabled on the nova compute node.
https://bugs.launchpad.net/packstack/+bug/1235331
I guess this should be fixed in the openstack-selinux package.
Typically, policy packages like openstack-selinux don't tweak booleans themselves. We can certainly do it in %post, but I do think the right place is in a puppet module that configures glusterfs: not everyone using OpenStack will use glusterfs, but (probably) everyone will install openstack-selinux. So, for now, I can add it as a workaround, but prefer it to be done on an only-if-needed basis.
One important thing is that packstack/foreman/whatever will have to ensure openstack-selinux is installed prior to glusterfs being configured.
This is best fixed when the system is provisioned rather than by a policy module.
Lon, do you want one of the guys who worked on gluster support to look at this? If the quickstack puppet modules are the better place for this, I don't have an issue with doing this the right way; we already have settings like:

if ($::selinux != "false") {
  selboolean { 'httpd_can_network_connect':
    value      => on,
    persistent => true,
  }
}

If we leave it in the openstack-selinux package, I think it needs to be a docs step of 'If you wish to run selinux enforcing, install the openstack-selinux rpm'. We currently do not explicitly depend on that package, so adding a conditional to pull it in seems less trivial than just a puppet rule (unless it magically happens somewhere for us already).
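The analogous rule for this bug would presumably set virt_use_fusefs in the same style. A minimal sketch, mirroring the snippet above (where exactly it should live in the quickstack/gluster modules is not decided here):

if ($::selinux != "false") {
  # virt_use_fusefs lets qemu processes access FUSE filesystems such as
  # GlusterFS mounts, which is what the failed volume attach above needs.
  selboolean { 'virt_use_fusefs':
    value      => on,
    persistent => true,
  }
}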
Bruce, this looks fine to me, let me know if you need anything else
(In reply to Jason Guiditta from comment #9)
> Bruce, this looks fine to me, let me know if you need anything else

Thanks Jason, fine as-is.
PR submitted to astapor https://github.com/redhat-openstack/astapor/pull/96
Though we have a workaround, the bug isn't fixed: the GlusterFS configuration with RHOS should work as the default, without additional configuration. If packstack can set SELinux booleans for other components, it should do the same for GlusterFS.
The versions are:
openstack-packstack-2013.2.1-0.22.dev956.el6ost.noarch
python-cinderclient-1.0.7-2.el6ost.noarch
python-cinder-2013.2.1-4.el6ost.noarch
openstack-cinder-2013.2.1-4.el6ost.noarch

The SELinux configuration on the cinder node is:

# getsebool virt_use_fusefs
virt_use_fusefs --> off
This was tested against the RHOS 4.0.z puddle (2014-01-10.1) with quickstack (astapor) source.

This fix is for Foreman usage; sorry about the confusion, the title of the bug should have been changed. If any node will act as a gluster client or server, and SELinux is enabled, it will flip virt_use_fusefs to on.
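In quickstack terms that gating is roughly the following. This is an illustrative sketch only; the actual change is in the astapor pull request linked earlier, and the $is_gluster_node variable is a made-up stand-in for however the modules determine that a node acts as a gluster client or server:

if ($::selinux != "false") and ($is_gluster_node == true) {
  # Hypothetical condition: only flip the boolean on nodes that touch gluster.
  selboolean { 'virt_use_fusefs':
    value      => on,
    persistent => true,
  }
}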
Verified that the SELinux boolean is properly configured:

# /usr/sbin/getsebool virt_use_fusefs
virt_use_fusefs --> on
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2014-0046.html