Bug 1015625 - quickstack: fails to attach a volume on GlusterFS install of OpenStack because of SELinux
Summary: quickstack: fails to attach a volume on GlusterFS install of OpenStack because of SELinux
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: z1
Target Release: 4.0
Assignee: Obsolete, use brad@redhat.com instead
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-04 16:04 UTC by Dafna Ron
Modified: 2016-04-26 15:13 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
The SELinux policy boolean 'virt_use_fusefs' is set to off by default in Red Hat Enterprise Linux. As a result, GlusterFS volumes cannot be attached to instances on deployments installed with PackStack, either during or after installation. Workaround: set 'virt_use_fusefs' to on persistently using setsebool, as follows: setsebool -P virt_use_fusefs=1 With this setting, SELinux allows the GlusterFS attachment, and GlusterFS volumes can be used by OpenStack.
Clone Of:
Clones: 1052971
Environment:
Last Closed: 2014-01-23 14:21:23 UTC
Target Upstream Version:
Embargoed:


Attachments
compute log (1.86 MB, application/x-xz)
2013-10-04 16:04 UTC, Dafna Ron


Links
Launchpad 1235331: None (last updated: never)
Red Hat Product Errata RHBA-2014:0046 (SHIPPED_LIVE): Red Hat Enterprise Linux OpenStack Platform 4 Bug Fix and Enhancement Advisory (last updated: 2014-01-23 00:51:59 UTC)

Description Dafna Ron 2013-10-04 16:04:04 UTC
Created attachment 807761
compute log

Description of problem:

we fail to attach a volume on a glusterfs install deployed via packstack, with the following error:

libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk495' could not be initialized

running 'setsebool -P virt_use_fusefs=1' resolves the issue.
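
For reference, a minimal sketch of the workaround plus a verification step on the compute node (the check simply mirrors the getsebool output shown later in this report):

  # persist the boolean across reboots (-P) so qemu guests may use FUSE filesystems
  setsebool -P virt_use_fusefs=1
  # verify; expected output: virt_use_fusefs --> on
  getsebool virt_use_fusefs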


Version-Release number of selected component (if applicable):

openstack-packstack-2013.2.1-0.6.dev763.el6ost.noarch

How reproducible:

100%

Steps to Reproduce:
1. install OpenStack with a GlusterFS backend using packstack
2. create a volume
3. attach the volume to an instance


Actual results:

the volume attach fails because of SELinux

Expected results:

packstack should set the boolean so that the user does not have to set it manually


Additional info:

2013-10-04 18:57:55.286 2867 DEBUG qpid.messaging.io.raw [-] SENT[45d1320]: '\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x10\xf3' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480

2013-10-04 18:06:01.363 2867 ERROR nova.openstack.common.rpc.amqp [req-904331ad-7a95-4ac6-913b-bac58ddf1e41 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Exception during message handling
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     **args)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 90, in wrapped
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     payload)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 73, in wrapped
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 243, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     pass
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 229, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 271, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 258, in decorated_function
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3638, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     context, instance, mountpoint)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3633, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     mountpoint, instance)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3679, in _attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     connector)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3669, in _attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     encryption=encryption)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1105, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     disk_dev)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1092, in attach_volume
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     result = proxy_call(self._autowrap, f, *args, **kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     rv = execute(f,*args,**kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     rv = meth(*args,**kwargs)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 419, in attachDeviceFlags
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk495' could not be initialized
2013-10-04 18:06:01.363 2867 TRACE nova.openstack.common.rpc.amqp 
2013-10-04 18:06:01.366 2867 DEBUG qpid.messaging.io.raw [-] SENT[45d1320]: '\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x0b\x17' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
2013-10-04 18:06:05.060 2867 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:553
2013-10-04 18:06:05.060 2867 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 44f8f8c1dea344f9908260230b781aa0 multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:556
2013-10-04 18:06:05.060 2867 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is b8d2b02277174c80bff1edbda00cbd33. _add_unique_id /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:341
2013-10-04 18:06:05.062 2867 DEBUG qpid.messaging.io.ops [-] SENT[45ef098]: ExchangeQuery(name='nova', id=serial(0), sync=True) write_op /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
2013-10-04 18:06:05.063 2867 DEBUG qpid.messaging.io.ops [-] SENT[45ef098]: QueueQuery(queue='nova', id=serial(1), sync=True) write_op /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
2013-10-04 18:06:05.064 2867 DEBUG qpid.messaging.io.raw [-] SENT[45ef098]: '\x0f\x01\x00\x17\x00\x01\x00\x00\x00\x00\x00\x00\x07\x03\x01\x01\x01\x00\x04nova\x0f\x01\x00\x17\x00\x01\x00\x00\x00\x00\x00\x00\x08\x04\x01\x01\x01\x00\x04nova' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480

Comment 1 Dafna Ron 2013-10-04 16:17:04 UTC
The boolean should be enabled on the nova node.

Comment 3 Martin Magr 2013-10-29 14:24:50 UTC
I guess this should be fixed in the openstack-selinux package.

Comment 4 Lon Hohberger 2013-12-06 22:17:06 UTC
Typically, policy packages like openstack-selinux don't tweak booleans themselves.

We can certainly do it in %post, but I do think the right place is in a puppet module that configures glusterfs: not everyone using OpenStack will use glusterfs, but (probably) everyone will install openstack-selinux.

So, for now, I can add it as a workaround, but I'd prefer it to be done on an only-if-needed basis.
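
For context, the %post approach mentioned above would amount to roughly the following in the openstack-selinux spec file (a sketch of the alternative being argued against here, not anything that shipped):

  %post
  # Flip the boolean unconditionally at package install time. Downside, as
  # noted above: every openstack-selinux consumer gets it, not only
  # glusterfs deployments.
  if [ -x /usr/sbin/setsebool ]; then
      /usr/sbin/setsebool -P virt_use_fusefs=1 || :
  fi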

Comment 5 Lon Hohberger 2013-12-06 22:18:32 UTC
One important thing is that packstack/foreman/whatever will have to ensure openstack-selinux is installed prior to glusterfs being configured.

Comment 7 Lon Hohberger 2013-12-09 16:24:42 UTC
This is best fixed when the system is provisioned rather than by a policy module.

Comment 8 Jason Guiditta 2013-12-09 16:31:50 UTC
Lon, do you want one of the guys who worked on gluster support to look at this? If the quickstack puppet modules are the better place for this, I don't have an issue with doing it the right way; we already have settings like:

if ($::selinux != "false"){
  selboolean { 'httpd_can_network_connect':
    value => on,
    persistent => true,
  }
}
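
By the same pattern, the analogous rule for this bug would presumably look like the following (an illustrative sketch only, not necessarily the exact change that was later merged):

if ($::selinux != "false"){
  # allow qemu/libvirt guests to access FUSE filesystems (e.g. GlusterFS mounts)
  selboolean { 'virt_use_fusefs':
    value => on,
    persistent => true,
  }
}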

If we leave it in the openstack-selinux package, I think it needs to be a docs step of 'If you wish to run selinux enforcing, install the openstack-selinux rpm'. We currently do not explicitly depend on that package, so adding a conditional to pull it in seems less trivial than just a puppet rule (unless it magically happens somewhere for us already).

Comment 9 Jason Guiditta 2013-12-13 17:00:19 UTC
Bruce, this looks fine to me, let me know if you need anything else

Comment 10 Bruce Reeler 2013-12-16 02:00:29 UTC
(In reply to Jason Guiditta from comment #9)
> Bruce, this looks fine to me, let me know if you need anything else

Thanks Jason, fine as-is.

Comment 11 Brad P. Crochet 2014-01-09 18:32:37 UTC
PR submitted to astapor

https://github.com/redhat-openstack/astapor/pull/96

Comment 13 Yogev Rabl 2014-01-14 09:42:27 UTC
Though we have a workaround, the bug isn't fixed:
the GlusterFS configuration with RHOS should work as the default, without additional configuration.
If packstack can set SELinux booleans for other components, it should do the same for GlusterFS.

Comment 14 Yogev Rabl 2014-01-14 12:13:55 UTC
The versions are:
openstack-packstack-2013.2.1-0.22.dev956.el6ost.noarch
python-cinderclient-1.0.7-2.el6ost.noarch
python-cinder-2013.2.1-4.el6ost.noarch
openstack-cinder-2013.2.1-4.el6ost.noarch

The SELinux configuration on the cinder node is:
getsebool virt_use_fusefs
virt_use_fusefs --> off

Comment 15 Brad P. Crochet 2014-01-14 13:25:42 UTC
This was tested against the RHOS 4.0.z puddle (2014-01-10.1) with quickstack (astapor) source

This fix is for Foreman usage. Sorry about the confusion. The title of the bug should have been changed.

If any node will act as a gluster client or server and SELinux is enabled, quickstack will flip virt_use_fusefs to on.

Comment 16 Yogev Rabl 2014-01-20 09:53:35 UTC
Verified that the SELinux boolean is properly configured:
# /usr/sbin/getsebool virt_use_fusefs
virt_use_fusefs --> on

Comment 19 Lon Hohberger 2014-02-04 17:19:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-0046.html

