Bug 1119845

Summary: nova volume-attach fails when SELinux is enabled and using Ceph/OSD
Product: Red Hat OpenStack
Component: openstack-selinux
Version: 5.0 (RHEL 7)
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Hardware: All
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Keith Schincke <kschinck>
Assignee: Ryan Hallisey <rhallise>
QA Contact: Yogev Rabl <yrabl>
CC: hbrock, kschinck, lhh, mgrepl, nlevinki, rhallise, scohen, slong, yeylon
Fixed In Version: openstack-selinux-0.5.14-3.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, with SELinux in enforcing mode, the svirt-confined qemu-kvm process was denied the execmem and execstack permissions it needed, so Compute failed to attach block storage with 'nova volume-attach'. With this update, the openstack-selinux package enables the virt_use_execmem boolean, and 'nova volume-attach' now succeeds with SELinux enforcing.
Last Closed: 2014-07-24 17:23:50 UTC
Type: Bug

Description Keith Schincke 2014-07-15 15:59:57 UTC
Description of problem:
The nova volume-attach command returns success on the controller node, but the attach fails on the compute node when SELinux is enforcing on the compute nodes.

Here are the audit logs from the compute node while SELinux is permissive:
type=AVC msg=audit(1405141351.574:1913): avc:  denied  { execstack } for  pid=22718 comm="qemu-kvm" scontext=system_u:system_r:svirt_t:s0:c62,c1018 tcontext=system_u:system_r:svirt_t:s0:c62,c1018 tclass=process
type=AVC msg=audit(1405141351.574:1913): avc:  denied  { execmem } for  pid=22718 comm="qemu-kvm" scontext=system_u:system_r:svirt_t:s0:c62,c1018 tcontext=system_u:system_r:svirt_t:s0:c62,c1018 tclass=process
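
For reference, a quick way to pull these denials out of audit.log on the compute node and see audit2allow's suggestion (assuming the audit and policycoreutils-python tools are installed):

# ausearch -m avc -c qemu-kvm
# ausearch -m avc -c qemu-kvm | audit2allow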


Version-Release number of selected component (if applicable):
OSP 5
RHEL 7

How reproducible:
100% while enforcing
0% while permissive

Steps to Reproduce:
1. Create a new volume with cinder, returns success
2. Attach the volume to a running instance with nova; returns success and a disk name (/dev/vdb)
3. Log into the running instance and cat /proc/partitions; no /dev/vdb is seen.
4. Review the audit logs on the compute node for error messages (see the example commands below).
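
As a rough command-level sketch of the steps above (volume size, names, and IDs are illustrative):

$ cinder create --display-name test-vol 1
$ nova volume-attach <instance-id> <volume-id> auto

Inside the guest:
$ cat /proc/partitions

On the compute node:
# ausearch -m avc -ts recent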

Actual results:
nova reports the volume as attached (as /dev/vdb), but the device never appears inside the guest, and AVC denials for qemu-kvm are logged on the compute node.

Expected results:
The attached volume is visible inside the guest, e.g. /dev/vdb shows up in /proc/partitions.

Additional info:

Comment 2 Ryan Hallisey 2014-07-15 18:48:38 UTC
Can you duplicate your steps in permissive mode and attach your audit.log, please?

Comment 3 Keith Schincke 2014-07-15 18:53:28 UTC
The audit logs while running in permissive mode are included in the description.

Comment 4 Ryan Hallisey 2014-07-15 19:01:16 UTC
Thanks, missed that :)

Comment 5 Miroslav Grepl 2014-07-16 07:30:06 UTC
#============= svirt_t ==============

#!!!! This avc can be allowed using the boolean 'virt_use_execmem'
allow svirt_t self:process execmem;

So you want to run

# setsebool -P virt_use_execmem 1
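
To confirm the boolean is set on a compute node before retrying the attach (it should report "on"):

# getsebool virt_use_execmem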

Comment 7 Lon Hohberger 2014-07-17 14:25:42 UTC
So for now, we'll do setsebool -P virt_use_execmem 1 in %post
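
Rough sketch of what that could look like in the spec file (illustrative only; the actual packaging change ships in openstack-selinux-0.5.14-3.el7ost):

%post
# Allow svirt-confined qemu-kvm processes to use executable memory so that
# RBD-backed volume attaches work with SELinux enforcing.
setsebool -P virt_use_execmem 1 || :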

Comment 10 Yogev Rabl 2014-07-22 12:44:48 UTC
verified on:
openstack-nova-scheduler-2014.1.1-1.el7ost.noarch
python-nova-2014.1.1-1.el7ost.noarch
openstack-nova-conductor-2014.1.1-1.el7ost.noarch
openstack-nova-cert-2014.1.1-1.el7ost.noarch
openstack-nova-common-2014.1.1-1.el7ost.noarch
openstack-nova-compute-2014.1.1-1.el7ost.noarch
openstack-nova-api-2014.1.1-1.el7ost.noarch
openstack-nova-novncproxy-2014.1.1-1.el7ost.noarch
openstack-nova-console-2014.1.1-1.el7ost.noarch
openstack-nova-network-2014.1.1-1.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
python-cinderclient-1.0.9-1.el7ost.noarch
openstack-cinder-2014.1.1-1.el7ost.noarch
python-cinder-2014.1.1-1.el7ost.noarch

Comment 12 errata-xmlrpc 2014-07-24 17:23:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0937.html