Bug 965311 - AVC denial when attaching volume to RHOS 3 instance
Status: CLOSED WORKSFORME
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-selinux
Version: 3.0
Hardware/OS: Unspecified / Unspecified
Priority: high    Severity: high
Target Milestone: rc
Target Release: 3.0
Assigned To: Miroslav Grepl
QA Contact: Brandon Perkins
Keywords: TestOnly
Depends On: 973776
Blocks:
Reported: 2013-05-20 17:37 EDT by Lon Hohberger
Modified: 2016-04-26 19:55 EDT (History)
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 971594
Environment:
Last Closed: 2013-06-24 09:23:37 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Lon Hohberger 2013-05-20 17:37:38 EDT
Description of problem:
SSIA (summary says it all).

[root@localhost ~]# grep AVC /var/log/audit/audit.log 
type=AVC msg=audit(1369085100.398:307633): avc:  denied  { read } for  pid=9144 comm="hald-probe-stor" path="/dev/sda" dev=devtmpfs ino=147635 scontext=system_u:system_r:hald_t:s0 tcontext=system_u:object_r:svirt_image_t:s0:c388,c537 tclass=blk_file
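The denial record above can be decoded field by field with plain text tools; a minimal sketch operating on the logged line itself (no audit utilities assumed):

```shell
# Pull the key fields out of the AVC record with grep -o.
avc='type=AVC msg=audit(1369085100.398:307633): avc:  denied  { read } for  pid=9144 comm="hald-probe-stor" path="/dev/sda" dev=devtmpfs ino=147635 scontext=system_u:system_r:hald_t:s0 tcontext=system_u:object_r:svirt_image_t:s0:c388,c537 tclass=blk_file'

echo "$avc" | grep -o 'scontext=[^ ]*'   # source domain: hald_t (HAL probe)
echo "$avc" | grep -o 'tcontext=[^ ]*'   # target type: svirt_image_t (guest disk)
echo "$avc" | grep -o 'tclass=[^ ]*'     # object class: blk_file
```

Reading the fields this way shows the shape of the bug directly: a hald_t process was denied read access to a block device labeled svirt_image_t.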

Version-Release number of selected component (if applicable): 
openstack-selinux-0.1.2-10.el6ost.noarch

How reproducible: Unknown; was not able to reproduce

Steps to Reproduce:
1. Fresh install
2. Create instance
3. Create volume
4. Attach the volume to the instance

Actual results: AVC denial

I will have to reproduce this to provide specific steps.
Comment 1 Kashyap Chamarthy 2013-06-06 16:06:58 EDT
Possible test reproducer for attaching a volume to an instance:

------------------------------------------------
	# Create a cinder volume, and list it
	$ cinder create --display_name=bootable_volume 1
	$ cinder list

	# Add the cinder volume to env. This is the output of "cinder list"
	$ VOLUME_ID=2c370395-7f59-4c89-b312-ba35dbb986c0
	$ echo $VOLUME_ID

	# boot an instance
	$ nova boot --image e1b71961-d66d-4315-8e83-32aa1bd44f3f --flavor 1 --key_name oskey f17-builder

	# Ensure the above booted instance is ACTIVE
	$ nova list | grep f17-builder
	$ cinder list
	
	# Attach a volume
	$ nova volume-attach f17-builder 2c370395-7f59-4c89-b312-ba35dbb986c0 /dev/vdb
	$ cinder list
	$ ssh -i oskey.priv 192.168.32.4
------------------------------------------------
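To tell whether the attach step generated new denials, one pattern is to count AVC records before and after it. A minimal, self-contained sketch; it writes a sample log under /tmp so it runs anywhere, but on a real host LOG would be /var/log/audit/audit.log:

```shell
# Count AVC denials before and after the volume-attach step.
# LOG points at a sample file here; on a real host use /var/log/audit/audit.log.
LOG=/tmp/audit-sample.log
printf '%s\n' \
  'type=AVC msg=audit(1369085100.398:307633): avc:  denied  { read } for  comm="hald-probe-stor" path="/dev/sda" tclass=blk_file' \
  > "$LOG"

before=$(grep -c '^type=AVC' "$LOG")
# ... run "nova volume-attach ..." here, then recount ...
after=$(grep -c '^type=AVC' "$LOG")
echo "new AVC denials: $((after - before))"
```

A nonzero difference after the attach points at exactly this class of denial.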
Comment 2 Eric Harney 2013-06-06 16:08:36 EDT
I reproduced this (the logged AVCs) by following the same steps as above: install, boot vm, attach an LVM iSCSI Cinder volume.

However, it does not seem to break anything, and the attach works.

It is unclear what is accessing /dev/sda; possibly something triggered by iscsiadm.

Same openstack-selinux version; openstack-nova-* 2013.1.1-3.
Comment 3 Eric Harney 2013-06-06 16:17:21 EDT
Note: /dev/sda is the device node for the disk created when the iSCSI initiator logs in during volume attach.
Comment 4 Daniel Walsh 2013-06-06 17:00:28 EDT
This would be a general bug in policy and would happen with or without OpenStack; it should be fixed.
Comment 5 Perry Myers 2013-06-10 11:45:30 EDT
This is being fixed in selinux-policy bug #971594, so this bug becomes TestOnly once we get the updated selinux-policy package.
Comment 11 Kashyap Chamarthy 2013-06-14 09:04:47 EDT
Works here too:

Version:
========
[root@meadow ~(keystone_admin)]# rpm -q selinux-policy
selinux-policy-3.7.19-195.el6_4.11.noarch

[root@meadow ~(keystone_admin)]# getenforce 
Enforcing

Test:
=====
[root@meadow ~(keystone_admin)]# glance image-list
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| ID                                   | Name     | Disk Format | Container Format | Size      | Status |
+--------------------------------------+----------+-------------+------------------+-----------+--------+
| da497c6a-7574-41b8-835f-5875a8c28d56 | fedora17 | qcow2       | bare             | 251985920 | active |
+--------------------------------------+----------+-------------+------------------+-----------+--------+
[root@meadow ~(keystone_admin)]# nova boot --flavor 2 --key_name oskey1 --image \
> da497c6a-7574-41b8-835f-5875a8c28d56 fedora17-t1

[root@meadow ~(keystone_admin)]# VOLUME_ID=9d4a296c-42ad-4eeb-8ebd-0cf59ccf37d8

[root@meadow ~(keystone_admin)]# nova volume-attach fedora17-t1 $VOLUME_ID /dev/vdb
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| serverId | cb1cdaa6-554b-4ad5-a22a-c439254b1c81 |
| id       | 9d4a296c-42ad-4eeb-8ebd-0cf59ccf37d8 |
| volumeId | 9d4a296c-42ad-4eeb-8ebd-0cf59ccf37d8 |
+----------+--------------------------------------+


-> You can also confirm that the block device vdb is attached by running:

[root@meadow ~(keystone_admin)]# sudo virsh list
 Id    Name                           State
----------------------------------------------------
 1     instance-00000001              running

[root@meadow ~(keystone_admin)]# sudo virsh domblklist 1
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/cb1cdaa6-554b-4ad5-a22a-c439254b1c81/disk
vdb        /dev/disk/by-path/ip-10.WW.YYY.ZZZ:3260-iscsi-iqn.2010-10.org.openstack:volume-9d4a296c-42ad-4eeb-8ebd-0cf59ccf37d8-lun-1
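The by-path name in the vdb source line above encodes the iSCSI portal, target IQN, and LUN. A small parameter-expansion sketch pulls them apart; the IP address below is a placeholder (the bug's real address is redacted):

```shell
# Parse an iSCSI by-path device name into portal, IQN, and LUN.
# 192.0.2.10 is a documentation address, not the host from this bug.
src='/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2010-10.org.openstack:volume-9d4a296c-42ad-4eeb-8ebd-0cf59ccf37d8-lun-1'

portal=${src#*/ip-}; portal=${portal%%-iscsi-*}   # 192.0.2.10:3260
iqn=${src#*-iscsi-}; iqn=${iqn%-lun-*}            # iqn.2010-10.org.openstack:volume-9d4a...
lun=${src##*-lun-}                                # 1
echo "portal=$portal iqn=$iqn lun=$lun"
```

The IQN's volume suffix matching the Cinder volume ID is what ties the /dev/sda device node (see comment 3) back to the attached volume.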
Comment 13 Ami Jeain 2013-06-24 09:23:37 EDT
Agreed that we are going to close the bug since we could not reproduce it. This bug is a tracker bug. (Lon)
