Bug 1402561 - Unable to launch instance when /var/lib/nova/instances is mounted on an NFS share
Summary: Unable to launch instance when /var/lib/nova/instances is mounted on an NFS share
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.3
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: pre-dev-freeze
Target Release: 7.3
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Duplicates: 1442070
Depends On:
Blocks: 1442070 1469428
 
Reported: 2016-12-07 20:21 UTC by Marius Cornea
Modified: 2020-12-21 19:39 UTC
CC: 25 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Previously, an SELinux rule for the libvirtd virtualization server was missing. Consequently, when running SELinux in enforcing mode, starting new OpenStack instances failed if the /var/lib/nova/instances/ directory was an NFS share. The policy rule has been added, and starting instances from an NFS share now works as expected.
Clone Of:
Clones: 1442070 1469428
Environment:
Last Closed: 2017-08-01 15:17:42 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1861 0 normal SHIPPED_LIVE selinux-policy bug fix update 2017-08-01 17:50:24 UTC

Description Marius Cornea 2016-12-07 20:21:07 UTC
Description of problem:
SELinux blocks launching an instance when /var/lib/nova/instances is mounted on an NFS share:

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Deploy 1 x controller, 1 x compute with OSPd

2. On the compute node, mount /var/lib/nova/instances on an NFS share. Add the following to fstab and run mount -a:
10.0.0.254:/srv/nfs/nova /var/lib/nova/instances nfs4 defaults 0 0

mount | grep nova
10.0.0.254:/srv/nfs/nova on /var/lib/nova/instances type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.148,local_lock=none,addr=10.0.0.254)

3. setsebool -P virt_use_nfs 1

4. Launch an instance

Actual results:
/var/log/nova/nova-compute.log:
2016-12-07 20:17:43.999 43920 ERROR nova.compute.manager [req-a4292bd5-b3f4-43ed-a1f9-0d3083bf00cd 0450136018584892ae4b948004bf4bb9 ea88f38a6ddb401f9d19881ee19f2b07 - - -] [instance: 4c4ced4b-ac42-40b4-87c9-e9d1beea6327] Instance failed to spawn
2016-12-07 20:17:43.999 43920 ERROR nova.compute.manager [instance: 4c4ced4b-ac42-40b4-87c9-e9d1beea6327] libvirtError: Unable to open file: /var/lib/nova/instances/4c4ced4b-ac42-40b4-87c9-e9d1beea6327/console.log: Permission denied

/var/log/audit/audit.log:

type=AVC msg=audit(1481141863.755:696): avc:  denied  { search } for  pid=50197 comm="virtlogd" name="nova" dev="0:45" ino=1342284999 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1481141863.977:711): avc:  denied  { search } for  pid=50197 comm="virtlogd" name="nova" dev="0:45" ino=1342284999 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1481141871.660:748): avc:  denied  { search } for  pid=50197 comm="virtlogd" name="nova" dev="0:45" ino=1342284999 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1481141873.490:781): avc:  denied  { search } for  pid=50197 comm="virtlogd" name="nova" dev="0:45" ino=1342284999 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=dir
type=AVC msg=audit(1481141882.533:814): avc:  denied  { search } for  pid=50197 comm="virtlogd" name="nova" dev="0:45" ino=1342284999 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:nfs_t:s0 tclass=dir

ls -lZd /var/lib/nova/instances
drwxrwxrwx. nova nova system_u:object_r:nfs_t:s0       /var/lib/nova/instances

Expected results:
Launching an instance is successful.

Additional info:

I'm following the instruction at https://access.redhat.com/solutions/881063
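For anyone triaging a similar failure, the raw AVC records above can be summarized with the standard audit tools. A minimal sketch, assuming auditd is running and you have root:

```shell
# Show recent AVC denials involving virtlogd.
ausearch -m avc -c virtlogd --start recent

# Explain each denial: audit2why reports whether a boolean could
# allow the access, or whether an actual policy change is needed.
ausearch -m avc -c virtlogd --start recent | audit2why
```

If audit2why reports that no boolean covers the access, the denial needs a real policy change, which is what this bug ultimately tracks.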

Comment 1 Martin Schuppert 2016-12-08 07:53:36 UTC
Seems you'd need to relabel the instances /var/lib/nova/instances to 'nova_var_lib_t'. Check BZ 1396518#c6

Comment 2 Marius Cornea 2016-12-08 09:00:31 UTC
(In reply to Martin Schuppert from comment #1)
> Seems you'd need to relabel the instances /var/lib/nova/instances to
> 'nova_var_lib_t'. Check BZ 1396518#c6

Thanks, Martin. I mounted the NFS share with that context, but now a different type of AVC shows up. I wonder if there's something I'm missing here:

fstab:
10.0.0.254:/srv/nfs/nova  /var/lib/nova/instances nfs4 defaults,context=system_u:object_r:nova_var_lib_t:s0 0 0

ls -lZd /var/lib/nova/instances/
drwxrwxrwx. nova nova system_u:object_r:nova_var_lib_t:s0 /var/lib/nova/instances/

/var/log/audit/audit.log:
type=AVC msg=audit(1481187048.386:2338): avc:  denied  { getattr } for  pid=145245 comm="qemu-kvm" path="/var/lib/nova/instances/51f55bd4-abf4-4526-be0a-a7f0f8f4f381/disk" dev="0:43" ino=671104350 scontext=system_u:system_r:svirt_t:s0:c319,c997 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187048.387:2339): avc:  denied  { read } for  pid=145245 comm="qemu-kvm" name="disk" dev="0:43" ino=671104350 scontext=system_u:system_r:svirt_t:s0:c319,c997 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187055.279:2387): avc:  denied  { read } for  pid=145337 comm="qemu-kvm" name="disk" dev="0:43" ino=771796179 scontext=system_u:system_r:svirt_t:s0:c461,c602 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187055.279:2388): avc:  denied  { getattr } for  pid=145337 comm="qemu-kvm" path="/var/lib/nova/instances/02e49d53-9b4c-45c8-ac22-851eeb4f29c7/disk" dev="0:43" ino=771796179 scontext=system_u:system_r:svirt_t:s0:c461,c602 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187055.279:2389): avc:  denied  { read } for  pid=145337 comm="qemu-kvm" name="disk" dev="0:43" ino=771796179 scontext=system_u:system_r:svirt_t:s0:c461,c602 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187089.052:2453): avc:  denied  { read } for  pid=145491 comm="qemu-kvm" name="disk" dev="0:43" ino=805334497 scontext=system_u:system_r:svirt_t:s0:c59,c678 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187089.052:2454): avc:  denied  { getattr } for  pid=145491 comm="qemu-kvm" path="/var/lib/nova/instances/ea6e538c-db05-4717-b954-04e9464af42f/disk" dev="0:43" ino=805334497 scontext=system_u:system_r:svirt_t:s0:c59,c678 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187089.052:2455): avc:  denied  { read } for  pid=145491 comm="qemu-kvm" name="disk" dev="0:43" ino=805334497 scontext=system_u:system_r:svirt_t:s0:c59,c678 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187097.292:2492): avc:  denied  { read } for  pid=145585 comm="qemu-kvm" name="disk" dev="0:43" ino=872449368 scontext=system_u:system_r:svirt_t:s0:c95,c536 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187097.292:2493): avc:  denied  { getattr } for  pid=145585 comm="qemu-kvm" path="/var/lib/nova/instances/d0d46e4e-1cb8-4e9c-a08f-5084d0ad5f8a/disk" dev="0:43" ino=872449368 scontext=system_u:system_r:svirt_t:s0:c95,c536 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187097.292:2494): avc:  denied  { read } for  pid=145585 comm="qemu-kvm" name="disk" dev="0:43" ino=872449368 scontext=system_u:system_r:svirt_t:s0:c95,c536 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
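Before digging into the qemu-kvm denials above, it can help to confirm which label and booleans are actually in effect. A minimal sketch, using the paths from this report:

```shell
# Confirm the effective SELinux label on the instances directory.
ls -Zd /var/lib/nova/instances

# Check whether virt domains are allowed to use NFS-labeled files.
getsebool virt_use_nfs
```

Note that virt_use_nfs only covers files labeled nfs_t; once the context= mount option forces nova_var_lib_t, the boolean no longer applies, which is consistent with the new denials against nova_var_lib_t.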

Comment 5 Martin Schuppert 2016-12-29 15:53:58 UTC
I had the same issue: virtlogd has access to NFS with the changed context, but qemu-kvm has no access to the NFS share if it is mounted using nova_var_lib_t.

What works in my lab env is:

* mount nfs share with default nfs_t context:
# mount |grep nova
192.168.122.1:/srv/nfs/nova on /var/lib/nova/instances type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.122.1)

# ll -Z /var/lib/nova/ | grep instances
drwxrwxrwx. nova nova system_u:object_r:nfs_t:s0       instances

* make sure virt_use_nfs is set
# getsebool virt_use_nfs
virt_use_nfs --> on

* use the 'file' stdio_handler for libvirt:
# grep stdio_handler /etc/libvirt/qemu.conf
#stdio_handler = "logd"
stdio_handler = "file"

* restart libvirtd
# systemctl restart libvirtd

# nova list
+--------------------------------------+-------------+--------+------------+-------------+------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks         |
+--------------------------------------+-------------+--------+------------+-------------+------------------+
| 4bbbeb58-d305-46eb-86a7-858505f07111 | cirros-test | ACTIVE | -          | Running     | private=10.0.0.8 |
+--------------------------------------+-------------+--------+------------+-------------+------------------+
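Summarizing comment 5, the working configuration can be sketched as a short sequence; the NFS server address and export path are the lab values from above, so adjust to your environment:

```shell
# 1. Mount the share with the default nfs_t context (no context= option).
#    fstab entry, lab values from above:
#    192.168.122.1:/srv/nfs/nova  /var/lib/nova/instances  nfs  defaults  0 0
mount /var/lib/nova/instances

# 2. Allow virt domains to use NFS-labeled files (-P makes it persistent).
setsebool -P virt_use_nfs 1

# 3. Have QEMU write its console log directly instead of via virtlogd,
#    which lacks NFS access in the current policy. The commented-out
#    "logd" default stays in place, matching the grep output above.
echo 'stdio_handler = "file"' >> /etc/libvirt/qemu.conf

# 4. Restart libvirtd to pick up the change.
systemctl restart libvirtd
```

As comment 7 points out, stdio_handler = "file" trades away virtlogd's log rollover protection, so this is a workaround rather than a fix.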

Comment 6 Matthew Booth 2017-01-06 09:19:24 UTC
As described in comment 5, I'm pretty confident this works when correctly configured. There's a KB about it here:

  https://access.redhat.com/articles/1323213

I'm going to close this to keep BZ tidy, but please feel free to re-open if the above doesn't resolve the problem.

Comment 7 Chaitanya Shastri 2017-04-13 12:24:51 UTC
Hi,

   I am able to spawn and migrate instances with the NFS backend as per comment 5, but I am reopening this bug because of the security concerns involved in setting 'stdio_handler' to 'file' in '/etc/libvirt/qemu.conf', as noted in the comments in the configuration file:

~~~
# The backend to use for handling stdout/stderr output from
# QEMU processes.
#
#  'file': QEMU writes directly to a plain file. This is the
#          historical default, but allows QEMU to inflict a
#          denial of service attack on the host by exhausting
#          filesystem space
#
#  'logd': QEMU writes to a pipe provided by virtlogd daemon.
#          This is the current default, providing protection
#          against denial of service by performing log file
#          rollover when a size limit is hit.
#
#stdio_handler = "logd"
~~~

I am able to reproduce this bug in RHOS 7, and I am sure that it is present through versions 7 to 10.

Comment 11 Artom Lifshitz 2017-04-21 14:35:11 UTC
The 'stdio_handler = file' workaround should not be necessary. The fix is an SELinux policy that allows virtlogd to write to NFS. We need to figure out whether:

1. such a policy is already available in OSP, and how to enable it if so; and/or
2. such a policy needs to be created, if one doesn't already exist.

In order to do this I'm re-targeting this bug to selinux-policy.
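Until such a policy ships, a local module generated from the recorded denials is the usual stopgap. A minimal sketch using the standard audit2allow workflow; the module name is arbitrary, and a hand-written rule in the shipped policy is the real fix:

```shell
# Collect the virtlogd AVC denials and generate a local policy module
# (here named "virtlogd_nfs"; the name is arbitrary).
ausearch -m avc -c virtlogd | audit2allow -M virtlogd_nfs

# Review the generated rules before loading anything.
cat virtlogd_nfs.te

# Install the module; it persists across reboots until removed
# with "semodule -r virtlogd_nfs".
semodule -i virtlogd_nfs.pp
```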

Comment 12 Artom Lifshitz 2017-04-24 20:11:35 UTC
*** Bug 1442070 has been marked as a duplicate of this bug. ***

Comment 27 errata-xmlrpc 2017-08-01 15:17:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1861

