Bug 1402561
Summary: Unable to launch instance when /var/lib/nova/instances is mounted on an NFS share

| Field | Value |
| --- | --- |
| Product | Red Hat Enterprise Linux 7 |
| Component | selinux-policy |
| Version | 7.3 |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | urgent |
| Reporter | Marius Cornea <mcornea> |
| Assignee | Lukas Vrabec <lvrabec> |
| QA Contact | Milos Malik <mmalik> |
| CC | amedeo.salvati, berrange, cshastri, dasmith, eglynn, fkrska, francesco.pan001, ipilcher, jhakimra, kchamart, lvrabec, mbooth, mcornea, mgrepl, mmalik, mschuppe, plautrba, pvrabec, rbryant, sbauza, sferdjao, sgordon, srevivo, ssekidde, vromanso |
| Keywords | Regression, Reopened, ZStream |
| Target Milestone | pre-dev-freeze |
| Target Release | 7.3 |
| Hardware | All |
| OS | Linux |
| Type | Bug |
| Clones | 1442070, 1469428 |
| Bug Blocks | 1442070, 1469428 |
| Last Closed | 2017-08-01 15:17:42 UTC |

Doc Text: Previously, an SELinux rule for the libvirtd virtualization server was missing. Consequently, when running SELinux in enforcing mode, starting new OpenStack instances failed if the /var/lib/nova/instances/ directory was an NFS share. The policy rule has been added, and starting instances from an NFS share now works as expected.
Description (Marius Cornea, 2016-12-07 20:21:07 UTC)

---
Comment 1 (Martin Schuppert):

Seems you'd need to relabel /var/lib/nova/instances to 'nova_var_lib_t'. Check BZ 1396518#c6.

---

Comment:

(In reply to Martin Schuppert from comment #1)
> Seems you'd need to relabel the instances /var/lib/nova/instances to
> 'nova_var_lib_t'. Check BZ 1396518#c6

Thanks, Martin. I mounted the NFS share with this context, but now a different type of AVC shows up. I wonder if there's something I'm missing here.

fstab:

```
10.0.0.254:/srv/nfs/nova /var/lib/nova/instances nfs4 defaults,context=system_u:object_r:nova_var_lib_t:s0 0 0
```

```
# ls -lZd /var/lib/nova/instances/
drwxrwxrwx. nova nova system_u:object_r:nova_var_lib_t:s0 /var/lib/nova/instances/
```

/var/log/audit/audit.log:

```
type=AVC msg=audit(1481187048.386:2338): avc: denied { getattr } for pid=145245 comm="qemu-kvm" path="/var/lib/nova/instances/51f55bd4-abf4-4526-be0a-a7f0f8f4f381/disk" dev="0:43" ino=671104350 scontext=system_u:system_r:svirt_t:s0:c319,c997 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187048.387:2339): avc: denied { read } for pid=145245 comm="qemu-kvm" name="disk" dev="0:43" ino=671104350 scontext=system_u:system_r:svirt_t:s0:c319,c997 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187055.279:2387): avc: denied { read } for pid=145337 comm="qemu-kvm" name="disk" dev="0:43" ino=771796179 scontext=system_u:system_r:svirt_t:s0:c461,c602 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187055.279:2388): avc: denied { getattr } for pid=145337 comm="qemu-kvm" path="/var/lib/nova/instances/02e49d53-9b4c-45c8-ac22-851eeb4f29c7/disk" dev="0:43" ino=771796179 scontext=system_u:system_r:svirt_t:s0:c461,c602 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187055.279:2389): avc: denied { read } for pid=145337 comm="qemu-kvm" name="disk" dev="0:43" ino=771796179 scontext=system_u:system_r:svirt_t:s0:c461,c602 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187089.052:2453): avc: denied { read } for pid=145491 comm="qemu-kvm" name="disk" dev="0:43" ino=805334497 scontext=system_u:system_r:svirt_t:s0:c59,c678 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187089.052:2454): avc: denied { getattr } for pid=145491 comm="qemu-kvm" path="/var/lib/nova/instances/ea6e538c-db05-4717-b954-04e9464af42f/disk" dev="0:43" ino=805334497 scontext=system_u:system_r:svirt_t:s0:c59,c678 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187089.052:2455): avc: denied { read } for pid=145491 comm="qemu-kvm" name="disk" dev="0:43" ino=805334497 scontext=system_u:system_r:svirt_t:s0:c59,c678 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187097.292:2492): avc: denied { read } for pid=145585 comm="qemu-kvm" name="disk" dev="0:43" ino=872449368 scontext=system_u:system_r:svirt_t:s0:c95,c536 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187097.292:2493): avc: denied { getattr } for pid=145585 comm="qemu-kvm" path="/var/lib/nova/instances/d0d46e4e-1cb8-4e9c-a08f-5084d0ad5f8a/disk" dev="0:43" ino=872449368 scontext=system_u:system_r:svirt_t:s0:c95,c536 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
type=AVC msg=audit(1481187097.292:2494): avc: denied { read } for pid=145585 comm="qemu-kvm" name="disk" dev="0:43" ino=872449368 scontext=system_u:system_r:svirt_t:s0:c95,c536 tcontext=system_u:object_r:nova_var_lib_t:s0 tclass=file
```
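A quick way to see what rules denials like these imply is to pipe them through audit2allow. This is a minimal triage sketch, not a step from the thread above; it assumes the denials are still present in the audit log, and the module name svirt_nova_test is made up:

```sh
# Summarize the qemu-kvm denials and print the allow rules that
# would silence them (diagnosis only, not a recommended fix).
ausearch -m avc -c qemu-kvm | audit2allow

# Optionally bundle the suggested rules into a throwaway local
# module for testing (module name is hypothetical):
ausearch -m avc -c qemu-kvm | audit2allow -M svirt_nova_test
semodule -i svirt_nova_test.pp
```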
---

Comment 5:

Had the same issue: virtlogd has access to the NFS share with the changed context, but qemu-kvm has no access to the share if it is mounted using nova_var_lib_t. What works in my lab env is:

* Mount the NFS share with the default nfs_t context:

```
# mount | grep nova
192.168.122.1:/srv/nfs/nova on /var/lib/nova/instances type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.122.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.122.1)
# ll -Z /var/lib/nova/ | grep instances
drwxrwxrwx. nova nova system_u:object_r:nfs_t:s0 instances
```

* Make sure virt_use_nfs is set:

```
# getsebool virt_use_nfs
virt_use_nfs --> on
```

* Use the file stdio_handler for libvirt:

```
# grep stdio_handler /etc/libvirt/qemu.conf
#stdio_handler = "logd"
stdio_handler = "file"
```

* Restart libvirtd:

```
# systemctl restart libvirtd
# nova list
+--------------------------------------+-------------+--------+------------+-------------+------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks         |
+--------------------------------------+-------------+--------+------------+-------------+------------------+
| 4bbbeb58-d305-46eb-86a7-858505f07111 | cirros-test | ACTIVE | -          | Running     | private=10.0.0.8 |
+--------------------------------------+-------------+--------+------------+-------------+------------------+
```

---

Comment:

As described in comment 5, I'm pretty confident this works when correctly configured. There's a KB article about it here: https://access.redhat.com/articles/1323213

I'm going to close this to keep BZ tidy, but please feel free to re-open if the above doesn't resolve the problem.

---

Comment:

Hi, I am able to spawn and migrate instances with the NFS backend as per comment 5, but I am reopening this bug because of the security concerns involved in setting 'stdio_handler' to 'file' in /etc/libvirt/qemu.conf, given the comments present in the configuration file:

```
# The backend to use for handling stdout/stderr output from
# QEMU processes.
#
#  'file': QEMU writes directly to a plain file. This is the
#          historical default, but allows QEMU to inflict a
#          denial of service attack on the host by exhausting
#          filesystem space
#
#  'logd': QEMU writes to a pipe provided by virtlogd daemon.
#          This is the current default, providing protection
#          against denial of service by performing log file
#          rollover when a size limit is hit.
#
#stdio_handler = "logd"
```

I am able to reproduce this bug in RHOS 7 and I am sure it is present through versions 7 to 10.

---

Comment:

The 'stdio_handler = file' workaround should not be necessary. The fix is an SELinux policy that allows virtlogd to write to NFS. We need to figure out whether:

1. such a policy is available in OSP, and how to enable it if it is, and/or
2. such a policy needs to be created if one doesn't already exist.

In order to do this I'm re-targeting this bug to selinux-policy.

---

*** Bug 1442070 has been marked as a duplicate of this bug. ***

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1861
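For reference, the shape of the rule such a fix adds can be sketched as a local policy module. This is a minimal sketch, not the actual change shipped in the advisory; it assumes the virtlogd_t and nfs_t types from the RHEL 7 base policy, and the module name virtlogd_nfs is made up:

```sh
# Hypothetical local module approximating the missing virtlogd->NFS
# access; the real fix landed in the distribution selinux-policy
# package via RHBA-2017:1861, so this is illustrative only.
cat > virtlogd_nfs.te <<'EOF'
policy_module(virtlogd_nfs, 1.0)

gen_require(`
    type virtlogd_t;
    type nfs_t;
')

# Let virtlogd create and append QEMU console logs under an
# NFS-backed /var/lib/nova/instances.
allow virtlogd_t nfs_t:dir rw_dir_perms;
allow virtlogd_t nfs_t:file { create open read write append getattr setattr unlink };
EOF

# Build and install (requires the selinux-policy-devel package):
make -f /usr/share/selinux/devel/Makefile virtlogd_nfs.pp
semodule -i virtlogd_nfs.pp
```

On systems carrying the fixed selinux-policy package a local module like this should be unnecessary; it would only be a stopgap for older policy versions.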