Description of problem:
When a new VM is requested, nova-compute fails to mount the image for data injection because of the SELinux policy. After changing SELinux to "Permissive", nova-compute is able to mount the image and inject data as expected. nbd is not available in RHEL, so nova tries to use guestfs.

Version-Release number of selected component (if applicable):
Using CERN Scientific Linux 6.3, nova package -> openstack-nova-2012.1.1-15.el6

How reproducible:
Always

Steps to Reproduce:
1. nova boot --image your_image --flavor your_flavor --file filetest=filetest vmtest
2. Check the openstack-nova-compute logs

Actual results:
guestmount fails to mount the image (openstack-nova-compute logs):

2012-11-13 09:21:20 INFO nova.virt.libvirt.connection [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] [instance: 6c4ca58b-f187-45db-8173-b7559a780512] Injecting metadata into image 95bd46d9-6c53-46b5-b3cf-9622b28f66cd
2012-11-13 09:21:20 INFO nova.virt.libvirt.connection [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] [instance: 6c4ca58b-f187-45db-8173-b7559a780512] Injecting key into image 95bd46d9-6c53-46b5-b3cf-9622b28f66cd
2012-11-13 09:21:20 DEBUG nova.virt.disk.api [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] nbd unavailable: module not loaded from (pid=30649) mount /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:205
2012-11-13 09:21:20 DEBUG nova.utils [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] Running cmd (subprocess): sudo nova-rootwrap guestmount --rw -a /var/lib/nova/instances/instance-00000093/disk -i /tmp/tmpVktmfj from (pid=30649) execute /usr/lib/python2.6/site-packages/nova/utils.py:220
2012-11-13 09:21:21 DEBUG nova.utils [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] Result was 1 from (pid=30649) execute /usr/lib/python2.6/site-packages/nova/utils.py:236
2012-11-13 09:21:21 DEBUG nova.utils [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] Unexpected error while running command. Command: sudo nova-rootwrap guestmount --rw -a /var/lib/nova/instances/instance-00000093/disk -i /tmp/tmpVktmfj Exit code: 1 Stdout: '' Stderr: 'libguestfs: error: guestfs_launch failed, see earlier error messages\n' from (pid=30649) trycmd /usr/lib/python2.6/site-packages/nova/utils.py:278
2012-11-13 09:21:21 DEBUG nova.utils [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] Running cmd (subprocess): sudo nova-rootwrap fusermount -u /tmp/tmpVktmfj from (pid=30649) execute /usr/lib/python2.6/site-packages/nova/utils.py:220
2012-11-13 09:21:21 DEBUG nova.utils [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] Result was 1 from (pid=30649) execute /usr/lib/python2.6/site-packages/nova/utils.py:236
2012-11-13 09:21:21 DEBUG nova.utils [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] Unexpected error while running command. Command: sudo nova-rootwrap fusermount -u /tmp/tmpVktmfj Exit code: 1 Stdout: '' Stderr: '/bin/fusermount: failed to unmount /tmp/tmpVktmfj: Invalid argument\n' from (pid=30649) trycmd /usr/lib/python2.6/site-packages/nova/utils.py:278
2012-11-13 09:21:21 DEBUG nova.virt.disk.api [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] Failed to mount filesystem: Unexpected error while running command. Command: sudo nova-rootwrap guestmount --rw -a /var/lib/nova/instances/instance-00000093/disk -i /tmp/tmpVktmfj Exit code: 1 Stdout: '' Stderr: 'libguestfs: error: guestfs_launch failed, see earlier error messages\n' from (pid=30649) mount /usr/lib/python2.6/site-packages/nova/virt/disk/api.py:205
2012-11-13 09:21:21 WARNING nova.virt.libvirt.connection [req-467dff33-e22b-4b74-bedf-64aa16d37e4b b7aa0805440f41bfa69b000bb475a0eb 237745f6e81d4a8494eea1b168d73610] [instance: 6c4ca58b-f187-45db-8173-b7559a780512] Ignoring error injecting data into image 95bd46d9-6c53-46b5-b3cf-9622b28f66cd ( -- nbd unavailable: module not loaded -- Failed to mount filesystem: Unexpected error while running command. Command: sudo nova-rootwrap guestmount --rw -a /var/lib/nova/instances/instance-00000093/disk -i /tmp/tmpVktmfj Exit code: 1 Stdout: '' Stderr: 'libguestfs: error: guestfs_launch failed, see earlier error messages\n')

Expected results:
The image is mounted and data injection succeeds.

Additional info:
In audit.log:

type=AVC msg=audit(1352816002.979:249317): avc: denied { read } for pid=2806 comm="qemu-kvm" name="disk" dev=dm-3 ino=656740 scontext=unconfined_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:nova_var_lib_t:s0 tclass=file
type=SYSCALL msg=audit(1352816002.979:249317): arch=c000003e syscall=2 success=no exit=-13 a0=7fae966dbc20 a1=800 a2=0 a3=65636e6174736e69 items=0 ppid=2797 pid=2806 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19511 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=unconfined_u:system_r:qemu_t:s0-s0:c0.c1023 key=(null)
type=AVC msg=audit(1352816002.980:249318): avc: denied { getattr } for pid=2806 comm="qemu-kvm" path="/var/lib/nova/instances/instance-000000a7/disk" dev=dm-3 ino=656740 scontext=unconfined_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:nova_var_lib_t:s0 tclass=file
type=SYSCALL msg=audit(1352816002.980:249318): arch=c000003e syscall=4 success=no exit=-13 a0=7fae966dbc20 a1=7fffedb37730 a2=7fffedb37730 a3=65636e6174736e69 items=0 ppid=2797 pid=2806 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19511 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=unconfined_u:system_r:qemu_t:s0-s0:c0.c1023 key=(null)
type=AVC msg=audit(1352816002.980:249319): avc: denied { read } for pid=2806 comm="qemu-kvm" name="disk" dev=dm-3 ino=656740 scontext=unconfined_u:system_r:qemu_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:nova_var_lib_t:s0 tclass=file

Trying to toggle the related boolean fails:

# setsebool -P allow_unconfined_qemu_transition 0
libsemanage.dbase_llist_set: record not found in the database (No such file or directory).
libsemanage.dbase_llist_set: could not set record value (No such file or directory).
Could not change boolean allow_unconfined_qemu_transition
Could not change policy booleans

selinux-policy-targeted-3.7.19-155.el6_3.4
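For reference, AVC denials like the ones above can be turned into a candidate local policy module with audit2allow (a sketch; the module name "nova_qemu_local" is arbitrary, and the generated .te should be reviewed before loading):

```shell
# Build a candidate local module from the recorded AVC denials.
# Review nova_qemu_local.te before loading -- audit2allow output can be
# broader than the access you actually want to grant.
grep avc /var/log/audit/audit.log | audit2allow -M nova_qemu_local
semodule -i nova_qemu_local.pp
```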
I think this comes under a general theme of allowing services greater access to operations under /var/lib/nova, i.e.:
1. qemu (libguestfs) would like to read images from /var/lib/nova/instances/...
2. The nova_var_lib_t context on /var/lib/nova needs to be configured to allow search access by sshd_t (for the migrate and resize feature)
3. The ssh_home_t context will need to be associated with /var/lib/nova/.ssh (for the migrate and resize feature)
It would be great if we could at least provide commands to achieve/persist the above 3, so immediate workarounds could be documented.
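For item 1, one possible way to persist the labeling looks like this (a sketch; choosing virt_image_t as the target type is an assumption that would need checking against the targeted policy):

```shell
# Assumed type: virt_image_t -- verify against your selinux-policy
# before applying. Records the rule in the policy store, then relabels.
semanage fcontext -a -t virt_image_t "/var/lib/nova/instances(/.*)?"
restorecon -R -v /var/lib/nova/instances
```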
Hi Pádraig,

We recently encountered cases 2 and 3, which we puppetized away. What about changing the %pre install scriptlet of openstack-nova-common to:

if ! getent passwd nova >/dev/null; then
  useradd -u 162 -r -g nova -G nova,nobody -d /var/lib/nova \
    -s /sbin/nologin -c "OpenStack Nova Daemons" nova
  chcon -u system_u -r object_r -t user_home_t /var/lib/nova
  mkdir /var/lib/nova/.ssh
  chmod 700 /var/lib/nova/.ssh
  chcon -u system_u -r object_r -t ssh_home_t /var/lib/nova/.ssh
fi
Thanks for that, Belmiro. Those SELinux settings are generic and appropriate at the packaging level. The rest of the config, as documented here: https://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL#Migrate_and_Resize would be site specific and best done outside the package level.

Note I'd probably add that SELinux config to the openstack-nova-compute subpackage only, as that's the only one that needs the migrate and resize feature, and use a single mkdir call like:

# To support migrate and resize, using ssh
chcon -u system_u -r object_r -t user_home_t /var/lib/nova
mkdir -p -m 700 --context=system_u:object_r:ssh_home_t:s0 /var/lib/nova/.ssh

BTW, I'm guessing this feature will be better integrated in nova upstream in future, and so may not need to rely on ssh directly. Anyway, let's move this bug back to the specific issue in comment #1.
chcon-ing is not a proper way to make SELinux file context changes; it should be done with `semanage fcontext -a ...`, otherwise the next restorecon will revert it. See: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-SELinux_Contexts_Labeling_Files-Persistent_Changes_semanage_fcontext.html
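Persistent equivalents of the chcon calls in the proposed scriptlet might look like this (a sketch; the path regex and the -s system_u flag are assumptions to verify locally):

```shell
# Record the contexts in the policy store so restorecon keeps them,
# instead of chcon-ing (which the next restorecon would undo).
semanage fcontext -a -s system_u -t user_home_t "/var/lib/nova"
semanage fcontext -a -s system_u -t ssh_home_t "/var/lib/nova/\.ssh(/.*)?"
# Re-apply the stored contexts to the files on disk.
restorecon -R -v /var/lib/nova
```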
These audit2allow rules from enakai's compute node are probably related:

#============= qemu_t ==============
#!!!! The source type 'qemu_t' can write to a 'file' of the following types:
# virt_image_type, virt_cache_t, xen_image_t, qemu_var_run_t, anon_inodefs_t, qemu_tmp_t, qemu_image_t, qemu_tmpfs_t, tmpfs_t, nfs_t, usbfs_t, cifs_t, dosfs_t

allow qemu_t nova_var_lib_t:file { read write ioctl open getattr };
allow qemu_t self:capability dac_override;

#============= svirt_t ==============
allow svirt_t self:tun_socket relabelto;
allow svirt_t virtd_t:tun_socket relabelfrom;
Attaching a policy module that solves the issue on RHEL 6.3 (similar to what is described in comment #5). All you need to do is:

# checkmodule -m -M nova_qemu_compiled.te -o nova_qemu_compiled.mod
# semodule_package -m nova_qemu_compiled.mod -o nova_qemu_compiled.pp
# semodule -i nova_qemu_compiled.pp
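For reference, a module of roughly this shape would match the qemu_t rules quoted in comment #6 (a sketch of what such a .te source could contain; the actual attached module may differ):

```
module nova_qemu_compiled 1.0;

require {
        type qemu_t;
        type nova_var_lib_t;
        class file { read write ioctl open getattr };
        class capability { dac_override };
}

# let qemu read/write instance disk files labelled nova_var_lib_t
allow qemu_t nova_var_lib_t:file { read write ioctl open getattr };
allow qemu_t self:capability dac_override;
```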
Created attachment 652638 [details] policy module that fixes the issue
While the policy in comment #7 may work, a more abstract policy is probably more appropriate, i.e. add central policies mapping /var/lib/nova/instances... to virt_image_t, /var/lib/nova/.ssh to ssh_home_t, etc. Nikola was going to check whether the same issue applied to Fedora (while double-checking that the more abstract settings work), and then reassign this bug to selinux-policy with details. A caveat to note is that nova may create directories in certain cases, especially when dealing with shared storage, so we may need to look at this upstream too, in the nova ensure_tree() call, which may need to try to `restorecon the_new_tree` to cover these cases.
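The kind of change meant for ensure_tree() could be sketched like this (hypothetical helper, not upstream nova code; `execute` stands in for nova's rootwrap-aware command runner):

```python
import os


def ensure_tree_with_restorecon(path, execute):
    """Create a directory tree, then relabel it so directories created
    by nova (e.g. on shared storage) pick up the policy's default
    SELinux context. Hypothetical sketch; `execute` stands in for
    nova's rootwrap-aware runner (utils.execute)."""
    if not os.path.isdir(path):
        os.makedirs(path)
        # restorecon -R resets the new tree to the default labels
        execute('restorecon', '-R', path, run_as_root=True)
```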
*** Bug 909832 has been marked as a duplicate of this bug. ***
We're seeing this behaviour with Folsom from EPEL on RHEL 6.4. Looking at the issue, it sounds like it should be fixed somewhere in the core SELinux policies (which ship a nova module, BTW). That doesn't seem to have been addressed yet. Should we expect fixes to come "down the pipe" in RHEL 6.5, or shall we roll our own like Nikola did?
This message is a reminder that EPEL 6 is nearing its end of life. Fedora will stop maintaining and issuing updates for EPEL 6 on 2020-11-30. It is our policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a 'version' of 'el6'. Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later EPEL version. Thank you for reporting this issue, and we are sorry that we were not able to fix it before EPEL 6 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above.
EPEL el6 changed to end-of-life (EOL) status on 2020-11-30. EPEL el6 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug. If you can reproduce this bug against a currently maintained version of EPEL please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug. Thank you for reporting this bug and we are sorry it could not be fixed.