Created attachment 632909 [details]
This is the audit log showing the AVC message, plus the sanlock.log file.

Description of problem:
I get the following AVC message when trying to run a VM from the oVirt admin tool:

type=AVC msg=audit(1351051834.851:720): avc: denied { read } for pid=979 comm="sanlock" name="8798edc0-dbd2-466d-8be9-1997f63e196f" dev="dm-4" ino=3145737 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:mnt_t:s0 tclass=lnk_file

The file it is attempting to read is, I believe (going by sanlock.log), the following:

# ls -lZ /rhev/data-center/a8ea368c-bc08-4e10-81e7-c8439bf7bd35/8798edc0-dbd2-466d-8be9-1997f63e196f/images/b029b5a6-9eb3-4a34-ad03-1ac4386e8c7c/71252c8f-68a9-495f-b5a6-4e8e035b56ea.lease
-rw-rw----. vdsm kvm system_u:object_r:nfs_t:s0 /rhev/data-center/a8ea368c-bc08-4e10-81e7-c8439bf7bd35/8798edc0-dbd2-466d-8be9-1997f63e196f/images/b029b5a6-9eb3-4a34-ad03-1ac4386e8c7c/71252c8f-68a9-495f-b5a6-4e8e035b56ea.lease

The VM image file is stored on an NFS file server (configured here with NFSv3). Both the client and the server run FC17. The error occurs when trying to start the VM. The oVirt version I am using is a recent nightly build (ovirt-engine 3.1.0-3.1345126685.git7649eed.fc17). My guess is that the sanlock process does not have rights to open some NFS resources.

To work around this problem, I can either disable SELinux enforcement or edit /etc/libvirt/qemu.conf to comment out the line lock_manager="sanlock".

Version-Release number of selected component (if applicable):
I am running a nightly build of ovirt-engine. In particular:
ovirt-engine.noarch 3.1.0-3.1345126685.git7649eed.fc17
and its related vdsm:
vdsm.x86_64 4.10.0-10.fc17 @updates

How reproducible:
Fails when I create the first VM after a new installation and invoke "Run Once" to start it. The VM does not start.

Steps to Reproduce:
1. Install the oVirt nightly build on an FC17 system.
2. Using oVirt, create an NFS data center and a cluster.
3. Add a host using the oVirt manager tool (my host runs an FC17 minimal install) and attach that host to the NFS data center.
4. Add/attach an NFS storage domain.
5. Attach the ISO storage domain and add any ISOs.
6. Create a VM.
7. Run the VM using the "Run Once" menu in the admin tool.

Actual results:
The oVirt management tool returns an error event. Log files on the host indicate an error getting a lock; sanlock.log confirms this. After disabling SELinux (setenforce 0) and retrying the Run Once operation, I see an AVC message in /var/log/audit/audit.log but the VM starts up with no errors.

Expected results:
I expected the VM to run and not fail when starting up.

Additional info:
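For triage, it may help to compare the label the path actually carries against the one the policy expects, and to ask the policy directly why the access was denied. A sketch of useful checks, assuming matchpathcon (libselinux-utils) and audit2why (policycoreutils-python) are installed:

# matchpathcon /rhev/data-center
# ls -ldZ /rhev/data-center
# grep sanlock /var/log/audit/audit.log | audit2why

The denial itself is sanlock (sanlock_t) being refused read access to a symlink labeled mnt_t, while the lease file the symlink leads to is correctly labeled nfs_t.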
selinux folks: can you please take a look at this? There is a near-constant stream of SELinux problems with sanlock and wdmd (see also bug 831908).
Could it be that you do not have sanlock_use_nfs=1 set? Could you check if http://gerrit.ovirt.org/8755 solves your woes?
Sorry, I should have included the following:

# getsebool -a | grep sanlock
sanlock_use_fusefs --> off
sanlock_use_nfs --> on
sanlock_use_samba --> off
virt_use_sanlock --> on

# grep -v -e "^#" -e "^$" /etc/libvirt/qemu.conf
dynamic_ownership=0
spice_tls=1
spice_tls_x509_cert_dir="/etc/pki/vdsm/libvirt-spice"
lock_manager="sanlock"

If sanlock_use_nfs is 0/off, I get an error much earlier, when creating the NFS storage domain (and the lock within). I'm aware of 8755; it solved my first problem (sanlock_use_nfs was off). Now that I can create a storage domain, I still can't run a VM without either turning off SELinux enforcement or turning off locks in libvirt/qemu (commenting out the lock_manager="sanlock" line).
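For completeness, if that boolean ever needs flipping by hand, the persistent form (the -P flag writes the setting to the policy store so it survives reboots) is:

# setsebool -P sanlock_use_nfs on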
Turn off SELinux enforcement (setenforce 0, i.e. permissive mode, so denials are logged but not enforced) and set lock_manager="sanlock". Take all the AVC errors that you find in /var/log/messages and /var/log/audit/audit.log and attach them here. Moving the bug to selinux-policy.
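A quick way to pull just the AVC records out of the audit log, assuming the audit package's ausearch tool is installed:

# ausearch -m avc -ts recent
# ausearch -m avc -c sanlock

The second form filters to denials triggered by the sanlock process.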
The only AVC error I see is the one in the initial report. To repeat, it is:

type=AVC msg=audit(1351051834.851:720): avc: denied { read } for pid=979 comm="sanlock" name="8798edc0-dbd2-466d-8be9-1997f63e196f" dev="dm-4" ino=3145737 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:mnt_t:s0 tclass=lnk_file
Did you set up a link file in /mnt?
No, I didn't add anything in /mnt. ovirt/vdsm NFS-mounted the storage directory under /rhev/data-center/...
AFAIK we had a similar issue on RHEL6, where we needed to allow this for virt domains. The problem is that the storage directory is mounted under /rhev/data-center/, and we label these parent directories as mnt_t. Execute

# grep sanlock /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

to see if it works. I have added it to the policy.
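For reference, the local module that audit2allow generates from the AVC above should look roughly like this (mypol.te; a sketch of the expected output, not captured from an actual run):

module mypol 1.0;

require {
        type sanlock_t;
        type mnt_t;
        class lnk_file read;
}

#============= sanlock_t ==============
allow sanlock_t mnt_t:lnk_file read;

semodule -i then loads the compiled mypol.pp; once the fix lands in the base policy, the workaround module can be removed again with semodule -r mypol.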
I upgraded my system to the latest nightly build and no longer get the error when starting a VM on an NFS storage domain. When I tried your commands above, I now get a "Nothing to do" message, which just means audit2allow found no matching denials in the log:

[root@mech ~]# grep sanlock /var/log/audit/audit.log | audit2allow -M mypol
Nothing to do
[root@mech ~]#

So either the problem was resolved in the most recent nightly build (or two), or yet another reinstallation solved it.
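One way to tell which of the two it was (a suggested check, not part of the original report) is to see whether the upgrade also pulled in a newer policy package:

# rpm -q selinux-policy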
selinux-policy-3.10.0-159.fc17 has been submitted as an update for Fedora 17. https://admin.fedoraproject.org/updates/selinux-policy-3.10.0-159.fc17
Package selinux-policy-3.10.0-159.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing selinux-policy-3.10.0-159.fc17'
as soon as you are able to. Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-17782/selinux-policy-3.10.0-159.fc17
then log in and leave karma (feedback).
selinux-policy-3.10.0-159.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report.