Created attachment 501672 [details]
combined file with all my policies (*.te)

Description of problem:
When confining users with SELinux to the user_u group as per:
http://docs.fedoraproject.org/en-US/Fedora/13/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Confining_Users-Confining_Existing_Linux_Users_semanage_login.html
the confined users can no longer use the nvidia driver. Yes, I know the nvidia driver is proprietary, but the nouveau driver does not work with my hardware, and this bug report is perhaps against the SELinux policy and not nvidia ;) I have attached the various .te policies I need to allow confined users to access the nvidia driver. I could only attach one file, so I combined all my policies; I hope it is not too confusing. Thank you.

Version-Release number of selected component (if applicable):

How reproducible:
Use SELinux to confine users as above.

Steps to Reproduce:
1. Confine users
2.
3.

Actual results:
Users cannot use the nvidia driver -> white screen.

Expected results:
Users can log in and use the desktop normally.

Additional info:
See attachment.
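For context, the confinement step from the linked documentation boils down to a semanage login mapping; a minimal sketch, with "exampleuser" standing in for the real account name (the name is a placeholder, not taken from this report):

# semanage login -a -s user_u exampleuser
# semanage login -l    # verify the new mapping

Once mapped, the user logs in as user_u, which is where the nvidia denials described above appear.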
Looks like some device nodes are mislabelled. I guess udev, systemd and dracut did not catch that somehow. I guess you could try adding a restorecon -R -v -F /dev in rc.local or something similar?
The reason the nvidia device is not usable by user_u is that it is mislabeled. If, after the device is created, you run a restorecon on it:

restorecon /dev/nvid*

it will be labeled correctly and your confined users will be allowed to access it. Similarly, the other allow rules in your policy seem to be related to ~/ being mislabeled.

restorecon -R -v ~/

should clean them up.
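If you want to check what label the policy expects for those nodes before running restorecon, matchpathcon (from libselinux-utils) prints the default context; a sketch using the device paths from this report:

# matchpathcon /dev/nvidia0 /dev/nvidiactl

This should report the xserver_misc_device_t context if the file_context entries for the nvidia devices are in place, which is what restorecon then applies.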
Daniel Walsh: Thank you for your time and detailed response. restorecon works, but only temporarily: the labels are reset when I reboot and must be restored as root, which is inconvenient.

These are the default labels:

crw-rw-rw-. root root system_u:object_r:device_t:s0 nvidia0
crw-rw-rw-. root root system_u:object_r:device_t:s0 nvidiactl

And after running restorecon /dev/nvid*:

crw-rw-rw-. root root system_u:object_r:xserver_misc_device_t:s0 nvidia0
crw-rw-rw-. root root system_u:object_r:xserver_misc_device_t:s0 nvidiactl

Again, the problem is that the labels are reset to device_t at reboot.
Fortunately there are some solutions for this issue.

1. You could add "/dev/nvidia*" to /etc/selinux/restorecond.conf and run the restorecond service:

# yum install policycoreutils-restorecond
# chkconfig --level 2345 restorecond on
# service restorecond start

2. If you can see in the scripts where the nvidia* nodes are created, you could add a restorecon right afterwards.

3. Or you can add a local policy module using:

# grep gnome-session /var/log/audit/audit.log | audit2allow -M nvidiaisbroken
# semodule -i nvidiaisbroken.pp
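To spell out option 1 a little: the restorecond.conf change is just one extra line telling restorecond which paths to watch; a sketch (the stock contents of the file vary by release):

# echo "/dev/nvidia*" >> /etc/selinux/restorecond.conf
# service restorecond restart

restorecond will then relabel the nvidia nodes whenever they are recreated.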
Or run restorecon -R -v /dev/nvidia* in rc.local
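For completeness, a sketch of what that rc.local approach could look like on Fedora (assuming /etc/rc.d/rc.local exists and is executable):

#!/bin/sh
# /etc/rc.d/rc.local
# relabel the nvidia device nodes, which come up as device_t after boot
/sbin/restorecon -R -v /dev/nvidia*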
We should have a better fix for this in F16.
Thank you both.

@ Daniel Walsh
1. Thanks, but running restorecon in rc.local did not work, which is why I posted back.
2. Good to learn there will be a better solution; that is the point of posting bug reports, and again I appreciate your effort.

@ Miroslav Grepl
That is what I did, adding a local policy; it was easier. Also, when suggesting audit2allow, I think naming the local policy descriptively, in this case "nvidiaisbroken", makes more sense than a generic name such as local.policy or my.policy.
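For anyone hitting the same thing later, a rough illustration of the kind of module audit2allow -M nvidiaisbroken produces for these denials; this is only a sketch (the exact rules depend on what is in your audit.log), and the proper fix remains correct labeling rather than allowing user_t to write to generic device_t nodes:

module nvidiaisbroken 1.0;

require {
	type user_t;
	type device_t;
	class chr_file { read write open ioctl };
}

#============= user_t ==============
allow user_t device_t:chr_file { read write open ioctl };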