Bug 821887

Summary: RHEV Hypervisors are setting the SELinux context on /etc/mtab improperly.

Product: Red Hat Enterprise Linux 6
Component: selinux-policy
Version: 6.2
Hardware: Unspecified
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Jason Montleon <jmontleo>
Assignee: Miroslav Grepl <mgrepl>
QA Contact: Michal Trunecka <mtruneck>
CC: abaron, bazulay, danken, dwalsh, ebenes, iheim, mmahut, mmalik, mtruneck, ykaul
Target Milestone: rc
Whiteboard: storage infra
Type: Bug
Doc Type: Bug Fix
Fixed In Version: selinux-policy-3.7.19-156.el6
Cloned As: 875801
Bug Blocks: 782183, 840699, 875801
Last Closed: 2013-02-21 08:35:26 UTC

Description Jason Montleon 2012-05-15 16:43:46 UTC
Description of problem:
Putting a RHEV host into maintenance and taking it out again changes the SELinux context on /etc/mtab from etc_runtime_t to system_conf_t. My best guess is that this is related to vdsm, though I don't know this for certain.

Version-Release number of selected component (if applicable):
vdsm-4.9-112.12.el6_2.x86_64

How reproducible:
100%

Steps to Reproduce:
ls -Z /etc/mtab
-rw-r--r--. root root system_u:object_r:etc_runtime_t:s0 /etc/mtab
     
--
Put host into downtime
--
     
ls -Z /etc/mtab
-rw-r--r--. root root system_u:object_r:system_conf_t:s0 /etc/mtab
restorecon -Rv /etc/mtab
restorecon reset /etc/mtab context system_u:object_r:system_conf_t:s0->system_u:object_r:etc_runtime_t:s0
     
--
Take host out of downtime
--
     
ls -Z /etc/mtab
-rw-r--r--. root root system_u:object_r:system_conf_t:s0 /etc/mtab
restorecon -Rv /etc/mtab
restorecon reset /etc/mtab context system_u:object_r:system_conf_t:s0->system_u:object_r:etc_runtime_t:s0 
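A quick way to confirm the mislabel in one step is to compare the file's current label with the one the policy expects (a small sketch; matchpathcon ships in the policycoreutils package):

ls -Z /etc/mtab
matchpathcon /etc/mtab

If the two disagree (system_conf_t versus the expected etc_runtime_t here), the file is mislabeled.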
  
Actual results:
The SELinux context on /etc/mtab is set incorrectly.

Expected results:
The context remains correct.

Additional info:
The following bug is about VMware, but this may be something similar, where RHEV is running something unconfined and not transitioning properly: https://bugzilla.redhat.com/show_bug.cgi?id=513881

Comment 1 Jason Montleon 2012-05-15 16:58:35 UTC
Sorry, "Put host into downtime" / "Take host out of downtime" should read "Put host into maintenance" / "Take host out of maintenance".

Comment 3 RHEL Program Management 2012-05-21 06:50:09 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 4 Dan Kenigsberg 2012-05-21 12:13:52 UTC
Jason, could you attach vdsm.log (from the time the host comes back from maintenance)?
Do you have an NFS storage pool?

Dan, could it be that vdsm's policy somehow makes it change /etc/mtab's context (maybe while it mounts nfs/ext3 via sudo)?

Comment 5 Daniel Walsh 2012-05-21 13:42:50 UTC
What context is vdsm running as?

ps -eZ | grep vdsm

Comment 6 Jason Montleon 2012-05-21 15:16:36 UTC
In response to Comment 4:
The only NFS storage is the ISO domain, and I also see the entry removed and added back to /etc/mtab when I put the host into maintenance and take it out again.

In response to Comment 5:
The output of ps -eZ | grep vdsm is as follows (with several more processes trailing, all with the same context)

system_u:system_r:virtd_t:s0-s0:c0.c1023 5364 ? 01:25:40 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12008 ? 00:01:37 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12009 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12010 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12011 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12014 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12016 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12018 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12020 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12022 ? 00:00:00 vdsm

Comment 8 Daniel Walsh 2012-05-29 19:10:55 UTC
Yes, this is a problem. Is there any way we could get vdsm to run restorecon on /etc/mtab?

Comment 9 Dan Kenigsberg 2012-05-29 21:47:05 UTC
(In reply to comment #8)
> Yes, this is a problem. Is there any way we could get vdsm to run
> restorecon on /etc/mtab?

When should we run `restorecon /etc/mtab`? Right after mount/umount? Why? Isn't it racy and prone to accidents (say, vdsmd crashes before it gets to run restorecon)?
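For illustration, the kind of wrapper being discussed might look like this (a hypothetical sketch, not actual vdsm code); the gap between mount returning and restorecon running is exactly the window described above:

#!/bin/sh
# Hypothetical mount wrapper, sketched only to show the race:
# if the caller dies after mount(8) returns but before restorecon
# runs, /etc/mtab stays mislabeled.
/bin/mount "$@"
rc=$?
/sbin/restorecon /etc/mtab
exit $rc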

Comment 10 Daniel Walsh 2012-05-30 12:51:13 UTC
Are you actually execing the mount command? If so, we could transition from virtd_t to mount_t, and this would solve the problem.

Comment 11 Dan Kenigsberg 2012-05-30 13:11:29 UTC
(In reply to comment #10)
> Are you actually execing the mount command? 

Yes. mount and umount.

> If so, we could transition from
> virtd_t to mount_t, and this would solve the problem.

Great. I suppose you can take the bug to selinux-policy-targeted?
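Whether the policy already carries such a transition can be checked with sesearch from the setools console utilities (a verification sketch; on the affected 6.2 policy this query is expected to return nothing, which is why mount keeps running in virtd_t):

# sesearch -T -s virtd_t -t mount_exec_t -c process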

Comment 12 Daniel Walsh 2012-05-30 13:33:45 UTC
Could you try the following policy module to make sure it works?

=================== myvirt.te ===================================
policy_module(myvirt, 1.0)

gen_require(`
	type virtd_t;
')

# Run mount in the mount_t domain when virtd_t execs it.
mount_domtrans(virtd_t)
# Allow virtd_t to signal processes running in the mount_t domain.
mount_signal(virtd_t)
=================================================================

Create myvirt.te with the above content, then build and load the module:

# make -f /usr/share/selinux/devel/Makefile
# semodule -i myvirt.pp

Test and see whether /etc/mtab gets the correct label and we do not hit other AVC denials.
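For example, one way to verify (a sketch; ausearch assumes auditd is running) is to confirm the module is loaded, repeat the maintenance cycle, and look for fresh denials:

# semodule -l | grep myvirt
# ls -Z /etc/mtab
# ausearch -m avc -ts recent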

Comment 13 Jason Montleon 2012-05-31 14:31:50 UTC
Yes, this appears to work correctly: with the module in place, the file's context remains correct when putting the system into maintenance and taking it out again.

Comment 14 Marek Mahut 2012-07-09 13:16:36 UTC
Thank you, Dan. When can we expect this in a new selinux-policy build? Marek

Comment 15 Miroslav Grepl 2012-07-10 05:55:01 UTC
I am going to do a new RHEL 6.4 build in the coming days.

Comment 16 Miroslav Grepl 2012-07-17 08:46:45 UTC
Fixed in selinux-policy-3.7.19-156.el6
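Once that build lands, a quick check that a host carries the fix (a sketch; the expected version comes from the Fixed In Version field above):

# rpm -q selinux-policy

This should report selinux-policy-3.7.19-156.el6 or later.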

Comment 20 errata-xmlrpc 2013-02-21 08:35:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0314.html