Description of problem:
My best guess is that this is related to vdsm, though I don't know this for certain.
Version-Release number of selected component (if applicable):
vdsm-4.9-112.12.el6_2.x86_64
How reproducible:
100%
Steps to Reproduce:
ls -Z /etc/mtab
-rw-r--r--. root root system_u:object_r:etc_runtime_t:s0 /etc/mtab
--
Put host into downtime
--
ls -Z /etc/mtab
-rw-r--r--. root root system_u:object_r:system_conf_t:s0 /etc/mtab
restorecon -Rv /etc/mtab
restorecon reset /etc/mtab context system_u:object_r:system_conf_t:s0->system_u:object_r:etc_runtime_t:s0
--
Take host out of downtime
--
ls -Z /etc/mtab
-rw-r--r--. root root system_u:object_r:system_conf_t:s0 /etc/mtab
restorecon -Rv /etc/mtab
restorecon reset /etc/mtab context system_u:object_r:system_conf_t:s0->system_u:object_r:etc_runtime_t:s0
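For reference, restorecon resets the file to whatever the file-context policy says the default is. A quick sanity check of that default (a sketch, assuming the matchpathcon utility from libselinux-utils is installed):

matchpathcon /etc/mtab

On this system it should print /etc/mtab with system_u:object_r:etc_runtime_t:s0, matching the label restorecon restores above.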
Actual results:
/etc/mtab's SELinux context is set incorrectly.
Expected results:
The /etc/mtab context remains correct.
Additional info:
The bug below concerns VMware, but this may be something similar, where RHEV is running something unconfined and not transitioning properly: https://bugzilla.redhat.com/show_bug.cgi?id=513881
Sorry, "Put host into downtime" / "Take host out of downtime" above should read "Put host into maintenance" / "Take host out of maintenance".
Comment 3 (RHEL Program Management, 2012-05-21 06:50:09 UTC)
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
Jason, could you attach vdsm.log (from the time the host comes back from maintenance)?
Do you have an NFS storage pool?
Dan, could it be that vdsm's policy somehow makes it change /etc/mtab's context (maybe while it mounts nfs/ext3 via sudo)?
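If that hypothesis is right, the relabeling would come from mount(8) recreating /etc/mtab: a newly created file is labeled according to the creating domain's type_transition rules, not the old file's label. A sketch of how to check for such a rule (assuming sesearch from setools is installed; /etc itself is labeled etc_t):

sesearch -T -s virtd_t -t etc_t -c file

If this lists a type_transition to system_conf_t, that would explain the label observed above.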
In response to Comment 4:
The only NFS storage is the ISO domain, and I also see the entry removed and added back to /etc/mtab when I put the host into maintenance and take it out again.
In response to Comment 5:
The output of ps -eZ | grep vdsm is as follows (with several more processes trailing, all with the same context):
system_u:system_r:virtd_t:s0-s0:c0.c1023 5364 ? 01:25:40 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12008 ? 00:01:37 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12009 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12010 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12011 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12014 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12016 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12018 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12020 ? 00:00:00 vdsm
system_u:system_r:virtd_t:s0-s0:c0.c1023 12022 ? 00:00:00 vdsm
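Since every vdsm process above runs as virtd_t, one way to confirm the suspected path would be to catch the mount child while the host is entering maintenance (timing-dependent, so this is only a sketch):

ps -eZ | grep -w mount

If the mount process also shows virtd_t rather than mount_t, the missing domain transition is the likely cause.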
(In reply to comment #8)
> Yes, this is a problem.
> Is there any way we could get vdsm to run restorecon /etc/mtab?
When should we run `restorecon /etc/mtab`? Right after mount/umount? Why? Isn't it racy and prone to accidents (say, vdsmd crashes before running restorecon)?
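For illustration, that workaround would amount to something like the following (a sketch only, with a hypothetical NFS export and mountpoint, not actual vdsm code):

mount -t nfs server:/export /mnt/iso && restorecon /etc/mtab

The race is the window between the two commands: if vdsmd dies after the mount but before the restorecon, /etc/mtab is left with the wrong label.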
(In reply to comment #10)
> Are you actually execing the mount command?
Yes, both mount and umount.
> If yes we could transition from
> virtd_t to mount_t, and this would solve the problem.
Great. I suppose you can take the bug to selinux-policy-targeted?
Could you try the following policy to make sure it works?
=================== myvirt.te ===================================
policy_module(myvirt, 1.0)
gen_require(`
	type virtd_t;
')
# Run mount in the mount_t domain.
mount_domtrans(virtd_t)
mount_signal(virtd_t)
=================================================================
Create myvirt.te with the above content.
# make -f /usr/share/selinux/devel/Makefile
# semodule -i myvirt.pp
Test and see if /etc/mtab has the correct label and we do not get other AVCs.
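One way to run that test (a sketch): load the module, cycle the host through maintenance, then check the label and scan the audit log for new denials:

ls -Z /etc/mtab
ausearch -m avc -ts recent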
Yes, this appears to make it work correctly; with the module in place, the file's context remains correct when putting the system into maintenance and taking it out again.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2013-0314.html