I did an install from the live CD, configuring software RAID1 on 2 virtual disks. During installation, two AVC denial notifications were shown. The live environment is in permissive mode and the install completes successfully, but the AVC denial notifications appear at the bottom of the screen.

The 2 AVC denials are:

SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md126.
SELinux is preventing /usr/sbin/mdadm from ioctl access on the blk_file /dev/md126.

Details about the AVC denials are attached to the bug.

System used:
VM with 2 15G virtual disks
Fedora 19 TC5 Desktop Live x86_64
Created attachment 762539 [details] ioctl access denial details
Created attachment 762541 [details] read access denial details
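(For anyone reproducing this in the live environment: the full denial records, like those attached here, can be pulled from the audit log with something along these lines. A sketch only; ausearch ships in the audit package.)

    # show all AVC records involving the mdadm command from today's log
    ausearch -m avc -c mdadm -ts today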
If you run ls -lZ /dev/md126, does it show as fixed_disk_device_t?
This is a race condition where the device is created and mdadm touches it before udev can fix the label. Commit 8171089b41052f26fdbbcc9c16b42aaa9c735572 will allow this access in git.
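Until the updated policy lands, a local module along these lines should grant the access in the meantime. This is a sketch only: the module name is mine, and the rule is inferred from the two reported denials (read and ioctl on a blk_file still carrying the generic device_t label, since the race hits before udev relabels it to fixed_disk_device_t).

    # mdadm_race.te -- interim local module; name is hypothetical
    module mdadm_race 1.0;

    require {
            type mdadm_t;
            type device_t;
            class blk_file { read ioctl };
    }

    # let mdadm touch not-yet-relabeled md block devices
    allow mdadm_t device_t:blk_file { read ioctl };

Build and install it with:

    checkmodule -M -m -o mdadm_race.mod mdadm_race.te
    semodule_package -o mdadm_race.pp -m mdadm_race.mod
    semodule -i mdadm_race.pp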
(In reply to Daniel Walsh from comment #3)
> If you run ls -lZ /dev/md126, does it show as fixed_disk_device_t?

Post-install, it shows up as:

brw-rw----. root disk system_u:object_r:fixed_disk_device_t:s0 /dev/md126

I'll run another install to see if it's the same while inside the live install environment.

I should have put this in the initial description, but the SELinux versions on the installed system are:

selinux-policy-targeted-3.12.1-52.fc19.noarch
selinux-policy-3.12.1-52.fc19.noarch
*** Bug 975643 has been marked as a duplicate of this bug. ***
Discussed (as the dupe 975643) at the 2013-06-19 freeze exception review meeting: http://meetbot.fedoraproject.org/fedora-blocker-review/2013-06-19/f19final-blocker-review-7.2013-06-19-16.01.log.txt . Accepted as a freeze exception issue: this is close to being a blocker under the 'no AVCs shown during install / first boot' criterion, but it only affects live installs to mdraid, so it's easier just to make it a freeze exception issue. We could reconsider blocker status if somehow the fix doesn't get in soon, but we certainly expect it to.
I believe it has been fixed.
Not fixed for me in an install based on TC6. I have LVM2 on mdraid, created after the initial install, and see an AVC denial on md127, and a subsequent failure to see the LVM volumes. If I do vgchange -ay post-boot, it all appears. The mdraid devices that existed prior to the install are all OK (and also have LVM).
That does not sound like the same bug, then. Can you post your exact AVC? Thanks.
(In reply to Adam Williamson from comment #10)

Here it is:

Additional Information:
Source Context                system_u:system_r:mdadm_t:s0-s0:c0.c1023
Target Context                system_u:object_r:device_t:s0
Target Objects                md127 [ blk_file ]
Source                        mdadm
Source Path                   /usr/sbin/mdadm
Port                          <Unknown>
Host                          big.home.wanacat.com
Source RPM Packages           mdadm-3.2.6-19.fc19.x86_64
Target RPM Packages
Policy RPM                    selinux-policy-3.12.1-54.fc19.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     big.home.wanacat.com
Platform                      Linux big.home.wanacat.com 3.9.8-300.fc19.x86_64
                              #1 SMP Thu Jun 27 19:24:23 UTC 2013 x86_64 x86_64
Alert Count                   1
First Seen                    2013-07-03 16:04:44 EST
Last Seen                     2013-07-03 16:04:44 EST
Local ID                      ac464e53-bf58-4c31-84c3-beca874bcb54

Raw Audit Messages
type=AVC msg=audit(1372831484.871:27): avc: denied { read } for pid=472 comm="mdadm" name="md127" dev="devtmpfs" ino=15420 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:device_t:s0 tclass=blk_file

type=SYSCALL msg=audit(1372831484.871:27): arch=x86_64 syscall=open success=no exit=EACCES a0=7fff0ee2bf0a a1=0 a2=1 a3=1 items=0 ppid=469 pid=472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm=mdadm exe=/usr/sbin/mdadm subj=system_u:system_r:mdadm_t:s0-s0:c0.c1023 key=(null)

Hash: mdadm,mdadm_t,device_t,blk_file,read
Also, not sure if this helps (the /dev/md/ directory entry is different for this device):

[darren@big ~]$ ls -l /dev/md*
brw-rw----. 1 root disk 9,   0 Jul  3 16:04 /dev/md0
brw-rw----. 1 root disk 9,   1 Jul  3 16:04 /dev/md1
brw-rw----. 1 root disk 9, 127 Jul  3 16:04 /dev/md127
brw-rw----. 1 root disk 9,   2 Jul  3 16:04 /dev/md2

/dev/md:
total 0
lrwxrwxrwx. 1 root root 6 Jul  3 16:04 0 -> ../md0
lrwxrwxrwx. 1 root root 6 Jul  3 16:04 1 -> ../md1
lrwxrwxrwx. 1 root root 6 Jul  3 16:04 2 -> ../md2
lrwxrwxrwx. 1 root root 8 Jul  3 16:04 localhost.localdomain:1 -> ../md127
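As an aside (separate from the SELinux issue itself): a localhost.localdomain:1 -> ../md127 entry like that usually means the array was auto-assembled without a matching line in /etc/mdadm.conf, which is why it landed on md127 instead of a stable number. If a stable name is wanted, the usual approach is something like the following (a sketch only; review what --scan emits before appending):

    # record the current arrays so they assemble under stable names at boot
    mdadm --detail --scan >> /etc/mdadm.conf
    # regenerate the initramfs so early boot sees the updated config
    dracut -f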
# cat /tmp/log | audit2allow

#============= mdadm_t ==============

#!!!! This avc is allowed in the current policy
allow mdadm_t device_t:blk_file read;

http://koji.fedoraproject.org/koji/buildinfo?buildID=430265

Could you open it as a new bug? Then I can switch it to Modify and do an update for it.
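For anyone needing a stopgap on an older policy build, audit2allow can also package the logged denials straight into a loadable module (a sketch; the module name mylocalmdadm is mine):

    # generate mylocalmdadm.te and mylocalmdadm.pp from the logged denials
    cat /tmp/log | audit2allow -M mylocalmdadm
    # install the local module
    semodule -i mylocalmdadm.pp

Note this only helps where the rule is not already allowed; on the current policy, as shown above, it would be a no-op.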
(In reply to Miroslav Grepl from comment #13)

OK, will do. Actually, it seems there is something else happening. Recreating the mdraid (as md3) has made the AVC go away, but I think there is some race, similar to the above. Roughly 50% of the time (at least), LVM fails to find ANY LVs, but the underpinning md devices appear to have been created, or are being created:

md0: detected capacity change from 0 to 2097139712

repeated for all mdX, which I think corresponds to the general availability of a configured md raid device. Then I see:

device-mapper: table: 253:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table

one set per potential LV. As before, post-login, vgchange -ay makes it all appear.
selinux-policy-3.12.1-59.fc19 (as per bug 975649) did resolve this for me, but I needed to disable lvmetad, run pvscan --cache, and re-enable lvmetad before it had the desired effect.
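For anyone else landing here, a sequence along these lines should refresh lvmetad's view of the devices (a sketch, not verified against the exact steps used above; service name per Fedora 19's lvm2 packaging):

    # restart the metadata caching daemon to drop stale state
    systemctl restart lvm2-lvmetad.service
    # repopulate lvmetad's cache from the devices now present
    pvscan --cache
    # activate anything that was missed at boot
    vgchange -ay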