Bug 975649 - Intel firmware RAID-1 set shows as read-only on live boot (RAID-0 set does not)
Summary: Intel firmware RAID-1 set shows as read-only on live boot (RAID-0 set does not)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: selinux-policy
Version: 19
Hardware: x86_64
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Miroslav Grepl
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: https://fedoraproject.org/wiki/Common...
Depends On:
Blocks: F19-accepted, F19FinalFreezeException 983141
 
Reported: 2013-06-19 03:31 UTC by Adam Williamson
Modified: 2013-07-14 03:38 UTC
CC List: 18 users

Fixed In Version: selinux-policy-3.12.1-63.fc19
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 983141
Environment:
Last Closed: 2013-07-14 03:38:31 UTC
Type: Bug
Embargoed:


Attachments
anaconda.log (5.71 KB, text/plain), 2013-06-19 03:31 UTC, Adam Williamson
program.log (17.16 KB, text/plain), 2013-06-19 03:31 UTC, Adam Williamson
/tmp/storage.log (97.17 KB, text/plain), 2013-06-19 03:32 UTC, Adam Williamson
/var/log/messages (163.29 KB, text/plain), 2013-06-19 03:33 UTC, Adam Williamson
journalctl -a output (281.10 KB, text/plain), 2013-06-19 04:16 UTC, Adam Williamson
sealert output (4.68 KB, text/plain), 2013-07-03 21:06 UTC, Jason Farrell

Description Adam Williamson 2013-06-19 03:31:04 UTC
This is odd. If I boot the F19 Final TC5 desktop live - apparently it doesn't matter how; I've tried USB sticks written with both litd and dd, and a physical DVD - and run the installer, it fails to see an Intel firmware RAID-1 set. However, I was able to install to a RAID-0 set just fine. blivet claims it's a read-only device.

If I boot the non-live installer (used a physical DVD), it sees the set fine.

Attaching logs. Proposing as a final FE - enough paths work here that it's probably not a blocker, but we should fix it if possible (and if it's not some kind of weird system-specific bug).

Comment 1 Adam Williamson 2013-06-19 03:31:31 UTC
Created attachment 762689 [details]
anaconda.log

Comment 2 Adam Williamson 2013-06-19 03:31:56 UTC
Created attachment 762690 [details]
program.log

Comment 3 Adam Williamson 2013-06-19 03:32:24 UTC
Created attachment 762691 [details]
/tmp/storage.log

Comment 4 Adam Williamson 2013-06-19 03:33:50 UTC
Created attachment 762692 [details]
/var/log/messages

Comment 5 Adam Williamson 2013-06-19 04:14:57 UTC
Output from mdadm --detail /dev/md126 :

[root@localhost liveuser]# mdadm --detail /dev/md126 
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid1
     Array Size : 488383488 (465.76 GiB 500.10 GB)
  Used Dev Size : 488383620 (465.76 GiB 500.10 GB)
   Raid Devices : 2
  Total Devices : 2

          State : clean, resyncing (PENDING) 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0


           UUID : 0f6abe83:5f73df92:d7dbe9cd:cb8b946b
    Number   Major   Minor   RaidDevice State
       1       8        0        0      active sync   /dev/sda
       0       8       16        1      active sync   /dev/sdb

gnome-disks also sees the set as 'read-only', so this doesn't look like an anaconda issue. Let's try mdadm.

Comment 6 Adam Williamson 2013-06-19 04:16:54 UTC
Created attachment 762696 [details]
journalctl -a output

Comment 7 Jes Sorensen 2013-06-19 08:14:39 UTC
Adam,

I need the following info please:
/proc/mdstat
ps -aux | grep dmon
/usr/lib/systemd/system/mdmon@.service - present?
type -a mdmon

Thanks,
Jes

Comment 8 Tim Flink 2013-06-22 00:58:59 UTC
I was able to reproduce the same behavior with F19 Final TC6 Desktop Live x86_64 on a machine with a 2-disk Intel FW RAID1 array.

(In reply to Jes Sorensen from comment #7)
> I need the following info please:
> /proc/mdstat

[root@core2 ~]# cat /proc/mdstat 
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md127 : inactive sda[0](S)
      2931 blocks super external:imsm
       
unused devices: <none>

> ps -aux | grep dmon

[root@core2 ~]# ps -aux | grep dmon
root      2615  0.0  0.0 112636   960 pts/1    S+   20:56   0:00 grep --color=auto dmon


> /usr/lib/systemd/system/mdmon@.service - present?

[root@core2 ~]# ll /lib/systemd/system/mdmon*
-rw-r--r--. 1 root root 330 Apr 24 04:42 /lib/systemd/system/mdmonitor.service
-rw-r--r--. 1 root root 506 Apr 24 04:42 /lib/systemd/system/mdmon@.service


> type -a mdmon

[root@core2 ~]# type -a mdmon
mdmon is /sbin/mdmon
mdmon is /usr/sbin/mdmon

Comment 9 Jes Sorensen 2013-06-26 07:01:32 UTC
Ok, so something is killing mdmon during the startup of the livecd.

We went through the painful hoops of having mdmon launched via systemd with
the last release, and all the pieces are in place - presuming that the
/lib/systemd/... files you list also match with /usr/lib/systemd/.... ?

We need to find out why mdmon is getting killed on the livecd but not on
normal startup.

Jes

Comment 10 Jason Farrell 2013-06-27 17:47:33 UTC
Same read-only partially-assembled Intel fakeraid array bug here too, after installing F19 RC2. Worked fine in F18.

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md126 : active (read-only) raid1 sdb[1] sdc[0]
      1953511424 blocks super external:/md127/0 [2/2] [UU]
      
md127 : inactive sdc[1](S) sdb[0](S)
      6056 blocks super external:imsm

[root@localhost ~]# ps aux|grep dmon
root      2884  0.0  0.0  15016 10916 ?        SLsl 12:22   0:00 @sbin/mdmon --foreground md127
[root@localhost ~]# ll /usr/lib/systemd/system/mdmon*
-rw-r--r--. 1 root root 330 Apr 24 04:42 /usr/lib/systemd/system/mdmonitor.service
-rw-r--r--. 1 root root 506 Apr 24 04:42 /usr/lib/systemd/system/mdmon@.service
[root@localhost ~]# type -a mdmon
mdmon is /usr/sbin/mdmon


[root@localhost ~]# ll /dev/md*
brw-rw----. 1 root disk   9, 126 Jun 27 12:22 /dev/md126
brw-rw----. 1 root disk 259,   0 Jun 27 12:22 /dev/md126p1
brw-rw----. 1 root disk 259,   1 Jun 27 12:22 /dev/md126p2
brw-rw----. 1 root disk   9, 127 Jun 27 12:15 /dev/md127

/dev/md:
total 0
lrwxrwxrwx. 1 root root  8 Jun 27 12:15 imsm0 -> ../md127
lrwxrwxrwx. 1 root root  8 Jun 27 12:22 SeaMirror_0 -> ../md126
lrwxrwxrwx. 1 root root 10 Jun 27 12:22 SeaMirror_0p1 -> ../md126p1

Comment 11 Jason Farrell 2013-06-27 17:50:58 UTC
I am able to manually re-assemble and make writable my fakeraid post-boot, but it'd be nice if it JustWorked like in F18.

## before
[root@localhost ~]# pvs;vgs;lvs
  PV         VG      Fmt  Attr PSize   PFree 
  /dev/sda2  vg_ivy  lvm2 a--  178.38g     0 
  /dev/sdc2  vg_raid lvm2 a--    1.36t 53.01g
  VG      #PV #LV #SN Attr   VSize   VFree 
  vg_ivy    1   3   0 wz--n- 178.38g     0 
  vg_raid   1   3   0 wz--n-   1.36t 53.01g
  LV      VG      Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert
  home    vg_ivy  -wi-ao--- 154.38g                                           
  rootf18 vg_ivy  -wi-ao---  16.00g                                           
  rootf19 vg_ivy  -wi-ao---   8.00g                                           
  butter  vg_raid -wi-a----  64.00g                                           
  data    vg_raid -wi-a----   1.00t                                           
  vm      vg_raid -wi-a---- 256.00g


## short post-boot script workaround:
# deactivate the VG that came up on a bare mirror member
vgchange -an vg_raid
# stop the partially-assembled array and re-assemble from scan
mdadm --stop /dev/md126
mdadm -As
# re-activate the VGs (now on top of the md device) and remount
vgchange -ay
mount /mnt/raid/data
mount /mnt/raid/vm

## after
[root@localhost ~]# pvs
  PV           VG      Fmt  Attr PSize   PFree 
  /dev/md126p2 vg_raid lvm2 a--    1.36t 53.01g
  /dev/sda2    vg_ivy  lvm2 a--  178.38g     0 
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdb[1] sdc[0]
      1953511424 blocks super external:/md127/0 [2/2] [UU]
      
md127 : inactive sdc[1](S) sdb[0](S)
      6056 blocks super external:imsm
       
unused devices: <none>
[root@localhost ~]# mount|grep raid
/dev/mapper/vg_raid-data on /mnt/raid/data type ext4 (rw,relatime,seclabel,data=ordered)
/dev/mapper/vg_raid-vm on /mnt/raid/vm type ext4 (rw,relatime,seclabel,data=ordered)

Comment 12 Jes Sorensen 2013-06-27 18:13:22 UTC
Jason,

This is post install? The previous reports in this bug were against the
LiveCD image, not post install.

If it fails post install, I really want to know about it.

Nothing has changed in mdadm between F18 and F19 (at least not that I am
aware of), so it's likely caused by changes in the surroundings :(

In addition, is your raid used for / or only for data storage?

Jes

Comment 13 Adam Williamson 2013-06-27 18:22:36 UTC
Tim: I guess we should check post-DVD-install for our test boxes.

Comment 14 Tim Flink 2013-06-27 18:40:59 UTC
(In reply to Adam Williamson from comment #13)
> Tim: I guess we should check post-DVD-install for our test boxes.

Works fine for me on an Intel RAID1 box post install for F19 RC2. I'm able to write to the disks without issue, but this install is with standard partitions.

I'll try with LVM on RAID1 once my RC3 DVDs finish downloading.

Comment 15 Jason Farrell 2013-06-27 19:09:09 UTC
(In reply to Jes Sorensen from comment #12)
> This is post install? The previous reports in this bug were against the
> LiveCD image image, not post install.

post-install, yes, but I just booted up the liveusb and the problem (and post-boot workaround) exists there too:

[liveuser@localhost ~]$ cat /proc/mdstat
Personalities : 
md127 : inactive sdb[0](S)
      3028 blocks super external:imsm
       
unused devices: <none>
[liveuser@localhost ~]$ ll /dev/md*
brw-rw----. 1 root disk 9, 127 Jun 27 10:51 /dev/md127

/dev/md:
total 0
lrwxrwxrwx. 1 root root 8 Jun 27 10:51 imsm0 -> ../md127
[liveuser@localhost ~]$ sudo pvs
  PV         VG      Fmt  Attr PSize   PFree 
  /dev/sda2  vg_ivy  lvm2 a--  178.38g     0 
  /dev/sdc2  vg_raid lvm2 a--    1.36t 53.01g


> In addition, is your raid used for / or only for data storage?

Only for bulk storage.

Comment 16 Adam Williamson 2013-06-27 21:22:37 UTC
I don't see the problem after a minimal net install of Final RC3, though I suppose it'd be interesting to test it from a *live* install to some other disk, as in Jason's case.

Comment 17 Jason Farrell 2013-07-01 17:37:48 UTC
Some more info on my post-install (vs. live) imsm raid1 partial assembly problem:

Still isn't assembling during boot (after the last batch of updates + kernel / dracut), but during manual assembly I noticed an odd SELinux error applicable to the md126* members, but not to md127 (the imsm container):

### SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md126p2.

[root@ivy ~]# ll -Z /dev/md*
brw-rw----. root disk system_u:object_r:fixed_disk_device_t:s0 /dev/md126
brw-rw----. root disk system_u:object_r:fixed_disk_device_t:s0 /dev/md126p1
brw-rw----. root disk system_u:object_r:fixed_disk_device_t:s0 /dev/md126p2
brw-rw----. root disk system_u:object_r:fixed_disk_device_t:s0 /dev/md127

/dev/md:
lrwxrwxrwx. root root system_u:object_r:device_t:s0    imsm0 -> ../md127
lrwxrwxrwx. root root system_u:object_r:device_t:s0    SeaMirror_0 -> ../md126
lrwxrwxrwx. root root system_u:object_r:device_t:s0    SeaMirror_0p1 -> ../md126p1


Doesn't make much sense to me, and booting with "enforcing=0" (followed by a relabel) didn't help.

Comment 18 Jes Sorensen 2013-07-03 08:36:16 UTC
I ran some testing on this on rawhide, which shows exactly the same problems.
In fact, this is not an mdadm/md-raid problem, but rather looks like a problem
with bizarre changes in selinux rules that make absolutely no sense to me.

I created an IMSM container and two raid volumes on it, one raid0, one raid1.
The raid0 comes up ok, as reported here, the raid1 does not. The problem is
that mdmon is not being launched correctly during boot, due to selinux policy.

selinux labels and rules are all black magic, so to be honest, I am not sure
what needs to be changed and where.

Looking at journalctl output, I see this from during the boot process:

Jul 03 10:05:08 noisybay.lan kernel: type=1400 audit(1372838708.211:10): avc:  denied  { execute } for  pid=333 comm="mdadm" name="systemctl" dev="sda3" ino=1840635 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1
Jul 03 10:05:08 noisybay.lan kernel: type=1400 audit(1372838708.237:11): avc:  denied  { execute_no_trans } for  pid=338 comm="mdadm" path="/usr/sbin/mdmon" dev="sda3" ino=1844075 scontext=system_u:system_r:mdad
Jul 03 10:05:08 noisybay.lan kernel: type=1400 audit(1372838708.262:12): avc:  denied  { execute_no_trans } for  pid=338 comm="mdadm" path="/usr/sbin/mdmon" dev="sda3" ino=1844075 scontext=system_u:system_r:mdad

If I run 'systemctl start mdmon' post boot, mdmon is being
launched, but I still get a bunch of errors in the log:

Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: dbus avc(node=noisybay.lan type=AVC msg=audit(1372839351.583:456): avc:  denied  { read } for  pid=425 comm="mdadm" name="md127" dev="devtmpfs" ino=21540 sconte
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.feed() got node=noisybay.lan type=AVC msg=audit(1372839351.583:456): avc:  denied  { read } for  pid=425 comm="mdadm" name="md127" dev="devt
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.add_record_to_cache(): node=noisybay.lan type=AVC msg=audit(1372839351.583:456): avc:  denied  { read } for  pid=425 comm="mdadm" name="md12
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.feed() got node=noisybay.lan type=SYSCALL msg=audit(1372839351.583:456): arch=c000003e syscall=2 success=no exit=-13 a0=b49800 a1=0 a2=0 a3=
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.add_record_to_cache(): node=noisybay.lan type=SYSCALL msg=audit(1372839351.583:456): arch=c000003e syscall=2 success=no exit=-13 a0=b49800 a
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.feed() got node=noisybay.lan type=EOE msg=audit(1372839351.583:456):
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.add_record_to_cache(): node=noisybay.lan type=EOE msg=audit(1372839351.583:456):
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: analyze_avc() avc=scontext=system_u:system_r:mdadm_t:s0 tcontext=system_u:object_r:device_t:s0 access=['read'] tclass=blk_file tpath=md127
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: dbus avc(node=noisybay.lan type=AVC msg=audit(1372839351.585:457): avc:  denied  { read } for  pid=1377 comm="mdadm" name="md127" dev="devtmpfs" ino=21540 scont
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.feed() got node=noisybay.lan type=AVC msg=audit(1372839351.585:457): avc:  denied  { read } for  pid=1377 comm="mdadm" name="md127" dev="dev
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.add_record_to_cache(): node=noisybay.lan type=AVC msg=audit(1372839351.585:457): avc:  denied  { read } for  pid=1377 comm="mdadm" name="md1
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.feed() got node=noisybay.lan type=SYSCALL msg=audit(1372839351.585:457): arch=c000003e syscall=2 success=no exit=-13 a0=7fff23ad3f1c a1=0 a2
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.add_record_to_cache(): node=noisybay.lan type=SYSCALL msg=audit(1372839351.585:457): arch=c000003e syscall=2 success=no exit=-13 a0=7fff23ad
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.feed() got node=noisybay.lan type=EOE msg=audit(1372839351.585:457):
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: AuditRecordReceiver.add_record_to_cache(): node=noisybay.lan type=EOE msg=audit(1372839351.585:457):
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: lookup_signature: found 1 matches with scores 1.00
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: signature found in database
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: sending alert to all clients
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md127. For complete SELinux messages. run sealert -l dbd1a444-e2c6-4ad0-b125-3929950ab656

and

Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: analyze_avc() avc=scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:device_t:s0 access=['read'] tclass=blk_file tpath=md127
Jul 03 10:15:52 noisybay.lan python[1381]: [120B blob data]
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: lookup_signature: found 1 matches with scores 1.00
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: signature found in database
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: sending alert to all clients
Jul 03 10:15:52 noisybay.lan setroubleshoot[1381]: SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md127. For complete SELinux messages. run sealert -l d74a120e-7e43-4758-9877-1c844b526502

and a pile more.

The output from the first sealert is this:

SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md127.

*****  Plugin device (91.4 confidence) suggests  *****************************

If you want to allow mdadm to have read access on the md127 blk_file
Then you need to change the label on md127 to a type of a similar device.
Do
# semanage fcontext -a -t SIMILAR_TYPE 'md127'
# restorecon -v 'md127'

*****  Plugin catchall (9.59 confidence) suggests  ***************************

If you believe that mdadm should be allowed read access on the md127 blk_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep mdadm /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp


Additional Information:
Source Context                system_u:system_r:mdadm_t:s0
Target Context                system_u:object_r:device_t:s0
Target Objects                md127 [ blk_file ]
Source                        mdadm
Source Path                   /usr/sbin/mdadm
Port                          <Unknown>
Host                          noisybay.lan
Source RPM Packages           mdadm-3.2.6-19.fc20.x86_64
Target RPM Packages           
Policy RPM                    selinux-policy-3.12.1-56.fc20.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     noisybay.lan
Platform                      Linux noisybay.lan 3.10.0-0.rc7.git0.2.fc20.x86_64
                              #1 SMP Tue Jun 25 11:53:19 UTC 2013 x86_64 x86_64
Alert Count                   14
First Seen                    2013-07-02 18:26:48 CEST
Last Seen                     2013-07-03 10:15:51 CEST
Local ID                      dbd1a444-e2c6-4ad0-b125-3929950ab656

Raw Audit Messages
type=AVC msg=audit(1372839351.583:456): avc:  denied  { read } for  pid=425 comm="mdadm" name="md127" dev="devtmpfs" ino=21540 scontext=system_u:system_r:mdadm_t:s0 tcontext=system_u:object_r:device_t:s0 tclass=blk_file


type=SYSCALL msg=audit(1372839351.583:456): arch=x86_64 syscall=open success=no exit=EACCES a0=b49800 a1=0 a2=0 a3=22 items=0 ppid=1 pid=425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=4294967295 tty=(none) comm=mdadm exe=/usr/sbin/mdadm subj=system_u:system_r:mdadm_t:s0 key=(null)

Hash: mdadm,mdadm_t,device_t,blk_file,read



This is clearly not an mdadm problem, but a problem with SELinux; reassigning.

Comment 19 Michael S. 2013-07-03 08:50:22 UTC
I think the issue is caused by the fact that /dev/md127 is now a blk_file, and not a file. So

$ sesearch -s mdadm_t -t device_t -c file  --allow     
Found 1 semantic av rules:
   allow mdadm_t device_t : file { ioctl read getattr lock open } ; 

Allowed

$ sesearch -s mdadm_t -t device_t -c blk_file  --allow 
$

Not allowed

I guess we need to add something like storage_manage_fixed_disk(mdadm_t) or similar? Or storage_raw_rw_fixed_disk(mdadm_t)?
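
For illustration only, a minimal local policy module along the lines suggested above might look like this (a sketch, not a tested fix; the module name is arbitrary, and storage_raw_rw_fixed_disk() is the interface named above):

# cat mdadm_blkfile.te
policy_module(mdadm_blkfile, 1.0)

require{
 type mdadm_t;
}

# grant mdadm_t raw read/write access to fixed disk block devices
storage_raw_rw_fixed_disk(mdadm_t)

# make -f /usr/share/selinux/devel/Makefile mdadm_blkfile.pp
# semodule -i mdadm_blkfile.pp

Note that the denial above is against device_t rather than fixed_disk_device_t, so this only helps if /dev/md127 also ends up with the fixed_disk_device_t label (e.g. via the relabel suggested by sealert).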

Comment 20 Miroslav Grepl 2013-07-03 11:36:05 UTC
We have fixes.

http://koji.fedoraproject.org/koji/buildinfo?buildID=430265

going to do a new update today.

Comment 21 Adam Williamson 2013-07-03 15:48:46 UTC
mgrepl: bah, if we'd worked out earlier that this was the same(?) selinux policy problem as that other RAID bug we could've fixed it for Final :( foo.

Comment 22 Fedora Update System 2013-07-03 19:49:17 UTC
selinux-policy-3.12.1-59.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/selinux-policy-3.12.1-59.fc19

Comment 23 Jason Farrell 2013-07-03 21:04:11 UTC
The above selinux update may have fixed the Intel raid1 problem for live, but it doesn't appear to be applicable to my post-install problem: it persists after installing it (+targeted) and rebooting, and after throwing in a relabel for good measure and rebooting again.

== I no longer get these (old) SELinux errors at boot with the new policy:
Jul  1 13:28:49 ivy setroubleshoot: SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md126. For complete SELinux messages. run sealert -l 375ed3d3-027c-4668-91c2-32f228fdce80
Jul  1 13:28:49 ivy setroubleshoot: SELinux is preventing /usr/sbin/mdadm from read access on the blk_file md126p2. For complete SELinux messages. run sealert -l b0764ba0-8b65-4e17-813b-e81a7cbb0764

== But I still do get these:
Jul  3 16:23:21 ivy setroubleshoot: SELinux is preventing /usr/sbin/mdadm from execute access on the file /usr/bin/systemctl. For complete SELinux messages. run sealert -l 561ade12-5b1f-49e2-bf04-7c4fedabe758
Jul  3 16:23:21 ivy setroubleshoot: SELinux is preventing /usr/sbin/mdadm from execute_no_trans access on the file /usr/sbin/mdmon. For complete SELinux messages. run sealert -l dec6599a-955b-4a1b-9437-c3cecf1ef041

Comment 24 Jason Farrell 2013-07-03 21:06:47 UTC
Created attachment 768483 [details]
sealert output

Comment 25 Michael S. 2013-07-03 22:13:17 UTC
Indeed, that's blocked:
$ sesearch -s mdadm_t -t systemd_systemctl_exec_t  -c file  --allow 
$


However, the 2nd one is weird; it should be allowed:

$ sesearch -s mdadm_t -t mdadm_exec_t  -c file  --allow             
Found 1 semantic av rules:
   allow mdadm_t mdadm_exec_t : file { ioctl read getattr lock execute entrypoint open } ; 
$

Comment 26 Miroslav Grepl 2013-07-04 06:23:06 UTC
(In reply to Adam Williamson from comment #21)
> mgrepl: bah, if we'd worked out earlier that this was the same(?) selinux
> policy problem as that other RAID bug we could've fixed it for Final :( foo.

Basically we keep seeing new issues related to this bug :( And the first issue was different, as was the fix.

Comment 27 Jes Sorensen 2013-07-04 11:06:04 UTC
Would you mind pushing the fix into rawhide while you're at it? We have the
same problem in rawhide.

Thanks,
Jes

Comment 28 Fedora Update System 2013-07-05 02:12:58 UTC
Package selinux-policy-3.12.1-59.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing selinux-policy-3.12.1-59.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-12373/selinux-policy-3.12.1-59.fc19
then log in and leave karma (feedback).

Comment 29 Jes Sorensen 2013-07-05 07:49:20 UTC
No go, still doesn't solve the problems - mdmon still doesn't get launched
during boot :(

@Michael Scherer, not sure how it was a surprise in comment #19 that
/dev/md<X> is now a blk file. /dev/md<X> has always been a device node.

[root@noisybay ~]# rpm -q selinux-policy selinux-policy-targeted
selinux-policy-3.12.1-59.fc19.noarch
selinux-policy-targeted-3.12.1-59.fc19.noarch

Jes

Jul 05 09:43:26 noisybay.lan kernel: md: bind<sdd>
Jul 05 09:43:26 noisybay.lan kernel: md: bind<sdc>
Jul 05 09:43:26 noisybay.lan kernel: md: raid1 personality registered for level 1
Jul 05 09:43:26 noisybay.lan kernel: md/raid1:md125: active with 2 out of 2 mirrors
Jul 05 09:43:26 noisybay.lan kernel: md125: detected capacity change from 0 to 107374182400
Jul 05 09:43:26 noisybay.lan kernel: type=1400 audit(1373010206.112:4): avc:  denied  { execute } for  pid=359 comm="mdadm" name="systemctl" dev="sdb3" ino=1971880 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:systemd_systemctl_exec_t:s0 tclass=file
Jul 05 09:43:26 noisybay.lan kernel:  md125:
Jul 05 09:43:26 noisybay.lan kernel: type=1400 audit(1373010206.140:5): avc:  denied  { execute } for  pid=359 comm="mdadm" name="systemctl" dev="sdb3" ino=1971880 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:systemd_systemctl_exec_t:s0 tclass=file
Jul 05 09:43:26 noisybay.lan kernel: type=1400 audit(1373010206.165:6): avc:  denied  { execute_no_trans } for  pid=362 comm="mdadm" path="/usr/sbin/mdmon" dev="sdb3" ino=1973336 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:mdadm_exec_t:s0 tclass=file
Jul 05 09:43:26 noisybay.lan kernel: type=1400 audit(1373010206.191:7): avc:  denied  { execute_no_trans } for  pid=362 comm="mdadm" path="/usr/sbin/mdmon" dev="sdb3" ino=1973336 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:mdadm_exec_t:s0 tclass=file

Comment 30 Darren Steven 2013-07-06 06:33:41 UTC
Hi, sorry for what may appear to be an off-bug comment, but I see a number of similar bugs as I try to solve my issue, and some common things that make sense in the context of my workaround.

I saw similar issues, but post install, and posted against #975495. I have mdraid though. I think there are some timing issues involved, triggering a suite of similar problems. My LVM VG was not being detected (mostly), and I occasionally (not always) see the avc above. My workaround was focused on lvm. I did the following:

- disabled lvmetad
- created a new service that has a short sleep, then does a vgchange -ay (a sketch of such a unit follows after this list). This triggers the correct rules, and is Before local-fs.target and after lvm2-monitor (a circular dependency is triggered, but it works by giving lvm a chance for mdraid and udev to finish properly, I think)
- I also installed the udisks package (a hint in another issue seemed worth trying)
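
A rough sketch of the kind of oneshot unit described above (illustrative only; the unit name, the 5-second sleep and the vg_raid volume group are assumptions, and DefaultDependencies=no is one way to avoid the dependency cycle mentioned):

# cat /etc/systemd/system/vgraid-activate.service
[Unit]
Description=Re-activate LVM VG on firmware RAID after md assembly (workaround)
DefaultDependencies=no
After=lvm2-monitor.service
Before=local-fs.target

[Service]
Type=oneshot
# give mdraid/udev a moment to finish assembling the array
ExecStartPre=/usr/bin/sleep 5
ExecStart=/usr/sbin/vgchange -ay vg_raid

[Install]
WantedBy=local-fs.target

# systemctl enable vgraid-activate.service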

My gut says there are a bunch of timing/race conditions that impact nested devices. Why is the live CD different from the installer? I bet the timing of events is different during startup, and the examination of the raid set happens just a little later, after things are done initialising.

Once again, sorry if this is off-target, but I thought these might be useful observations.

Comment 31 Jes Sorensen 2013-07-06 08:35:37 UTC
Darren,

This sounds very related. There may be multiple post-install issues; the
problem I referred to above is also post install, but it is clearly an
SELinux problem.

Once we get that sorted, we should look at whether any timing issues still
remain - I definitely won't rule out that there may be such issues too.

Jes

Comment 32 Fedora Update System 2013-07-07 01:32:10 UTC
selinux-policy-3.12.1-59.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 33 Jes Sorensen 2013-07-08 06:55:12 UTC
Reopening, as posted above, the -59 release does *not* resolve this problem.

Comment 34 Miroslav Grepl 2013-07-08 07:12:04 UTC
(In reply to Jes Sorensen from comment #33)
> Reopening, as posted above, the -59 release does *not* resolve this problem.

Yes, it does not. Because this bug is a part of the update which fixes the first issue.

Comment 35 Jes Sorensen 2013-07-08 07:17:55 UTC
(In reply to Miroslav Grepl from comment #34)
> (In reply to Jes Sorensen from comment #33)
> > Reopening, as posted above, the -59 release does *not* resolve this problem.
> 
> Yes, it does not. Because this bug is a part of the update which fixes the
> first issue.

Sorry I don't follow here "yes" or "not"? 

There was a large number of avc errors in the logs which critically hamper
the operation of mdadm and raids in Fedora 19. We really need them all resolved
asap; until then IMSM raids are unusable :(

Thanks,
Jes

Comment 36 Miroslav Grepl 2013-07-08 08:49:38 UTC
Jes,
could you please test the following local policy

# cat mypol.te
policy_module(mypol,1.0)

require{
 type mdadm_t;
 type mdadm_exec_t;
}

# allow mdadm_t to execute mdadm_exec_t binaries (e.g. /usr/sbin/mdmon) without a domain transition
can_exec(mdadm_t, mdadm_exec_t)
# allow mdadm_t to execute systemctl
systemd_exec_systemctl(mdadm_t)

and run

# make -f /usr/share/selinux/devel/Makefile mypol.pp
# semodule -i mypol.pp

to see if it finally works.

Comment 37 Jes Sorensen 2013-07-08 08:58:09 UTC
Miroslav,

What happens if I do this? Will it get installed permanently? I.e. it needs
to survive a reboot. In addition, how do I remove this modification afterwards
to get back to a 'standard' setting, when I want to upgrade to the fixed
package?

Second, I spot 'mdadm_t' in there - what about mdmon execution?

Thanks,
Jes

Comment 38 Miroslav Grepl 2013-07-08 09:12:50 UTC
You can remove it using

# semodule -r mypol
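
(A module loaded with semodule -i is installed into the policy store, so it persists across reboots until it is removed as above.)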


OK, and what does the following show:

# ps -eZ |grep mdmon

Comment 39 Jes Sorensen 2013-07-08 09:30:32 UTC
(In reply to Miroslav Grepl from comment #38)
> You can remove it using
> 
> # semodule -r mypol
> 
> 
> Ok and what does
> 
> # ps -eZ |grep mdmon

At what stage do you want this? mdmon fails to launch and exits straight
away during the boot process, so I cannot get ps output for it at that stage.
I can get ps output if I launch it manually post boot, but I am not sure that
is of any value.

Jes

Comment 40 Miroslav Grepl 2013-07-08 10:26:25 UTC
Then boot with permissive mode and run

# dmesg |grep avc

# ausearch -m avc,user_avc -ts recent

Did you test a local policy?

Comment 41 Jes Sorensen 2013-07-08 14:19:46 UTC
[    9.208904] type=1400 audit(1373293029.700:4): avc:  denied  { connectto } for  pid=294 comm="mdadm" path="/run/mdadm/md127.sock" scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tclass=unix_stream_socket
[    9.234353] type=1400 audit(1373293029.725:5): avc:  denied  { connectto } for  pid=294 comm="mdadm" path="/run/mdadm/md127.sock" scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tclass=unix_stream_socket

[root@noisybay ~]# ps -aux|grep dmon
root       1015  0.0  0.0   9044   548 pts/0    S+   16:18   0:00 grep --color=auto dmon
[root@noisybay ~]# 
[root@noisybay ~]# ausearch -m avc,user_avc -ts recent
----
time->Mon Jul  8 16:15:18 2013
type=USER_AVC msg=audit(1373292918.654:392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received policyload notice (seqno=2)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
[root@noisybay ~]# 

So still no mdmon getting launched, probably due to the denial of access to
/run/mdadm/<foo>

Jes

Comment 42 Daniel Walsh 2013-07-08 18:12:41 UTC
363c407ef8e77a85a8309ef710fbed2790ba3261 allows this in git.

Comment 43 Jes Sorensen 2013-07-08 19:17:29 UTC
Which git?

[jes@ultrasam linux-2.6]$ git show 363c407ef8e77a85a8309ef710fbed2790ba3261
fatal: bad object 363c407ef8e77a85a8309ef710fbed2790ba3261
[jes@ultrasam mdadm-nbrown]$ git show 363c407ef8e77a85a8309ef710fbed2790ba3261
fatal: bad object 363c407ef8e77a85a8309ef710fbed2790ba3261

?

Comment 44 Miroslav Grepl 2013-07-08 19:47:41 UTC
https://git.fedorahosted.org/git/selinux-policy.git

Comment 45 Darren Steven 2013-07-08 23:52:40 UTC
FWIW, this update has resolved my issue with lvm on mdraid, but not immediately. Something got borked while there was a problem, so I had to disable lvmetad in lvm.conf, run pvcreate --cache (as per the doc in lvm.conf) and re-enable it; on the next reboot it was perfect. I've done 5 or 6 test reboots, and each worked (it was hit and miss before).

Comment 46 Jes Sorensen 2013-07-09 11:53:23 UTC
Miroslav,

Is there anything you want me to test for this, or are you good with what is in
that git tree? I can run a test on an rpm if you wish.

Cheers,
Jes

Comment 47 Miroslav Grepl 2013-07-09 11:58:56 UTC
Yes, I did a new build with fixes but there is a problem. So I am rebuilding it and will let you know.

Comment 48 Miroslav Grepl 2013-07-09 13:37:45 UTC
A new build is done.

http://koji.fedoraproject.org/koji/buildinfo?buildID=432279

Comment 49 Jes Sorensen 2013-07-09 13:50:30 UTC
Not quite there yet :(

[root@noisybay ~]# rpm -q selinux-policy selinux-policy-targeted
selinux-policy-3.12.1-60.fc19.noarch
selinux-policy-targeted-3.12.1-60.fc19.noarch
[root@noisybay ~]# dmesg | grep avc
[    9.664325] type=1400 audit(1373377662.154:4): avc:  denied  { execute } for  pid=342 comm="mdadm" name="systemctl" dev="sdb3" ino=1971880 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:systemd_systemctl_exec_t:s0 tclass=file
[    9.664384] type=1400 audit(1373377662.154:5): avc:  denied  { execute } for  pid=342 comm="mdadm" name="systemctl" dev="sdb3" ino=1971880 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:systemd_systemctl_exec_t:s0 tclass=file
[    9.837188] type=1400 audit(1373377662.327:6): avc:  denied  { read } for  pid=297 comm="mdadm" name="kvm" dev="devtmpfs" ino=11711 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:kvm_device_t:s0 tclass=chr_file
[    9.866017] type=1400 audit(1373377662.356:7): avc:  denied  { read } for  pid=299 comm="mdadm" name="kvm" dev="devtmpfs" ino=11711 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:kvm_device_t:s0 tclass=chr_file
[    9.894978] type=1400 audit(1373377662.385:8): avc:  denied  { read } for  pid=296 comm="mdadm" name="kvm" dev="devtmpfs" ino=11711 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:kvm_device_t:s0 tclass=chr_file

Comment 50 Miroslav Grepl 2013-07-09 14:11:34 UTC
Going to build selinux-policy-targeted-3.12.1-61.fc19.noarch

Comment 51 Miroslav Grepl 2013-07-09 14:43:27 UTC
 selinux-policy-targeted-3.12.1-61.fc19.noarch is available.

Comment 52 Jason Farrell 2013-07-09 15:09:06 UTC
Getting closer - I no longer get any avc denials for md* with selinux-policy-targeted-3.12.1-61.fc19, and my imsm raid1 is assembled and active instead of (read-only). However, the LVM problem remains: I still have to manually vgchange off/on and start/stop mdadm in order for the raid PV (md126p2) to be used, instead of only a single raw raid mirror component (sd[bc]2).

##### at boot:
[root@ivy ~]# cat /proc/mdstat
Personalities : [raid1] 
md126 : active raid1 sdb[1] sdc[0]
      1953511424 blocks super external:/md127/0 [2/2] [UU]
      [>....................]  resync =  0.4% (9212928/1953511556) finish=263.3min speed=123028K/sec
      
md127 : inactive sdc[1](S) sdb[0](S)
      6056 blocks super external:imsm
       
unused devices: <none>
[root@ivy ~]# pvs
  PV         VG      Fmt  Attr PSize   PFree 
  /dev/sda2  vg_ivy  lvm2 a--  178.38g     0 
  /dev/sdc2  vg_raid lvm2 a--    1.36t 53.01g
[root@ivy ~]# grep md12 /var/log/messages
Jul  9 10:57:16 ivy kernel: [    2.150457] md/raid1:md126: not clean -- starting background reconstruction
Jul  9 10:57:16 ivy kernel: [    2.150458] md/raid1:md126: active with 2 out of 2 mirrors
Jul  9 10:57:16 ivy kernel: [    2.150476] md126: detected capacity change from 0 to 2000395698176
Jul  9 10:57:16 ivy kernel: [    2.151162]  md126: p1 p2
Jul  9 10:57:16 ivy kernel: [    2.210854] md: md126 switched to read-write mode.
Jul  9 10:57:16 ivy kernel: [    2.210918] md: resync of RAID array md126
Jul  9 10:57:33 ivy udisksd[1671]: Error creating watch for file /sys/devices/virtual/block/md127/md/sync_action: No such file or directory (g-file-error-quark, 4)
Jul  9 10:57:33 ivy udisksd[1671]: Error creating watch for file /sys/devices/virtual/block/md127/md/degraded: No such file or directory (g-file-error-quark, 4)

##### fix:
vgchange -an vg_raid
mdadm --stop /dev/md126
mdadm -As
vgchange -ay
mount /mnt/raid/data

##### after:
[root@ivy ~]# pvs
  PV           VG      Fmt  Attr PSize   PFree 
  /dev/md126p2 vg_raid lvm2 a--    1.36t 53.01g
  /dev/sda2    vg_ivy  lvm2 a--  178.38g     0

Comment 53 Miroslav Grepl 2013-07-09 15:11:29 UTC
And does it work in permissive mode? Or is it no longer an SELinux issue?

Comment 54 Jason Farrell 2013-07-09 15:15:48 UTC
(In reply to Miroslav Grepl from comment #53)
> And does it work in permissive mode? Or it is SELinux issue no longer.

I tried booting with "enforcing=0" a while back with a similar result, so it appears the SELinux component of this bug is fixed. As previous commenters have noted, there might be a race condition somewhere at boot for mdadm+lvm, for which a new bug should probably be opened(?).

Comment 55 Jes Sorensen 2013-07-09 16:11:57 UTC
I am seeing something similar to Jason - no more avc errors in dmesg, but mdmon is
still not being launched, and the raid1 array stays read-only. If I manually
launch mdmon using systemctl as root, post boot, it comes up correctly.

Note that mdmon is launched by mdadm during RAID assembly, by calling systemctl,
so I am running the exact same command, just from the command line.
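
For reference, the manual invocation that works post boot is the same systemctl call mdadm makes during assembly, something along the lines of (assuming the IMSM container is md127, as in the earlier output):

# systemctl start mdmon@md127.service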

If I boot with enforcing=0 it comes up correctly, which was also the case with
the previous versions of selinux-policy, including the version shipping with F19.

Jes

Comment 56 Jes Sorensen 2013-07-10 10:56:40 UTC
I saw there was a -62 build in the system, so I tried that out too.
Not sure if it was meant to address this issue or not, but it provided the
same negative result :(

Jes

Comment 57 Miroslav Grepl 2013-07-10 11:04:55 UTC
#============= mdadm_t ==============

#!!!! This avc is allowed in the current policy
allow mdadm_t kvm_device_t:chr_file read;

#!!!! This avc is allowed in the current policy
allow mdadm_t systemd_systemctl_exec_t:file execute

Try to execute

# semodule -DB

then boot with permissive mode and run

# dmesg |grep avc
# ausearch -m avc,user_avc -ts recent
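
For context, the -D flag rebuilds the policy with all dontaudit rules disabled, so denials that are normally silenced become visible; the defaults can be restored afterwards with:

# semodule -B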

Comment 58 Jes Sorensen 2013-07-10 12:09:33 UTC
after semodule -DB and with enforcing on:

[root@noisybay ~]# dmesg| grep avc
[    7.257589] type=1400 audit(1373455514.968:4): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/config" dev="sdb3" ino=786745 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=file
[    7.283332] type=1400 audit(1373455514.994:5): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs_dist" dev="sdb3" ino=788320 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[    7.312828] type=1400 audit(1373455515.023:6): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs" dev="sdb3" ino=788321 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[    7.312843] type=1400 audit(1373455515.023:7): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts" dev="sdb3" ino=788319 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.312848] type=1400 audit(1373455515.023:8): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.bin" dev="sdb3" ino=788287 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.312852] type=1400 audit(1373455515.023:9): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.homedirs" dev="sdb3" ino=788311 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.312857] type=1400 audit(1373455515.023:10): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin" dev="sdb3" ino=789293 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.312862] type=1400 audit(1373455515.023:11): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.local" dev="sdb3" ino=788322 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[    7.312866] type=1400 audit(1373455515.023:12): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.local.bin" dev="sdb3" ino=789292 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.312872] type=1400 audit(1373455515.023:13): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/config" dev="sdb3" ino=786745 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=file
[   12.918384] type=1400 audit(1373455520.623:8603): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/config" dev="sdb3" ino=786745 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=file
[   12.944437] type=1400 audit(1373455520.656:8604): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs_dist" dev="sdb3" ino=788320 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[   12.944444] type=1400 audit(1373455520.656:8605): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs" dev="sdb3" ino=788321 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[   12.944459] type=1400 audit(1373455520.656:8606): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts" dev="sdb3" ino=788319 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[   12.944474] type=1400 audit(1373455520.656:8607): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.bin" dev="sdb3" ino=788287 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[   12.944479] type=1400 audit(1373455520.656:8608): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.homedirs" dev="sdb3" ino=788311 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[   12.944483] type=1400 audit(1373455520.656:8609): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin" dev="sdb3" ino=789293 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[   12.944487] type=1400 audit(1373455520.656:8610): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.local" dev="sdb3" ino=788322 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[   12.944491] type=1400 audit(1373455520.656:8611): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.local.bin" dev="sdb3" ino=789292 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[   12.944552] type=1400 audit(1373455520.656:8612): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/audit/auditd.conf" dev="sdb3" ino=786930 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:auditd_etc_t:s0 tclass=file

and with permissive

[    7.267262] type=1400 audit(1373458032.707:3): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/config" dev="sdb3" ino=786745 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=file
[    7.292984] type=1400 audit(1373458032.733:4): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs_dist" dev="sdb3" ino=788320 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[    7.322470] type=1400 audit(1373458032.763:5): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts" dev="sdb3" ino=788319 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.351426] type=1400 audit(1373458032.792:6): avc:  denied  { ioctl } for  pid=212 comm="systemd-readahe" path="/etc/selinux/config" dev="sdb3" ino=786745 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=file
[    7.376759] type=1400 audit(1373458032.817:7): avc:  denied  { ioctl } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs_dist" dev="sdb3" ino=788320 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[    7.405873] type=1400 audit(1373458032.846:8): avc:  denied  { ioctl } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts" dev="sdb3" ino=788319 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.526043] type=1400 audit(1373458032.966:9): avc:  denied  { getattr } for  pid=213 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.homedirs" dev="sdb3" ino=788311 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[    7.555685] type=1400 audit(1373458032.995:10): avc:  denied  { getattr } for  pid=213 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs_dist" dev="sdb3" ino=788320 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[    7.863647] type=1400 audit(1373458033.304:11): avc:  denied  { getattr } for  pid=213 comm="systemd-readahe" path="/etc/selinux/config" dev="sdb3" ino=786745 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=file
[    8.586831] type=1400 audit(1373458034.027:12): avc:  denied  { getattr } for  pid=278 comm="mdadm" path="/dev/snd/seq" dev="devtmpfs" ino=10999 scontext=system_u:system_r:mdadm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:sound_device_t:s0 tclass=chr_file
[   12.949450] type=1400 audit(1373458038.384:98): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/audit/audit.rules" dev="sdb3" ino=786933 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:auditd_etc_t:s0 tclass=file
[   12.949635] type=1400 audit(1373458038.391:102): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/var/log/audit/audit.log" dev="sdb3" ino=1837435 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:auditd_log_t:s0 tclass=file
[   12.949654] type=1400 audit(1373458038.391:103): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/config" dev="sdb3" ino=786745 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:selinux_config_t:s0 tclass=file
[   12.949665] type=1400 audit(1373458038.391:104): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts.subs_dist" dev="sdb3" ino=788320 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:file_context_t:s0 tclass=file
[   12.949675] type=1400 audit(1373458038.391:105): avc:  denied  { read open } for  pid=212 comm="systemd-readahe" path="/etc/selinux/targeted/contexts/files/file_contexts" dev="sdb3" ino=788319 scontext=system_u:system_r:readahead_t:s0 tcontext=unconfined_u:object_r:file_context_t:s0 tclass=file
[   12.949823] type=1400 audit(1373458038.391:106): avc:  denied  { ioctl } for  pid=212 comm="systemd-readahe" path="/etc/audit/audit.rules" dev="sdb3" ino=786933 scontext=system_u:system_r:readahead_t:s0 tcontext=system_u:object_r:auditd_etc_t:s0 tclass=file

Comment 59 Jes Sorensen 2013-07-10 15:30:25 UTC
Looks like -63 is still not good :(

[root@noisybay ~]# ls -Z /usr/lib/systemd/system/mdmon*
-rw-r--r--. root root system_u:object_r:systemd_unit_file_t:s0 /usr/lib/systemd/system/mdmon@.service
-rw-r--r--. root root system_u:object_r:systemd_unit_file_t:s0 /usr/lib/systemd/system/mdmonitor.service
[root@noisybay ~]# restorecon -Rv /usr/lib/systemd/system/mdmon*
[root@noisybay ~]# ls -Z /usr/lib/systemd/system/mdmon*
-rw-r--r--. root root system_u:object_r:systemd_unit_file_t:s0 /usr/lib/systemd/system/mdmon@.service
-rw-r--r--. root root system_u:object_r:systemd_unit_file_t:s0 /usr/lib/systemd/system/mdmonitor.service

Jes

Comment 60 Miroslav Grepl 2013-07-10 18:37:16 UTC
Strange, I see

# matchpathcon /usr/lib/systemd/system/mdmon*
/usr/lib/systemd/system/mdmonitor.service	system_u:object_r:mdadm_unit_file_t:s0
/usr/lib/systemd/system/mdmon@.service	system_u:object_r:mdadm_unit_file_t:s0

Comment 61 Jes Sorensen 2013-07-10 19:45:46 UTC
Ok, after a fresh install of -63 and running restorecon, it seems to work.

Now the question is whether I had gotten my test system into a weird state
while debugging, or if this is something users will experience too when
running updates.

Jes

Comment 62 Fedora Update System 2013-07-11 08:57:50 UTC
selinux-policy-3.12.1-63.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/selinux-policy-3.12.1-63.fc19

Comment 63 Fedora Update System 2013-07-12 02:57:23 UTC
Package selinux-policy-3.12.1-63.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing selinux-policy-3.12.1-63.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-12762/selinux-policy-3.12.1-63.fc19
then log in and leave karma (feedback).

Comment 64 Fedora Update System 2013-07-14 03:38:31 UTC
selinux-policy-3.12.1-63.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.

