Bug 234745 - mdadm fails to start raid array
Status: CLOSED CURRENTRELEASE
Product: Fedora
Classification: Fedora
Component: mdadm
Version: rawhide
Platform: All Linux
Priority: medium
Severity: medium
Assigned To: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
Reported: 2007-04-01 08:29 EDT by Bart Vanbrabant
Modified: 2007-11-30 17:12 EST
CC: 3 users

Fixed In Version: Current
Doc Type: Bug Fix
Last Closed: 2007-07-16 14:35:51 EDT
Description Bart Vanbrabant 2007-04-01 08:29:35 EDT
Description of problem:
I reinstalled an FC6 machine with the latest rawhide version. The initscripts
failed to start my RAID array. I found this denial in the audit log during the
boot process:

type=AVC msg=audit(1175417139.101:171): avc:  denied  { read } for  pid=2230
comm="mdadm" name="md0" dev=tmpfs ino=158224
scontext=system_u:system_r:mdadm_t:s0 tcontext=root:object_r:device_t:s0
tclass=blk_file
type=SYSCALL msg=audit(1175417139.101:171): arch=40000003 syscall=5 success=no
exit=-13 a0=97c5190 a1=0 a2=0 a3=46 items=0 ppid=1 pid=2230 auid=4294967295
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) comm="mdadm"
exe="/sbin/mdadm" subj=system_u:system_r:mdadm_t:s0 key=(null)

When I reboot with SELinux in permissive mode, my RAID array is started correctly.

Version-Release number of selected component (if applicable):
selinux-policy-2.5.10-2.fc7
selinux-policy-targeted-2.5.10-2.fc7
initscripts-8.51-1
mdadm-2.6.1-2.fc7

Do you need any more information?
Comment 1 Daniel Walsh 2007-04-02 13:26:43 EDT
This looks like a labeling problem.  /dev/md0 should be labeled
system_u:object_r:fixed_disk_device_t, but on your machine it is labeled
system_u:object_r:device_t.  Is this device being created by udev?  If not, then
it needs its labeling fixed before mdadm uses it.  Running restorecon /dev/md0
will fix its labeling.
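The diagnosis and fix above can be checked from a root shell. This is a sketch, assuming an SELinux-enabled system with the policycoreutils tools installed; the labels shown in the comments are what you would compare against:

```shell
# Show the current SELinux context of the md device node.
# The bug shows the wrong label (root:object_r:device_t:s0);
# the expected one is system_u:object_r:fixed_disk_device_t:s0.
ls -Z /dev/md0

# Relabel the node according to the file-context policy (-v prints what changed)
restorecon -v /dev/md0

# Verify the new label before retrying the array
ls -Z /dev/md0
```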
Comment 2 Bart Vanbrabant 2007-04-02 14:19:50 EDT
This was an FC6 system that used to work fine with that RAID array, on which
/home was mounted. The hard disk holding / was dying, so I installed FC7t3 on a
new disk, but I didn't let anaconda mount the RAID array as /home. When I first
booted the fresh install, the RAID array wasn't assembled and the AVC listed
above was in the logs. After putting SELinux in permissive mode, the RAID array
got assembled correctly and the LVM volumes on it got activated.
Comment 3 Daniel Walsh 2007-04-02 15:47:51 EDT
Jeremy, do you have any idea why the labeling would be wrong on /dev/md0?
Dan
Comment 4 Jeremy Katz 2007-04-03 15:38:40 EDT
Possibly mdadm is creating/recreating the node and not doing so with the right
labeling?

RAID is annoying in that you have to create the device node yourself in order to
start the array; only then does the kernel have the block device that would let
udev do a device node creation.  We also did recently move to a new upstream
bugfix version of mdadm.
Comment 5 Daniel Walsh 2007-07-13 11:00:09 EDT
mdadm needs to either use udev to create its nodes or add selinux awareness to
create them with the correct context.
Comment 6 Jeremy Katz 2007-07-16 14:05:37 EDT
(In reply to comment #5)
> mdadm needs to either use udev to create its nodes

This can't be done -- as above, the node has to be created and then you do an
ioctl on the node to actually create the device.  After that, udev could kick
in, but not before :-)

> or add selinux awareness to
> create them with the correct context.

This is probably the correct (though slightly distasteful) answer, much like
what we had to do for device-mapper.
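The SELinux-aware node creation discussed above could look roughly like the sketch below. This is not mdadm's actual code: it assumes libselinux's matchpathcon()/setfscreatecon() interface (the approach device-mapper took), requires linking with -lselinux, and the device path and major/minor numbers are illustrative.

```c
/* Sketch: create a block-device node with the context the policy expects,
 * instead of letting it inherit the generic device_t label.
 * Assumes libselinux (compile with -lselinux); not mdadm's real code. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <selinux/selinux.h>

static int mknod_with_context(const char *path, dev_t dev)
{
    char *con = NULL;
    int ret;

    if (is_selinux_enabled() > 0) {
        /* Ask the policy what label this path should carry ... */
        if (matchpathcon(path, S_IFBLK, &con) == 0)
            /* ... and have the kernel apply it to files we create next. */
            setfscreatecon(con);
    }

    ret = mknod(path, S_IFBLK | 0600, dev);

    if (con) {
        setfscreatecon(NULL);   /* reset the file-creation context */
        freecon(con);
    }
    return ret;
}

int main(void)
{
    /* md0 is traditionally major 9, minor 0; illustrative only. */
    if (mknod_with_context("/dev/md0", makedev(9, 0)) != 0)
        perror("mknod /dev/md0");
    return 0;
}
```

With this pattern the node comes into existence already labeled fixed_disk_device_t, so the array can be started before udev ever sees the device.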
Comment 7 Daniel Walsh 2007-07-16 14:35:51 EDT
This one looks like it was fixed a long time ago.  mdadm should be creating
devices as fixed_disk_device_t now.  Looks correct in RHEL5/FC6 and beyond.
