I wanted to add another disk to a RAID6 array I have, so I ran:

$ mdadm --add /dev/md127 /dev/sdj1
$ mdadm --grow --raid-devices=8 --backup-file=/boot/grow_md127.bak /dev/md127

This appeared to work, but looking at /proc/mdstat, it says:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 sdd1[8] sdj[6] sdg[0] sdk[4] sdh[1] sdi[2] sdc[5] sda1[7]
      14650675200 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [>....................]  reshape =  0.0% (1/2930135040) finish=445893299483.7min speed=0K/sec

unused devices: <none>

That is, the reshape is stuck, and it has been that way ever since (about 36 hours now).

Looking at some logs, I found this in messages:

Jan 28 20:24:27 ooo systemd: Created slice system-mdadm\x2dgrow\x2dcontinue.slice.
Jan 28 20:24:27 ooo audit: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=mdadm-grow-continue@md127 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 20:24:27 ooo systemd: Starting system-mdadm\x2dgrow\x2dcontinue.slice.
Jan 28 20:24:28 ooo audit: AVC avc: denied { write } for pid=11103 comm="mdadm" name="grow_md127.bak" dev="sdf1" ino=426 scontext=system_u:system_r:mdadm_t:s0 tcontext=unconfined_u:object_r:boot_t:s0 tclass=file permissive=0
Jan 28 20:24:28 ooo audit: SYSCALL arch=c000003e syscall=2 success=no exit=-13 a0=ec1fc0 a1=242 a2=180 a3=7800 items=0 ppid=1 pid=11103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="mdadm" exe="/usr/sbin/mdadm" subj=system_u:system_r:mdadm_t:s0 key=(null)
Jan 28 20:24:28 ooo systemd: mdadm-grow-continue: Main process exited, code=exited, status=1/FAILURE
Jan 28 20:24:28 ooo systemd: mdadm-grow-continue: Unit entered failed state.
Jan 28 20:24:28 ooo audit: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=mdadm-grow-continue@md127 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 28 20:24:28 ooo systemd: mdadm-grow-continue: Failed with result 'exit-code'.
Jan 28 20:24:32 ooo setroubleshoot: SELinux is preventing /usr/sbin/mdadm from write access on the file grow_md127.bak. For complete SELinux messages. run sealert -l 43815f80-8b00-40d9-86a3-4a6a432f3e05
Jan 28 20:24:32 ooo python3: SELinux is preventing /usr/sbin/mdadm from write access on the file grow_md127.bak.

***** Plugin kernel_modules (91.4 confidence) suggests ********************

If you do not think mdadm should try write access on grow_md127.bak.
Then you may be under attack by a hacker, since confined applications should not need this access.
Do
contact your security administrator and report this issue.

***** Plugin catchall (9.59 confidence) suggests **************************

If you believe that mdadm should be allowed write access on the grow_md127.bak file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep mdadm /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

So it seems SELinux is preventing writes to the backup file I specified. (I put it in /boot, since that's the only file system I have that's not on the array.)
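Before touching policy, it is worth confirming the labels are what the AVC claims. A sketch, assuming the path from above; the expected type is taken from the tcontext in the denial:

ls -Z /boot/grow_md127.bak
# expected, per the AVC: unconfined_u:object_r:boot_t:s0 /boot/grow_md127.bak
# boot_t is the generic type for files under /boot, which the confined
# mdadm_t domain is evidently not allowed to write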
Interestingly, the file exists:

$ ls -l /boot/grow_md127.bak
-rw-------. 1 root root 15732736 Jan 28 20:24 /boot/grow_md127.bak
$

I think the root cause here is that my initial invocation of mdadm was unconfined, because I ran it interactively, so it could create the backup file. But apparently it then fires off something like an at job to finish up, and that background instance is started via systemd and runs as mdadm_t, which cannot write to that file (because it is in /boot). And poof.

I'm not sure how to fix this properly: the background process needs access to the file, but its location is user-specified on the first invocation. The workaround in my case was to apply the audit2allow suggestion from the log (full sequence recapped below) and then, with that rule installed, resume the reshape:

$ mdadm --grow --continue --backup-file=/boot/grow_md127.bak /dev/md127

The system is an up-to-date Fedora 23, x86_64, with kernel 4.3.3-303.fc23.x86_64, mdadm-3.3.4-2.fc23.x86_64, and selinux-policy-3.13.1-158.2.fc23.noarch.

Reproducible: Didn't try

Steps to Reproduce:
1. Add a disk
2. $ mdadm --add /dev/mdX /dev/sdY
3. $ mdadm --grow --raid-devices=<new number> --backup-file=/boot/grow_md127.bak /dev/mdX
4. Observe that no progress is made in /proc/mdstat

Actual Results:
No progress on the reshape.

Expected Results:
Reshape should complete.

This thread on linux-raid has background: http://www.spinics.net/lists/raid/msg51400.html
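For the record, the complete workaround sequence was essentially what setroubleshoot suggested, followed by resuming the reshape. A sketch, run as root; the module name mypol is arbitrary, and the generated rule is broader than ideal (it lets mdadm_t write boot_t files generally), so it may be worth removing again with "semodule -r mypol" once the reshape completes:

grep mdadm /var/log/audit/audit.log | audit2allow -M mypol  # build a local policy module from the logged AVCs
semodule -i mypol.pp                                        # install the module
mdadm --grow --continue --backup-file=/boot/grow_md127.bak /dev/md127  # resume the stalled reshape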
Yes. If it is started by systemd, there is a process transition from init_t to mdadm_t.

Could you try to run it with

# semanage permissive -a mdadm_t

re-test and

# ausearch -m avc -ts recent

?

Thank you.
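To spell out the full test cycle (a sketch, assuming the array and backup path from comment 0; the last step reverts the permissive setting afterwards):

semanage permissive -a mdadm_t    # mdadm_t denials are now logged but not enforced
mdadm --grow --continue --backup-file=/boot/grow_md127.bak /dev/md127
ausearch -m avc -ts recent        # list the AVCs the reshape triggered
semanage permissive -d mdadm_t    # restore enforcement for mdadm_t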
This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.
This message is a reminder that Fedora 23 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 23. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '23'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 23 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release.

If you experience problems, please add a comment to this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.
(In reply to Miroslav Grepl from comment #1)
> Yes. If it is started by systemd, there is a process transition from init_t
> to mdadm_t.
>
> Could you try to run it with
>
> # semanage permissive -a mdadm_t
>
> re-test and
>
> # ausearch -m avc -ts recent
>
> ?
>
> Thank you.

Sorry about the (long) delay. In the meantime I lost a disk in that array, which led to some reorganizing, including reshapes, and it does seem to work now.