Description of problem:

I have four disks: two HDDs (/dev/sda, /dev/sdb) and two SSDs (/dev/sdc, /dev/sdd). /dev/sda and /dev/sdb are bound into one RAID-1 volume with Intel fake RAID (IMSM), /dev/md126. I have GPT partition tables on /dev/md126, /dev/sdc and /dev/sdd. I have LVM physical volumes /dev/md126p2, /dev/sdc5 and /dev/sdd5. The root filesystem lives in an LVM volume on top of /dev/sdc5 and /dev/sdd5, and a number of other LVM volumes with filesystems, including /home and /var, sit on top of /dev/md126p2 and are mounted via /etc/fstab.

The problem is that mdadm cannot assemble /dev/md126:

-------------------------------------------------------------------------
Dec 08 22:04:21 oleg2.oleghome systemd-udevd[804]: Process '/sbin/mdadm -I /dev/sdb' failed with exit code 1.
Dec 08 22:04:21 oleg2.oleghome systemd-udevd[790]: Process '/sbin/mdadm -I /dev/sda1' failed with exit code 1.
Dec 08 22:04:21 oleg2.oleghome systemd-udevd[790]: inotify_add_watch(9, /dev/sda1, 10) failed: No such file or directory
Dec 08 22:04:21 oleg2.oleghome systemd-udevd[801]: Process '/sbin/mdadm -I /dev/sda2' failed with exit code 1.
Dec 08 22:04:21 oleg2.oleghome systemd-udevd[801]: inotify_add_watch(9, /dev/sda2, 10) failed: No such file or directory
Dec 08 22:04:21 oleg2.oleghome systemd-udevd[790]: Process '/sbin/mdadm -If sda1 --path pci-0000:00:1f.2-ata-1' failed with exit code 1.
Dec 08 22:04:21 oleg2.oleghome systemd-udevd[801]: Process '/sbin/mdadm -If sda2 --path pci-0000:00:1f.2-ata-1' failed with exit code 1.
-------------------------------------------------------------------------

After booting I can see that LVM uses /dev/sdb2 instead of /dev/md126p2:

[root@oleg2 ~]# pvs
  PV         VG             Fmt  Attr PSize  PFree
  /dev/sdb2  vg_oleg2_raid1 lvm2 a--   1,63t 186,38g
  /dev/sdc5  vg_oleg2_ssd0  lvm2 a--  79,88g       0
  /dev/sdd5  vg_oleg2_ssd0  lvm2 a--  79,88g       0

Running mdadm -I /dev/sdb fails:

[root@oleg2 ~]# mdadm -I /dev/sdb
mdadm: cannot reopen /dev/sdb: Device or resource busy.

Additional information:
1. When I run F24 on the same host (using other LVM logical volumes for the root filesystem and /var on the same disks), the RAID-1 volume /dev/md126 is assembled correctly and pvs shows /dev/md126p2 instead of /dev/sdb2.
2. When I run the F25 live image on the same host, the RAID-1 volume /dev/md126 is also assembled correctly and pvs shows /dev/md126p2 instead of /dev/sdb2.

So my hypothesis is the following:
1. F24 used to assemble the mdadm RAID in dracut, before mounting filesystems, so mdadm could force LVM to release /dev/sda2 and /dev/sdb2.
2. In F25 dracut no longer tries to assemble the mdadm RAID; systemd does it at the same time as it mounts /var. So by the time mdadm tries to assemble the RAID, /var is already mounted and LVM cannot release /dev/sdb2.
3. In the F25 live environment no filesystems are mounted from LVM volumes on top of /dev/sdb2, so LVM releases /dev/sdb2 and mdadm can assemble the RAID.

Version-Release number of selected component (if applicable): Fedora 25
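For anyone trying to confirm the same situation, a few generic diagnostic commands (not part of the original report, so treat them as a sketch) show whether the raw disks carry IMSM metadata and which block device LVM actually grabbed:

# show md/IMSM metadata on the raw member disks (should report an Intel container)
mdadm --examine /dev/sda
mdadm --examine /dev/sdb

# show what sits on top of each disk and where it is mounted
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sda /dev/sdb

# show which device node each PV was taken from
pvs -o pv_name,vg_name,dev_size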
1. How can I blacklist /dev/sda and /dev/sdb so that their partition tables are not read?
2. How can I make local-fs-pre.target be reached only after mdadm has assembled the RAID?
I've found a workaround: blacklist /dev/sda* and /dev/sdb* from LVM scanning by adding

  global_filter = [ "r|/dev/sd[ab].*|" ]

to /etc/lvm/lvm.conf and rebuilding the initramfs with dracut. After this, mdadm can use /dev/sda and /dev/sdb for assembling the RAID.
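For completeness, the steps look roughly like this (the filter is the one from above; adjust it to your own disks, and note that global_filter belongs in the devices { } section of lvm.conf):

# /etc/lvm/lvm.conf, in the devices { } section:
#   reject the raw IMSM member disks so LVM only sees /dev/md126p*
global_filter = [ "r|/dev/sd[ab].*|" ]

# rebuild the initramfs for the running kernel so the filter also
# applies in early boot
dracut -f

# or, for a specific kernel:
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)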
What is your kernel command line? What is the output of:

# dracut --print-cmdline
[root@oleg2 ~]# dracut --print-cmdline
rd.lvm.lv=vg_oleg2_ssd0/lv_root rd.lvm.lv=vg_oleg2_raid1/lv_swap rd.md.uuid=c61f0d8f:2d086286:930da7f7:ef290d26 resume=/dev/mapper/vg_oleg2_raid1-lv_swap root=/dev/mapper/vg_oleg2_ssd0-lv_root rootfstype=ext4 rootflags=rw,noatime,seclabel,stripe=256,data=ordered
[root@oleg2 ~]#
Please try specifying

rd.lvm.vg=vg_oleg2_ssd0 rd.md.uuid=c61f0d8f:2d086286:930da7f7:ef290d26 resume=/dev/mapper/vg_oleg2_raid1-lv_swap root=/dev/mapper/vg_oleg2_ssd0-lv_root

on the kernel command line and see if that works.
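One way to apply this (a sketch assuming a BIOS/grub2 Fedora install; the grub.cfg path differs on EFI systems):

# drop the old rd.lvm.lv entries, then add the suggested arguments to all kernels
grubby --update-kernel=ALL --remove-args="rd.lvm.lv=vg_oleg2_ssd0/lv_root rd.lvm.lv=vg_oleg2_raid1/lv_swap"
grubby --update-kernel=ALL \
  --args="rd.lvm.vg=vg_oleg2_ssd0 rd.md.uuid=c61f0d8f:2d086286:930da7f7:ef290d26 resume=/dev/mapper/vg_oleg2_raid1-lv_swap root=/dev/mapper/vg_oleg2_ssd0-lv_root"

# alternatively, edit GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate:
grub2-mkconfig -o /boot/grub2/grub.cfg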
This message is a reminder that Fedora 25 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 25. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '25'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 25 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release.

If you experience problems, please add a comment to this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.