Description of problem:
dmraid-activation.service drags in the deprecated systemd-udev-settle.service, which increases boot times by 1-3 seconds.

Version-Release number of selected component (if applicable):
dmraid-1.0.0.rc16-43.fc31.x86_64

How reproducible:
Always, every boot

Steps to Reproduce:
1. Clean install Fedora Workstation 32/Rawhide, reboot

Actual results:
Longer boot times as a result of depending on the udev-settle service.

$ systemctl list-dependencies dmraid-activation.service
dmraid-activation.service
● ├─system.slice
● └─systemd-udev-settle.service

vendor preset: enabled

Expected results:
a. It shouldn't depend on udev-settle.service
b. The vendor preset should probably be disabled; anyone who needs this can start/enable it.

Additional info:
We have been discussing at least disabling the dmraid service by default in bug 1796437. Let me copy and paste a few relevant bits from there for further discussion here.

In bug 1796437 Chris Murphy wrote:

But even if dmraid-activation.service is fixed to no longer drag in udev-settle, I wonder whether dmraid-activation.service "is-enabled" being set to enabled/disabled is appropriate. Disabling things should be fail-safe. Users shouldn't be able to disable a startup-critical service and then end up abandoned on a deserted island. Should it be static, and maybe have a generator conditionally trigger it instead?

I have no idea how expensive it is to search for dmraid-supported signatures. Does libblkid find these signatures just like any other file system, during early startup? If so, a generator could act on libblkid discovering something relevant and kick off a static dmraid-activation.service. If this is expensive, then I wonder if it needs separate deep-scan and quick-scan services? Use the deep scan only on install media, and have the quick scan require a cheap hint, like a symlink or configuration file existing; or maybe an rd.XXX hint?
I believe that blkid will recognize some of the firmware RAID signatures which dmraid deals with, but I'm not sure if it will recognize all of them. Although I agree that dmraid activation should probably be reworked to be triggered by udev rules rather than being run as a service, I'm afraid that we do not have the resources to actually make this change.

In light of the resource constraints, the best solution might very well be to have the dmraid activation script touch a /var/lib/dmraid/no_dmraid_sets_found file if no sets are found, and add:

ConditionPathExists=!/var/lib/dmraid/no_dmraid_sets_found

to the dmraid service file. This way we can still scan for dmraid sets when starting a livecd (so anaconda does not have to do this manually, and it is useful when using the livecd as a rescue disk), and on systems installed from the livecd we only do the dmraid scan once, after which the service will no longer start due to the condition.

One issue with this approach is that the Wants=systemd-udev-settle.service gets processed by systemd before it checks conditions, so this still drags in systemd-udev-settle.service. Another option might be to have the dmraid activation script disable the service when no dmraid sets are found. Now that I think about it, this might actually be the better solution.
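For illustration, the marker-file idea would look roughly like this in the unit file (a sketch only; /var/lib/dmraid/no_dmraid_sets_found is the path proposed in this comment, not something dmraid ships today, and the [Service] section here is illustrative):

```ini
# dmraid-activation.service (sketch of the proposed condition)
[Unit]
Description=Activation of DM RAID sets
# Skip activation once an earlier scan recorded that no sets exist.
# Caveat: Wants= dependencies are pulled in before conditions are
# evaluated, which is exactly the udev-settle problem noted above.
ConditionPathExists=!/var/lib/dmraid/no_dmraid_sets_found

[Service]
Type=oneshot
ExecStart=/lib/systemd/fedora-dmraid-activation
```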
systemd-udev-settle.service has been deprecated for 5+ years; surely there must be some other facility or mechanism that dmraid-activation.service should be using instead? And if it were doing that, would the startup delay from keeping dmraid-activation.service enabled still be significant?
Changing the dmraid activation to not use udev-settle is quite tricky. This would require a whole bunch of things:

1) blkid recognizing all metadata formats dmraid supports
2) A whole new set of udev rules + units which call dmraid to try and assemble the set, but only if it is complete, based on events triggered by the blkid info identifying the metadata
3) Some sort of timeout mechanism to try assembling a RAID set in degraded mode in case not all members are present

As I said before, I do not believe we have the resources atm to make this happen, so a small and simple modification to the shell script which does the dmraid activation seems best. It is easy to detect in that script if there are no dmraid sets on the system, and then the script can just do:

systemctl disable dmraid-activation.service

To me that seems like the best compromise here, given the available resources.
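The self-disable check can be sketched as a small shell helper. should_disable_dmraid is a made-up name for illustration; the strings it matches are the ones dmraid prints when there is nothing to activate.

```shell
#!/usr/bin/bash
# Decide whether the activation service should disable itself, based on
# the output of `LC_ALL=C /sbin/dmraid -s -c -i`.
# should_disable_dmraid is a hypothetical helper, not part of dmraid.
should_disable_dmraid() {
    case "$1" in
        # dmraid prints these when no firmware RAID sets are present
        "no raid disks"|"no block devices found") return 0 ;;
        *) return 1 ;;
    esac
}

# In the activation script this would be used roughly as:
#   dmraidsets=$(LC_ALL=C /sbin/dmraid -s -c -i)
#   should_disable_dmraid "$dmraidsets" && \
#       systemctl disable dmraid-activation.service
```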
This bug appears to have been reported against 'rawhide' during the Fedora 32 development cycle. Changing version to 32.
My Fedora, after a "simple kernel update", boots painfully slowly! Beginning with kernel 5.5.13-200.fc31.x86_64 (now kernel 5.5.16-200...), boot is very slow. One detail: the problem only appears when my RTL8821 network combo is attached. It exposes two vendor/product IDs: 0bda:1a2b (disk mode, Realtek Semiconductor Corp.) and 0bda:c811 (RTL8821 network dongle). The 1a2b device ought to be disabled either by the driver or by a udev rule; I chose to set a rule, and it worked until this kernel series update:

$ =>cat /etc/udev/rules.d/52-remdisk.rules
# Realtek 8821CU Wifi AC USB
ATTR{idVendor}=="0bda", ATTR{idProduct}=="1a2b", RUN+="/usr/sbin/usb_modeswitch -KQ -v 0bda -p 1a2b"

I don't even need to check the udev rule: it failed. I need to run usb_modeswitch by hand when this occurs; modifying lvm.conf via filters does not help at all. Even if I mask systemd-udev-settle.service, boot remains incredibly slow.
(In reply to Morvan from comment #6)
> My Fedora, after a "simple Kernel update", and boot renders a PITA!...

An excerpt from $ =>systemd-analyze blame:

41.768s systemd-udev-settle.service
30.074s dracut-pre-pivot.service
 5.164s nmb.service
...
Here we are again. Fedora 32 (5.6.7-300.fc32), as stated, suffers from the same problem:

2min 858ms systemd-udev-settle.service
   30.074s dracut-pre-pivot.service
    3.518s NetworkManager-wait-online.service
    1.616s udisks2.service
    1.464s upower.service
    1.175s dracut-initqueue.service
...

Any ideas? Remove dmraid from dracut? Masking services seems to be no help.
(In reply to Morvan from comment #8)
> Some idea? Remove dmraid from dracut? Masking services seems no help.

As a workaround you can do:

dnf remove dmraid device-mapper-multipath

Note this assumes that you are not using a BIOS-managed RAID set; if you are using such a RAID set, then you actually need dmraid.
(In reply to Hans de Goede from comment #9)
> As a workaround you can do:
>
> dnf remove dmraid device-mapper-multipath
>
> Note this assumes that you are not using a BIOS managed RAID set, if you are
> using such a RAID set, then you actually need dmraid.

Hi, Hans. Thanks for the fast response. I will try it. No, I really don't need dmraid.
After following Hans's suggestion, I got this:

...
30.070s dracut-pre-pivot.service
 6.270s NetworkManager-wait-online.service
 2.262s upower.service
 1.240s dracut-initqueue.service
 971ms systemd-homed.service
 811ms systemd-logind.service
 733ms systemd-machined.service
...

(it is still a long time; thanks Hans, but I will try to circumvent this dracut-pre-pivot anyhow).
*** Bug 1816885 has been marked as a duplicate of this bug. ***
Note I plan to implement the workaround mentioned in comment 4 for Fedora 33 (and later). I am in the process of creating a change page for this, even though it is a small change, so as to get this properly documented in case the workaround ends up causing issues for anyone (it shouldn't, but you never know). For the change page see: https://fedoraproject.org/wiki/Changes/DisableDmraidOnFirstRun
(In reply to Hans de Goede from comment #4)
> systemctl disable dmraid-activation.service

I tried that, but the boot time is still extremely long:

2min 52.233s dracut-initqueue.service
2min 52.073s systemd-cryptsetup@luks\x2d509163fe\x2d882c\x2d47f7\x2d9d8d\x2de4600c0e048b.service
 1min 4.935s systemd-udev-settle.service
      3.302s NetworkManager-wait-online.service
      3.263s lvm2-monitor.service
      2.205s smartd.service
      1.478s systemd-journal-flush.service
      1.104s upower.service
      1.023s udisks2.service
       689ms akmods.service
       677ms systemd-logind.service
       650ms initrd-switch-root.service
       642ms firewalld.service
       416ms lvm2-pvscan@253:0.service
       358ms sssd.service
       314ms ModemManager.service
       268ms systemd-homed.service
       244ms avahi-daemon.service
...

Any idea?
This is on a Ryzen 3700X with 16 GB RAM and a 1 TB Samsung 860 Pro SSD... that is a real PITA.
(In reply to oliver.zemann from comment #14)
> any idea?

I see that systemd-udev-settle is still in the list. This is likely caused by device-mapper-multipath; try doing:

"sudo dnf remove device-mapper-multipath"

Note this will likely also cause anaconda to be removed; that is fine, as anaconda is only needed at installation time. Also see: https://fedoraproject.org/wiki/Changes/RemoveDeviceMapperMultipathFromWorkstationLiveCD
(In reply to Hans de Goede from comment #16)
...
> I see that systemd-udev-settle is still in the list. This is likely caused
> by device-mapper-multipath, try doing:
>
> "sudo dnf remove device-mapper-multipath"
...

I had already removed it on F31 and have just done it now for F32. DNF output (translated from pt_BR):

Removing:
 device-mapper-multipath  x86_64  0.8.2-6.fc33  @koji  289 k
Removing dependent packages:
 libblockdev-mpath
I removed it now. Hope my system still boots ;)

(output translated from German)

[root@localhost oli]# dnf remove device-mapper-multipath
Dependencies resolved.
================================================================================
 Package                       Arch    Version                        Repo      Size
================================================================================
Removing:
 device-mapper-multipath       x86_64  0.8.2-4.fc32                   @updates  289 k
Removing dependent packages:
 kdump-anaconda-addon          noarch  005-8.20200220git80aab11.fc32  @anaconda 132 k
 libblockdev-mpath             x86_64  2.24-1.fc32                    @updates   28 k
Removing unused dependencies:
 anaconda                      x86_64  32.24.7-2.fc32                 @updates    0
 anaconda-install-env-deps     x86_64  32.24.7-2.fc32                 @updates    0
 createrepo_c                  x86_64  0.15.11-1.fc32                 @updates  200 k
 createrepo_c-libs             x86_64  0.15.11-1.fc32                 @updates  258 k
 device-mapper-multipath-libs  x86_64  0.8.2-4.fc32                   @updates  872 k
 drpm                          x86_64  0.5.0-1.fc32                   @updates  133 k
 fcoe-utils                    x86_64  1.0.32-9.git9834b34.fc31       @anaconda 333 k
 gcc-gdb-plugin                x86_64  10.1.1-1.fc32                  @updates  338 k
 gdb                           x86_64  9.1-5.fc32                     @updates  381 k
 isomd5sum                     x86_64  1:1.2.3-8.fc32                 @anaconda  68 k
 libblockdev-plugins-all       x86_64  2.24-1.fc32                    @updates    0
 libblockdev-vdo               x86_64  2.24-1.fc32                    @updates   36 k
 libbsd                        x86_64  0.10.0-2.fc32                  @anaconda 345 k
 lldpad                        x86_64  1.0.1-16.git036e314.fc32       @anaconda 744 k
 tmux                          x86_64  3.0a-2.fc32                    @anaconda 851 k
 udisks2-iscsi                 x86_64  2.8.4-4.fc32                   @anaconda 117 k
 userspace-rcu                 x86_64  0.11.1-3.fc32                  @anaconda 413 k
(In reply to oliver.zemann from comment #18)
> i removed it now. hope my system still boots ;)
> [dnf removal transaction snipped]

Tell us, anyway.
Still "slow" (compared to Windows, sorry :/ ) but way, way better. And the machine booted :) Thanks!

[root@localhost oli]# systemd-analyze blame
14.491s dracut-initqueue.service
14.228s systemd-cryptsetup@luks\x2d509163fe\x2d882c\x2d47f7\x2d9d8d\x2de4600c0e048b.service
 3.366s NetworkManager-wait-online.service
 1.626s lvm2-monitor.service
 1.510s smartd.service
 1.083s udisks2.service
 1.044s upower.service
 643ms systemd-logind.service
 581ms firewalld.service
 530ms lvm2-pvscan@253:0.service
 519ms initrd-switch-root.service
 495ms systemd-udev-settle.service
 491ms akmods.service
 418ms systemd-journal-flush.service
 327ms sssd.service
 312ms ModemManager.service
 311ms avahi-daemon.service
 306ms rtkit-daemon.service
 299ms systemd-homed.service
 285ms dbus-broker.service
 233ms systemd-udevd.service
 213ms plymouth-quit.service
 213ms plymouth-quit-wait.service
 211ms systemd-hostnamed.service
 163ms systemd-journald.service
 163ms systemd-userdbd.service
 135ms dracut-pre-udev.service
 130ms systemd-fsck@dev-disk-by\x2duuid-62EB\x2d1CF3.service
 126ms systemd-fsck@dev-disk-by\x2duuid-8b61ffc1\x2d97be\x2d46b0\x2db390\x2d5bf5d6083af8.service
 125ms dracut-cmdline.service
 117ms user
 106ms dnfdaemon.service
 100ms initrd-parse-etc.service
(In reply to oliver.zemann from comment #20)
> still "slow" (compared to windows, sorry :/ ) but way way better. and the
> machined booted :) thanks!
...
495ms systemd-udev-settle.service
...

Still this plague: systemd-udev-settle.service! How do we get rid of it?
I went a while without testing (by not attaching the two-vendor-ID device: the wifi network dongle plus the disk containing drivers for Windows). When I tested just now, after "sudo dnf remove device-mapper-multipath", I got this:

2min 952ms systemd-udev-settle.service
2min 885ms dracut-initqueue.service
   30.093s dracut-pre-pivot.service
    5.110s nmb.service
    3.692s home.mount
    3.479s udisks2.service
    1.708s upower.service
    1.238s dkms.service
...

I give up (at least while the dongle is attached)!
Morvan, can you run the following command from a terminal:

grep -l udev-settle /lib/systemd/system/*.service

And copy and paste the output here?
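For anyone unsure what this diagnostic does: grep -l prints only the names of files containing a match, so it lists exactly which unit files reference udev-settle. Here is a safe demo of the same invocation against throwaway fake unit files (the file names and contents below are made up):

```shell
# Create two fake unit files in a scratch directory
tmpdir=$(mktemp -d)
printf '[Unit]\nWants=systemd-udev-settle.service\n' > "$tmpdir/dmraid-activation.service"
printf '[Unit]\nDescription=no settle dependency\n'  > "$tmpdir/clean.service"

# -l: list matching file names only, not the matching lines
grep -l udev-settle "$tmpdir"/*.service
# On a real system: grep -l udev-settle /lib/systemd/system/*.service
```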
(In reply to Hans de Goede from comment #23)
> Morvan, can you run the following command from a terminal:
> grep -l udev-settle /lib/systemd/system/*.service
> And copy and paste the output here?

Yes. I ran it and it returns:

$ =>grep -l udev-settle /lib/systemd/system/*.service
/lib/systemd/system/anaconda-direct.service
/lib/systemd/system/anaconda-pre.service
/lib/systemd/system/dmraid-activation.service
/lib/systemd/system/initrd-udevadm-cleanup-db.service
/lib/systemd/system/systemd-udev-settle.service
...
(In reply to Morvan from comment #24)
> $ =>grep -l udev-settle /lib/systemd/system/*.service
> /lib/systemd/system/anaconda-direct.service
> /lib/systemd/system/anaconda-pre.service
> /lib/systemd/system/dmraid-activation.service
> /lib/systemd/system/initrd-udevadm-cleanup-db.service
> /lib/systemd/system/systemd-udev-settle.service
> ...

You do have dmraid-activation.service disabled, right? Also, I thought the removal of device-mapper-multipath had also removed anaconda? Anyway, please remove anaconda; you do not need it after installation.
(output translated from pt_BR)

$ =>sudo dnf remove anaconda
No match for argument: anaconda
No packages marked for removal.

Strangely, DNF does not find it! Then I tried via RPM:

$ =>sudo rpm -qa "anaconda*"
anaconda-widgets-33.15-2.fc33.x86_64
anaconda-user-help-26.1-11.fc32.noarch
anaconda-core-33.15-2.fc33.x86_64
anaconda-gui-33.15-2.fc33.x86_64
anaconda-live-33.15-2.fc33.x86_64
anaconda-tui-33.15-2.fc33.x86_64

$ =>sudo rpm -qa "anaconda*" | xargs rpm -ev
error: Failed dependencies:
	anaconda-gui >= 32.15-1 is needed by (installed) initial-setup-gui-0.3.81-1.fc33.x86_64
	anaconda-tui >= 32.15-1 is needed by (installed) initial-setup-0.3.81-1.fc33.x86_64

Then I typed:

sudo dnf remove initial-setup

# =>rpm -qa "anaconda*" | xargs rpm -ev
Preparing packages...
anaconda-live-33.15-2.fc33.x86_64
anaconda-gui-33.15-2.fc33.x86_64
anaconda-user-help-26.1-11.fc32.noarch
anaconda-core-33.15-2.fc33.x86_64
anaconda-tui-33.15-2.fc33.x86_64
anaconda-widgets-33.15-2.fc33.x86_64

Done!
Ok, I've gone ahead and implemented the suggested change myself:

diff --git a/fedora-dmraid-activation b/fedora-dmraid-activation
index 528adba..4578b21 100644
--- a/fedora-dmraid-activation
+++ b/fedora-dmraid-activation
@@ -21,5 +21,7 @@ if ! strstr "$cmdline" nodmraid && [ -x /sbin/dmraid ]; then
             /sbin/kpartx -u -a "/dev/mapper/$dmname"
         done
         IFS=$SAVEIFS
+    elif [ "$dmraidsets" = "no raid disks" ]; then
+        systemctl disable dmraid-activation.service
     fi
 fi

While at it I've also done some small specfile cleanups. This is now building for rawhide.
(In reply to Hans de Goede from comment #27) > Ok, I've gone ahead and implemented the suggested change myself: > > diff --git a/fedora-dmraid-activation b/fedora-dmraid-activation > index 528adba..4578b21 100644 > --- a/fedora-dmraid-activation ... Good to know. Thanks.😊
Just a quick note on this: the fix has not landed yet because dmraid is failing to build on s390x due to a gcc bug; this is being tracked in bug 1860854. I will build the new dmraid for rawhide as soon as the gcc bug is resolved.
any news on that? facing it now on f33, very annoying :/
(In reply to bugzilla from comment #30)
> any news on that? facing it now on f33, very annoying :/

This should be fixed on f33; on f33 the script executed by the dmraid-activation.service systemd unit (/lib/systemd/fedora-dmraid-activation) looks like this:

#!/usr/bin/bash
#
# Activation of dmraid sets.
#

. /etc/init.d/functions

[ -z "${cmdline}" ] && cmdline=$(cat /proc/cmdline)

if ! strstr "$cmdline" nodmraid && [ -x /sbin/dmraid ]; then
    modprobe dm-mirror >/dev/null 2>&1
    dmraidsets=$(LC_ALL=C /sbin/dmraid -s -c -i)
    if [ "$?" = "0" ]; then
        SAVEIFS=$IFS
        IFS=$(echo -en "\n\b")
        for dmname in $dmraidsets; do
            if [[ "$dmname" == isw_* ]] && \
               ! strstr "$cmdline" noiswmd; then
                continue
            fi
            /sbin/dmraid -ay -i --rm_partitions -p "$dmname" >/dev/null 2>&1
            /sbin/kpartx -u -a "/dev/mapper/$dmname"
        done
        IFS=$SAVEIFS
    # dmraid says "no block devices found" on machines with an eMMC
    elif [ "$dmraidsets" = "no raid disks" -o "$dmraidsets" = "no block devices found" ]; then
        systemctl disable dmraid-activation.service
    fi
fi

So unless your machine actually has a BIOS/firmware RAID set managed by dmraid, the service should disable itself after the first F33 boot. If it doesn't, can you provide the output of running the following from a root shell ("sudo su -" to get a root shell):

LC_ALL=C /sbin/dmraid -s -c -i; echo $?

Please copy and paste the output of that command into your next bugzilla comment and we will see from there.
[root@localhost-live ~]# LC_ALL=C /sbin/dmraid -s -c -i; echo $?
-bash: /sbin/dmraid: No such file or directory
127

I used the workaround and removed dmraid with "dnf remove dmraid device-mapper-multipath".
Hrm, forgot that there is no edit function here... sorry. Wanted to add: I installed the F33 XFCE spin and have / encrypted with LUKS. It took many minutes until it booted, so I hit ESC and noticed the same udev thing I had on F32 too. So I removed that, and now it boots fast again.
(In reply to bugzilla from comment #32)
> I used the workaround and removed dmraid with dnf remove dmraid
> device-mapper-multipath

So did that command also remove device-mapper-multipath? I removed device-mapper-multipath from the Workstation livecd, see: https://fedoraproject.org/wiki/Changes/RemoveDeviceMapperMultipathFromWorkstationLiveCD

The changes I made are to the base livecd files, so other spins should have inherited the change, but maybe the XFCE spin for some reason still included device-mapper-multipath.

Can you try re-installing dmraid (and not device-mapper-multipath) and then see if the problem returns? Note the problem is expected to return on the first reboot, but it should be gone (even with dmraid installed) after the second (and following) reboot(s).

If the problem returns with just dmraid installed, please provide the debugging info which I requested in comment 31 before removing dmraid again.
> So did that command also remove device-mapper-multipath?

Umm... tbh I did not carefully check... I mean, it removed some stuff, so maybe it also removed that. Can I somehow check that in a dnf log or so?

I installed dmraid now and will reboot later and provide additional info. Thanks for the help!
(In reply to bugzilla from comment #35)
> maybe it also removed that. can i somehow check that in a dnf log or so?

If it was not too long ago, you should be able to see the transaction where dmraid was removed in /var/log/dnf.log, or in one of the (older) /var/log/dnf.log.? files.
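A quick way to check is to grep the logs for the removal. The exact line format differs between dnf versions, so the snippet below runs against a synthetic log file (the timestamp and "SUBDEBUG Erase" wording are assumptions about the log format, used only for illustration); on a real system, point the grep at /var/log/dnf.log and the rotated copies instead:

```shell
# Build a tiny synthetic dnf-style log to demonstrate the grep
log=$(mktemp)
cat > "$log" <<'EOF'
2020-11-01T10:00:00Z SUBDEBUG Erase: device-mapper-multipath-0.8.2-4.fc32.x86_64
2020-11-01T10:00:01Z SUBDEBUG Installed: tmux-3.0a-2.fc32.x86_64
EOF

# Case-insensitive search for an erase transaction mentioning the package
grep -i 'erase.*device-mapper-multipath' "$log"
```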
> If it was not too long ago you should be able to see the transaction where dmraid was removed in:

Thanks! There was NO device-mapper-multipath installed; it just ignored the non-existent package. I thought dnf would fail in such a scenario.

I'll test the reboot later, have some stuff to do :/
Rebooted now 2 times and it occurred again both times:

[root@localhost-live oli]# LC_ALL=C /sbin/dmraid -s -c -i; echo $?
ERROR: isw: wrong number of devices in RAID set "isw_bifaffceif_Raid" [1/2] on /dev/sdc
isw_bifaffceif_Raid
0
(In reply to bugzilla from comment #38)
> rebooted now 2 times and it occured again both times
>
> [root@localhost-live oli]# LC_ALL=C /sbin/dmraid -s -c -i; echo $?
> ERROR: isw: wrong number of devices in RAID set "isw_bifaffceif_Raid" [1/2]
> on /dev/sdc
> isw_bifaffceif_Raid
> 0

Ok, so it looks like one of your disks used to be part of a firmware-managed Intel RAID set at one point in time, and it still has the Intel firmware-RAID metadata on it. This is possibly also why the udev-settle takes such a long time.

Note the following should be safe to do, but I advise you to make sure you have backups of any data you care about before you do this!

You can see which disk has the (presumably stale/old/wrong) firmware-RAID metadata by running:

sudo dmraid -r

This should show one disk with RAID metadata, say /dev/sda; you can then remove the metadata by doing:

sudo dmraid -E /dev/sda

Note isw RAID sets are handled by mdraid by default, so removing the metadata should also stop mdraid from running at boot and trying to assemble the RAID array, which it is likely doing atm.
[root@localhost-live oli]# dmraid -r
ERROR: isw: Could not find disk /dev/sdd in the metadata
/dev/sdc: isw, "isw_bifaffceif", GROUP, ok, 3907029166 sectors, data@ 0

[root@localhost-live oli]# dmraid -E /dev/sdc
ERROR: option missing/invalid option combination with -E
[root@localhost-live oli]# dmraid -r -E /dev/sdc
Do you really want to erase "isw" ondisk metadata on /dev/sdc ? [y/n] :y

I made a backup before, so I guess it's ok; the metadata is removed now. If I don't report back again, it worked - thanks :)
It's still slow :(

[root@minint-2dhlc34 oli]# dmraid -r
no raid disks
[root@minint-2dhlc34 oli]# LC_ALL=C /sbin/dmraid -s -c -i; echo $?
no raid disks
1
Run 'systemd-analyze plot > bug1795014-slowboot.svg' and attach.
It's so weird: I restarted now a couple of times, and after the 3rd time or so it's working now. I'll attach the svg, but I guess it's done. Thanks for your help!
Created attachment 1725406 [details] systemd plot
OK, is it fair to say it took about 25 seconds to enter an encryption passphrase?

systemd-cryptsetup@luks\x2df9f5a759\x2d80ce\x2d4010\x2db6bc\x2dacaad619f37e.service (26.649s)

That's the only major delay I see. There are other suspicious delays that aren't dmraid related, so we should take them up elsewhere: smartd, sssd, upowerd, abrtd all look suspiciously slow. Looks like sd-boot too; you're getting separate counts for firmware and loader, and they are also quite lengthy, though I'm not sure why. I'd expect that part to be the same no matter what other changes you make.
I performed all the rituals around anaconda, removed dmraid, and no help. Then I restored it, if only for testing. lsusb reports:

$ lsusb
...
Bus 005 Device 003: ID 0bda:1a2b Realtek Semiconductor Corp. RTL8188GU 802.11n WLAN Adapter (Driver CDROM Mode)
...

(it seems that, with dmraid and its dependent scripts, device 0bda:1a2b (disk) is "usb_modeswitched"...). systemd-analyze time:

# systemd-analyze time
Startup finished in 11.297s (firmware) + 18.026s (loader) + 4.445s (kernel) + 2min 2.509s (initrd) + 2min 22.318s (userspace) = 4min 58.596s
graphical.target reached after 1min 36.608s in userspace

(Five minutes! It was not worth buying an SSD (NVMe) for this. Awful.)
Regarding performance: I entered the password in less than 1.5 seconds this time (stopwatch). Attached the new .svg - can you tell me if something could be better? I mean, it feels ok, but Windows 10 boots really way, way faster (but it also has no encryption).
Created attachment 1725517 [details] systemd plot nr. 2
systemd-cryptsetup@luks\x2df9f5a759\x2d80ce\x2d4010\x2db6bc\x2dacaad619f37e.service (13.069s)

I don't know why it's taking this long, but it's not dmraid related, so I don't think discussion in this bug report is going to get it the correct attention. I suggest raising the issue on fedora-devel@ for ideas on how to narrow it down. Maybe it's cryptsetup or systemd related - a systemd unit is responsible for it, but the actual problem could be in cryptsetup, or even udev. It's premature to file a bug against some unknown component before isolating the problem, and I think fedora-devel@ might help figure out how to isolate it and where it's happening.

My strong recommendation is to do a clean boot. Time it from the moment the boot entry is selected; with the stopwatch, note the first moment you see the plymouth cryptsetup passphrase entry UI, and the time it took to enter the passphrase and hit enter - that way these user observations can be matched up (closely enough) with the monotonic timestamps in the journal. Create an svg plot for this boot, and a matching 'journalctl -b -o short-monotonic --no-hostname > journal-slowboot.txt' to go with it.
(In reply to Chris Murphy from comment #50)
...
> I don't know why it's taking this long, but it's not dmraid related so I
> don't think discussion in this bug report is going to get it the correct
> attention...

You got the kernel (intentional: pun). It is in no way dmraid related, so I removed dmraid, and now my system prevents me from attaching a simple disk, which gets treated as a dm device (and when I invoke dmraid to enumerate devices, I receive 'zero devices'). Removing ...udev-settle... also does not help.

By the way, a suggestion to the maintainers: why not remove the anaconda-related packages at first boot, once they are not needed anymore?
(In reply to bugzilla from comment #44)
> its so wierd, i restarted now a couple of times and after the 3rd time or so
> its working now

That is expected; as I tried to explain before, when /lib/systemd/fedora-dmraid-activation sees the "no raid disks" error, it will disable the service. So on the first boot after erasing the firmware-RAID metadata it would still run and drag in systemd-udev-settle.service. If you are now no longer seeing systemd-udev-settle.service in the "systemd-analyze blame" output, then the dmraid issue is resolved, and any remaining boot slowness is caused by something else (and thus out of scope for this bug).