Description of problem:
Boot process hangs in a loop with:

  (1 of 3) A start job is running for dev_mapper_vg01_mm\xzdlv01_usr.local.device
  (2 of 3) ditto ...05_opt.device
  (3 of 3) ditto ...03_home.device
  (1 of 3) ...

The only way to get out of that is Ctrl-Alt-Del, after which the shutdown
process hangs in a loop with:

  (1 of 9) A stop job...
  ...
  (4 of 9) ... for LVMPV1aivCz-qiNs-ikLy-vA1q-ToYM-1Ycp-Oa81Db on /dev/dm-1
  ...

The only way out of that is the power switch. Needing the power switch to
shut the system down seems to me to indicate a serious problem, so I've set
severity to high.

Version-Release number of selected component (if applicable):
systemd-208-14.fc20.x86_64

How reproducible:
Apparently random, but recurring since the upgrade from F18 to F20.

Steps to Reproduce:
1. Upgrade from F18 to F20.
2. Boot F20.
3.

Actual results:
Hung system, with the power switch being the only way to un-hang it.

Expected results:
A system that doesn't hang at boot or shutdown.

Additional info:
Some way of escaping/cancelling start jobs and stop jobs is urgently needed.
Created attachment 909873 [details]
Script to gather lvm and systemd info

Could you please use the attached script to gather the info? It calls
lvmdump, lsblk and systemctl/journalctl to collect information we can use
for further debugging. It would be best if you could run it on the next
boot after the one that failed, then attach the archive file the script
creates to this report. Thanks.
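For reference, a rough manual equivalent of such an info-gathering run might look like the sketch below. This is an assumption about what the attached script collects, not its actual contents; `journalctl -b -1` (journal of the previous boot) needs systemd >= 206, which F20's systemd 208 satisfies.

```shell
# Hypothetical sketch of the kind of data the attached script gathers.
out=/tmp/lvm-debug-$(date +%s)
mkdir -p "$out"

# Block-device layout: names, types, filesystems, mount points.
if command -v lsblk >/dev/null; then
    lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT > "$out/lsblk.txt" 2>&1 || true
fi

# Journal of the previous (failed) boot.
if command -v journalctl >/dev/null; then
    journalctl -b -1 > "$out/journal-prev-boot.txt" 2>&1 || true
fi

# Any start/stop jobs still queued or running in the current boot.
if command -v systemctl >/dev/null; then
    systemctl list-jobs > "$out/jobs.txt" 2>&1 || true
fi

echo "collected into $out"
```

The collected directory can then be tarred up and attached to the report.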
Does adding ' rd.lvm.lv=vg01/lv01_usr' (or whatever the VG and LV names are) to boot options help?
(In reply to Peter Rajnoha from comment #1)
> Created attachment 909873 [details]
> Script to gather lvm and systemd info
>
> Could you please use attached script to gather the info - it's calling
> lvmdump, lsblk and systemctl/journalctl to gather info we can use for
> further debugging. The best would be if you could run this the next boot
> after the one that failed.
> Then attach the archive file created by the script here in this report.
> Thanks.

Just for clarity: do you want that run manually under gdm once the system
has booted and I've logged into my GNOME user account, or kicked off
somehow during the boot process?
(In reply to Marian Csontos from comment #2)
> Does adding ' rd.lvm.lv=vg01/lv01_usr' (or whatever the VG and LV names are)
> to boot options help?

So that would be in /etc/default/grub here:

GRUB_CMDLINE_LINUX="rd.lvm.lv=vg01_mm/lv04_swap rd.md=0 rd.dm=0 rd.lvm.lv=vg01_mm/lv02_root rd.luks.uuid=luks-dfa915d4-6656-45f0-b771-595f2456fc39 $([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) vconsole.keymap=us rd.luks.uuid=luks-81b063f2-7753-4006-b0af-30ae9d472ccd rhgb quiet"

?

Just looking at that, these are already there:
rd.lvm.lv=vg01_mm/lv02_root
rd.lvm.lv=vg01_mm/lv04_swap

So additionally including:
rd.lvm.lv=vg01_mm/lv01_usr.local
rd.lvm.lv=vg01_mm/lv03_home
rd.lvm.lv=vg01_mm/lv05_opt

might help?

I'll try Peter's suggestions first, otherwise I might end up chasing my
tail. This is pretty intermittent, so it might be a while before I get
another 'hit'.
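If it comes to editing the boot options, the change could look something like the sketch below. This is hypothetical and demonstrated on a sample copy rather than the live file; on the real system the target would be /etc/default/grub, followed by regenerating the config with `grub2-mkconfig -o /boot/grub2/grub.cfg` as root.

```shell
# Hypothetical sketch: splice the three extra rd.lvm.lv= options into the
# GRUB_CMDLINE_LINUX line. A shortened sample stands in for the real
# /etc/default/grub so nothing is touched until the edit looks right.
printf 'GRUB_CMDLINE_LINUX="rd.lvm.lv=vg01_mm/lv02_root rhgb quiet"\n' > /tmp/grub.sample

extra='rd.lvm.lv=vg01_mm/lv01_usr.local rd.lvm.lv=vg01_mm/lv03_home rd.lvm.lv=vg01_mm/lv05_opt'

# Prepend the new options just inside the opening quote of the line.
sed -i "s|^GRUB_CMDLINE_LINUX=\"|GRUB_CMDLINE_LINUX=\"$extra |" /tmp/grub.sample

cat /tmp/grub.sample
```

After applying the same sed to the real /etc/default/grub and running grub2-mkconfig, the next boot should activate all five LVs from the initramfs instead of waiting on the three .device units.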
(In reply to morgan read from comment #4)
> (In reply to Marian Csontos from comment #2)
> > Does adding ' rd.lvm.lv=vg01/lv01_usr' (or whatever the VG and LV names are)
> > to boot options help?
>
> So that would be in /etc/default/grub here:
>
> GRUB_CMDLINE_LINUX="rd.lvm.lv=vg01_mm/lv04_swap rd.md=0 rd.dm=0
> rd.lvm.lv=vg01_mm/lv02_root
> rd.luks.uuid=luks-dfa915d4-6656-45f0-b771-595f2456fc39 $([ -x
> /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :)
> vconsole.keymap=us rd.luks.uuid=luks-81b063f2-7753-4006-b0af-30ae9d472ccd
> rhgb quiet"
>
> ?
>
> Just looking at that, already there are:
> rd.lvm.lv=vg01_mm/lv02_root
> rd.lvm.lv=vg01_mm/lv04_swap
>
> So, including additionally:
> rd.lvm.lv=vg01_mm/lv01_usr.local
> rd.lvm.lv=vg01_mm/lv03_home
> rd.lvm.lv=vg01_mm/lv05_opt
>
> Might help?

Yes, it might.

Now I see you are using LUKS. What exactly is encrypted? LVs? PVs? We
really need the output of the scripts and commands Peter and Bryn asked
for.

(In reply to morgan read from comment #3)
> (In reply to Peter Rajnoha from comment #1)
> > Created attachment 909873 [details]
> > Script to gather lvm and systemd info
> >
> > Could you please use attached script to gather the info - it's calling
> > lvmdump, lsblk and systemctl/journalctl to gather info we can use for
> > further debugging. The best would be if you could run this the next boot
> > after the one that failed.
> > Then attach the archive file created by the script here in this report.
> > Thanks.
>
> Just for clarity - you want that run manually under gdm once the system has
> booted and I've logged into user gnome account - or kicked off somehow
> during the boot process?

The next boot after the failed one. But running them NOW would also help,
so that in the meantime we can see what the layout is and maybe reproduce
the problem.
(In reply to Marian Csontos from comment #5)
> > Just for clarity - you want that run manually under gdm once the system has
> > booted and I've logged into user gnome account - or kicked off somehow
> > during the boot process?
>
> The next boot after the failed one. But running them NOW would help in
> meantime seeing what the layout is and may be reproduce the problem.

Yes, the boot after the failed one - the script I attached grabs the logs
from the previous (failed) boot as well as the current one. We can then
compare them.
This hasn't troubled me in the last 10 months, so I think it can safely be
closed unless someone thinks otherwise. Start and stop jobs always seem to
resolve themselves. Thanks for listening.