Bug 996475 - [F19] Systemd doesn't unmount all devices before calling reboot/halt and thus corrupts a clean RAID1
Summary: [F19] Systemd doesn't unmount all devices before calling reboot/halt and thus corrupts a clean RAID1
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 21
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Jes Sorensen
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-13 08:38 UTC by Joshua Covington
Modified: 2015-12-02 16:05 UTC
CC List: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-02 02:55:26 UTC
Type: Bug
Embargoed:


Attachments (Terms of Use)
dmesg (69.00 KB, text/plain) - 2013-08-13 19:13 UTC, Joshua Covington
dmesg when booting with systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M enforcing=0 (362.95 KB, text/plain) - 2013-09-04 20:59 UTC, Joshua Covington


Links
FreeDesktop.org 51220 (priority: None, status: None, last updated: Never)
Red Hat Bugzilla 753335 (priority: unspecified, status: CLOSED, summary: "mdadm starts resync on imsm raid even in Normal state", last updated: 2021-02-22 00:41:40 UTC)

Description Joshua Covington 2013-08-13 08:38:50 UTC
Description of problem:

This happens on fully updated f19-livecd

As stated in RHBZ #753335:
mdadm starts a resync on an IMSM RAID even though the BIOS reports a "Normal" state (this state was the result of a successful sync before the last poweroff).


Version-Release number of selected component (if applicable):
F19 fully updated from updates-testing as of 13.08.2013

I also recompiled systemd-206 from git, but the problem still persists. Testing with kernel 3.10.6 (recompiled for F17) and mdadm-3.2.6 (F19 and F17 ship the same version) on an F17 livecd didn't trigger this bug, so it's apparently related neither to the kernel nor to mdadm.


How reproducible:
Always

1. boot
2. let resync finish
3. poweroff
4. poweron
  
Actual results:
Resync starts again


Expected results:


Additional info:
See https://bugzilla.redhat.com/show_bug.cgi?id=753335

Comment 1 Jes Sorensen 2013-08-13 13:17:31 UTC
What do you mean by a fully updated livecd? Did you rebuild the CD image with
the latest packages, or did you do an install and then update that?

Please provide your versions of mdadm, systemd, dracut, selinux-policy
and dmesg output

Thanks,
Jes

Comment 2 Joshua Covington 2013-08-13 14:16:13 UTC
(In reply to Jes Sorensen from comment #1)
> What do you mean by a fully updated livecd? did you rebuild the cd image with
> the latest packages or did you do an install and update that?
> 
> Please provide your versions of mdadm, systemd, dracut, selinux-policy
> and dmesg output
> 
> Thanks,
> Jes

I rebuilt the livecd with the updates-testing repo enabled. The versions of the rpms are:
mdadm-3.2.6-19.fc19.x86_64
systemd-206-2.custom.fc19.x86_64 (I rebuilt this from git as of yesterday 2013-08-11, because I suspected it being at fault here)
selinux-policy-3.12.1-69.fc19.noarch

I'll post dmesg later today.

Comment 3 Jes Sorensen 2013-08-13 14:23:13 UTC
Does this happen with the release version of the livecd?

Jes

Comment 4 Jes Sorensen 2013-08-13 14:25:06 UTC
and more importantly, does it happen if you do a proper install?

Comment 5 Joshua Covington 2013-08-13 19:13:09 UTC
Created attachment 786253 [details]
dmesg

(In reply to Jes Sorensen from comment #3)
> Does this happen with the release version of the livecd?
> 
> Jes

It doesn't happen with the release version of the livecd. I see that dracut has been updated since the release of F19. I tried with the latest dracut-031-29.git20130812.fc19.x86_64 (recompiled for F19) and the problem still persists. I even removed the call to /usr/lib/dracut/dracut-initramfs-restore and it didn't help either.

I have to say that I'm using a non-persistent livecd, so nothing should be changed after shutting down the machine.

The dmesg (from the respin with dracut-031-29.git20130812.fc19.x86_64) is attached.

I didn't try installing F19 because I'm afraid this problem would still appear.

Comment 6 Jes Sorensen 2013-08-13 19:27:21 UTC
Well if it doesn't happen with the release version, it sounds unlikely to
be an mdadm problem - I haven't pushed updates to mdadm since Fedora 19
went live.

Harald any ideas if there are changes in dracut that could affect this?

Jes

Comment 7 Joshua Covington 2013-08-13 19:37:26 UTC
I don't suspect mdadm either. Is there a way to test this in a QEMU/VM environment, or some way to get a log during shutdown? As I said, this is a non-persistent, updated and respun F19.

Comment 8 Joshua Covington 2013-08-15 21:08:04 UTC
I tracked this down to the following systemd service (this is one of my custom services):

cat >> /etc/systemd/user/mnt-linux.mount  << MD126P1_EOF
#
#  Bind and mount the Fedora Partition on the RAID1 - /dev/sd[b-c] 
#  to /dev/md126p1
#  /dev/md126p1 -> /mnt/linux
#  mount -t ext4 /dev/md126p1 /mnt/linux

[Unit]
Description=Mount RAID1 /dev/md126p1 -> /mnt/linux
After=enumerate-mdadm-devices.service
After=multi-user.target
# skip mounting if the directory does not exist or is a symlink
#ConditionPathIsDirectory=/mnt/linux
ConditionPathIsSymbolicLink=!/mnt/linux

[Install]
WantedBy=default.target

[Mount]
What=/dev/md126p1
Where=/mnt/linux
Type=ext4
#Options=size=12G

MD126P1_EOF

root@localhost:# systemctl --no-reload enable /etc/systemd/user/mnt-linux.mount 2> /dev/null || :

Obviously this service doesn't get shut down, so the volume isn't unmounted on reboot/halt, which leads to the resync at the next start.

I think this is a systemd problem, because systemd should take care of unmounting all mounted volumes/devices after they've been resynced and before calling the shutdown/reboot/halt commands.

Reassigning to systemd (harald@xxx)

Comment 9 Harald Hoyer 2013-08-16 09:43:16 UTC
well, you forgot:

[Unit]
Conflicts=umount.target
Before=umount.target

Comment 10 Harald Hoyer 2013-08-16 09:45:39 UTC
Also, addressing the device via the kernel enumeration name is fragile.

I would rewrite:
What=/dev/md126p1

to point to a symbolic link in /dev/disk
What=/dev/disk/by-..../....

See also the mount units generated from fstab by the fstab-generator in /run/systemd/generator.
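
For example, a quick way to find a stable path on this particular setup (illustrative commands, not from the original comment; the actual UUID will of course differ):

# blkid /dev/md126p1
# ls -l /dev/disk/by-uuid/

and to compare with what the fstab-generator produces:

# ls /run/systemd/generator/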

Comment 11 Joshua Covington 2013-08-16 18:33:13 UTC
(In reply to Harald Hoyer from comment #9)
> well, you forgot:
> 
> [Unit]
> Conflicts=umount.target
> Before=umount.target

This didn't help. My mount unit now looks like this and it still corrupts the array on reboot:

#  Bind and mount the Fedora Partition on the RAID1 - /dev/sd[b-c] 
#  to /dev/md126p1
#  /dev/md126p1 -> /mnt/linux
#  mount -t ext4 /dev/md126p1 /mnt/linux
#
# see 'man systemd.unit' for more information about the options

[Unit]
Description=Mount RAID1 /dev/md126p1 -> /mnt/linux
After=enumerate-mdadm-devices.service
After=multi-user.target
Conflicts=umount.target
Before=umount.target
DefaultDependencies=yes
# skip mounting if the directory does not exist or is a symlink
#ConditionPathIsDirectory=/mnt/linux
ConditionPathIsSymbolicLink=!/mnt/linux

[Install]
WantedBy=default.target

[Mount]
What=/dev/disk/by-uuid/642d9948-e8da-47e2-a4ec-225aff13b46a
Where=/mnt/linux
Type=ext4
#Options=size=12G

Something is definitely wrong with the shutdown process. When I manually unmount the volume, I can see the plymouth graphic and everything is fine. With the above mount unit I can't even see plymouth, and as I already said, the RAID gets corrupted...

Any other ideas?

Comment 12 Harald Hoyer 2013-08-20 11:46:11 UTC
What does enumerate-mdadm-devices.service do?

Does this work?

[Unit]
Description=Mount RAID1 /dev/md126p1 -> /mnt/linux
Conflicts=umount.target
Before=umount.target
After=local-fs-pre.target
DefaultDependencies=no
ConditionPathIsSymbolicLink=!/mnt/linux

[Install]
WantedBy=default.target

[Mount]
What=/dev/disk/by-uuid/642d9948-e8da-47e2-a4ec-225aff13b46a
Where=/mnt/linux
Type=ext4


Can you attach the shutdown part of your journal?

# journalctl -a > journal.txt

Comment 13 Joshua Covington 2013-08-20 12:40:18 UTC
(In reply to Harald Hoyer from comment #12)
> What does enumerate-mdadm-devices.service do?

Nothing special; it goes through all devices in /dev/mdXXX and tries to figure out which one is the container. Then it disassembles and reassembles the RAID. I used this service in F17 and I never got an error. It works fine under F19 and has nothing to do with mounting the RAID.

> 
> Can you attach the shutdown part of your journal?
> 
> # journalctl -a > journal.txt

How do I get this? Since I'm using a non-persistent livecd image, I don't know if I can get this during shutdown. Can I pause the shutdown process and query the journal? How?

Comment 14 Harald Hoyer 2013-08-20 14:33:09 UTC
(In reply to Joshua Covington from comment #13)
> (In reply to Harald Hoyer from comment #12)
> > What does enumerate-mdadm-devices.service do?
> 
> Nothing special, it goes through all devices in /dev/mdXXX and try to figure
> out which one is the container. Then it deassembles and reassembles the
> raid. I used this service in F17 and I never got an error. It works fine
> under f19 and has nothing to do with mounthing the raid.
> 
> > 
> > Can you attach the shutdown part of your journal?
> > 
> > # journalctl -a > journal.txt
> 
> How to get this? Since I'm using a non-persistant livecd-image I don't know
> if I can get this during shutdown. Can I pause the shutdown process and
> query the journal? How?

Just do a normal reboot. Most of the shutdown should be in the log.

Comment 15 Joshua Covington 2013-08-20 14:59:03 UTC
(In reply to Harald Hoyer from comment #14)
> > > 
> > > Can you attach the shutdown part of your journal?
> > > 
> > > # journalctl -a > journal.txt
> > 
> > How to get this? Since I'm using a non-persistant livecd-image I don't know
> > if I can get this during shutdown. Can I pause the shutdown process and
> > query the journal? How?
> 
> just do a normal reboot. most of the shutdown should be in the log

It's a _non_-persistent LIVECD image. I can't just reboot it, because nothing is preserved afterwards.

Comment 16 Zbigniew Jędrzejewski-Szmek 2013-08-21 02:52:38 UTC
Try following http://freedesktop.org/wiki/Software/systemd/Debugging/#index2h1,
except instead of saving to the filesystem, maybe send the journal contents over the network or something.
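
One possible way to do that (an illustrative sketch; "other-host", "user" and the port are placeholders, and it assumes something like "nc -l -p 12345 > journal.txt" is listening on the other machine; the exact listen flags depend on the netcat variant):

# journalctl -a | nc other-host 12345

or, if ssh to another box works:

# journalctl -a | ssh user@other-host 'cat > journal.txt'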

Comment 17 Harald Hoyer 2013-08-21 10:45:28 UTC
One more trick:

# mkdir -p /run/initramfs/etc/cmdline.d
# echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf
# touch /run/initramfs/.need_shutdown

This should give you a late shell. Maybe you can mount something and copy over the journal output.

Comment 18 Joshua Covington 2013-08-21 19:55:12 UTC
(In reply to Harald Hoyer from comment #17)
> One more trick:
> 
> # mkdir -p /run/initramfs/etc/cmdline.d
> # echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf
> # touch /run/initramfs/.need_shutdown
> 
> This should give you a late shell. Maybe you can mount something and copy
> over the journal output.

I executed these but never got the late shell. I even tried booting with "selinux=0 debug" on GRUB's command line, without success. How do I get this shell?

Comment 19 Joshua Covington 2013-08-26 09:42:54 UTC
I can ssh into the machine. What is the fastest and easiest way to attach a serial console to it?

Comment 20 Joshua Covington 2013-09-03 08:25:33 UTC
Ping on this.

Comment 21 Harald Hoyer 2013-09-03 08:47:20 UTC
Ok, do again:
# mkdir -p /run/initramfs/etc/cmdline.d
# echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf
# touch /run/initramfs/.need_shutdown

And provide the output of all of these commands:

# systemctl start dracut-shutdown.service 
# systemctl status dracut-shutdown.service 
# journalctl -u dracut-shutdown.service 
# bash -x /usr/lib/dracut/dracut-initramfs-restore

Comment 22 Joshua Covington 2013-09-03 20:50:26 UTC
(In reply to Harald Hoyer from comment #21)
> Ok, do again:
> # mkdir -p /run/initramfs/etc/cmdline.d
> # echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf
> # touch /run/initramfs/.need_shutdown
> 
> And provide the output of all of these commands:
> 
> # systemctl start dracut-shutdown.service 
> # systemctl status dracut-shutdown.service 
> # journalctl -u dracut-shutdown.service 
> # bash -x /usr/lib/dracut/dracut-initramfs-restore

[root@localhost ~]# mkdir -p /run/initramfs/etc/cmdline.d

[root@localhost ~]# echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf

[root@localhost ~]# cat /run/initramfs/etc/cmdline.d/debug.conf
rd.break=pre-shutdown

[root@localhost ~]# touch /run/initramfs/.need_shutdown

[root@localhost ~]# systemctl start dracut-shutdown.service
Job for dracut-shutdown.service failed. See 'systemctl status dracut-shutdown.service' and 'journalctl -xn' for details.

[root@localhost ~]# systemctl status dracut-shutdown.service
dracut-shutdown.service - Restore /run/initramfs
   Loaded: loaded (/usr/lib/systemd/system/../../dracut/modules.d/98systemd/dracut-shutdown.service; static)
   Active: failed (Result: exit-code) since Tue 2013-09-03 22:46:09 CEST; 21s ago
     Docs: man:dracut-shutdown.service(8)
  Process: 2005 ExecStart=/usr/lib/dracut/dracut-initramfs-restore (code=exited, status=1/FAILURE)

Sep 03 22:46:09 localhost systemd[1]: Starting Restore /run/initramfs...
Sep 03 22:46:09 localhost systemd[1]: dracut-shutdown.service: main process exited, code=exited, status=1/FAILURE
Sep 03 22:46:09 localhost systemd[1]: Failed to start Restore /run/initramfs.
Sep 03 22:46:09 localhost systemd[1]: Unit dracut-shutdown.service entered failed state.

[root@localhost ~]# journalctl -u dracut-shutdown.service
-- Logs begin at Tue 2013-09-03 22:42:47 CEST, end at Tue 2013-09-03 22:46:09 CEST. --
Sep 03 22:45:39 localhost systemd[1]: Starting Restore /run/initramfs...
Sep 03 22:45:39 localhost systemd[1]: dracut-shutdown.service: main process exited, code=exited, status=1/FAILURE
Sep 03 22:45:39 localhost systemd[1]: Failed to start Restore /run/initramfs.
Sep 03 22:45:39 localhost systemd[1]: Unit dracut-shutdown.service entered failed state.
Sep 03 22:45:56 localhost systemd[1]: Starting Restore /run/initramfs...
Sep 03 22:45:56 localhost systemd[1]: dracut-shutdown.service: main process exited, code=exited, status=1/FAILURE
Sep 03 22:45:56 localhost systemd[1]: Failed to start Restore /run/initramfs.
Sep 03 22:45:56 localhost systemd[1]: Unit dracut-shutdown.service entered failed state.
Sep 03 22:46:09 localhost systemd[1]: Starting Restore /run/initramfs...
Sep 03 22:46:09 localhost systemd[1]: dracut-shutdown.service: main process exited, code=exited, status=1/FAILURE
Sep 03 22:46:09 localhost systemd[1]: Failed to start Restore /run/initramfs.
Sep 03 22:46:09 localhost systemd[1]: Unit dracut-shutdown.service entered failed state.

[root@localhost ~]# bash -x /usr/lib/dracut/dracut-initramfs-restore
+ set -e
++ uname -r
+ KERNEL_VERSION=3.10.9-500.fc19.x86_64
+ [[ -f /etc/machine-id ]]
+ read MACHINE_ID
+ [[ -n 8a429b36eb3b4cb592e05f136922d380 ]]
+ [[ -d /boot/8a429b36eb3b4cb592e05f136922d380 ]]
+ [[ -L /boot/8a429b36eb3b4cb592e05f136922d380 ]]
+ [[ -f '' ]]
+ IMG=/boot/initramfs-3.10.9-500.fc19.x86_64.img
+ cd /run/initramfs
+ '[' -f .need_shutdown -a -f /boot/initramfs-3.10.9-500.fc19.x86_64.img ']'
+ exit 1
[root@localhost ~]#

Comment 23 Harald Hoyer 2013-09-04 06:24:37 UTC
So, I guess /boot/initramfs-3.10.9-500.fc19.x86_64.img does not exist, because this is a LiveCD...

So, this might work:

# ln -s /run/initramfs/live/isolinux/initrd0.img /boot/initramfs-$(uname -r).img
# mkdir -p /run/initramfs/etc/cmdline.d
# echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf
# touch /run/initramfs/.need_shutdown

Comment 24 Joshua Covington 2013-09-04 07:32:48 UTC
(In reply to Harald Hoyer from comment #23)
> So, I guess /boot/initramfs-3.10.9-500.fc19.x86_64.img does not exist,
> because this is a LiveCD...
> 

Yes, you're right

> So, this might work:
> 
> # ln -s /run/initramfs/live/isolinux/initrd0.img /boot/initramfs-$(uname
> -r).img
> # mkdir -p /run/initramfs/etc/cmdline.d
> # echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf
> # touch /run/initramfs/.need_shutdown

I'll test this later today when I get home and post the results.

Comment 25 Joshua Covington 2013-09-04 18:38:35 UTC
(In reply to Harald Hoyer from comment #23)
> So, I guess /boot/initramfs-3.10.9-500.fc19.x86_64.img does not exist,
> because this is a LiveCD...
> 
> So, this might work:
> 
> # ln -s /run/initramfs/live/isolinux/initrd0.img /boot/initramfs-$(uname
> -r).img
> # mkdir -p /run/initramfs/etc/cmdline.d
> # echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf
> # touch /run/initramfs/.need_shutdown

This is the output:

[root@localhost ~]# ln -s /run/initramfs/live/syslinux/initrd0.img /boot/initramfs-$(uname -r).img

[root@localhost ~]# dir /boot/
total 8580
dr-xr-xr-x.  5 root root    4096 Sep  4 20:35 .
drwxr-xr-x. 18 root root    4096 Sep  4  2013 ..
drwxr-xr-x.  4 root root    4096 Aug 25 00:47 efi
drwxr-xr-x.  2 root root    4096 Aug 25 00:45 extlinux
drwxr-xr-x.  3 root root    4096 May  9  2012 grub2
-rw-r--r--.  1 root root  128854 Aug 24 23:36 config-3.10.9-500.fc19.x86_64
-rw-r--r--.  1 root root  178176 Feb 16  2013 elf-memtest86+-4.20
lrwxrwxrwx.  1 root root      40 Sep  4 20:35 initramfs-3.10.9-500.fc19.x86_64.img -> /run/initramfs/live/syslinux/initrd0.img
-rw-r--r--.  1 root root  557870 Aug 25 00:47 initrd-plymouth.img
-rw-r--r--.  1 root root  176500 Feb 16  2013 memtest86+-4.20
-rw-------.  1 root root 2634385 Aug 24 23:36 System.map-3.10.9-500.fc19.x86_64
-rwxr-xr-x.  1 root root 5068496 Aug 24 23:36 vmlinuz-3.10.9-500.fc19.x86_64
-rw-r--r--.  1 root root     167 Aug 24 23:36 .vmlinuz-3.10.9-500.fc19.x86_64.hmac

[root@localhost ~]# echo "rd.break=pre-shutdown" > /run/initramfs/etc/cmdline.d/debug.conf

[root@localhost ~]# cat /run/initramfs/etc/cmdline.d/debug.conf
rd.break=pre-shutdown

[root@localhost ~]# touch /run/initramfs/.need_shutdown

[root@localhost ~]# systemctl start dracut-shutdown.service

[root@localhost ~]# systemctl status dracut-shutdown.service
dracut-shutdown.service - Restore /run/initramfs
   Loaded: loaded (/usr/lib/systemd/system/../../dracut/modules.d/98systemd/dracut-shutdown.service; static)
   Active: active (exited) since Wed 2013-09-04 20:35:37 CEST; 6s ago
     Docs: man:dracut-shutdown.service(8)
  Process: 2062 ExecStart=/usr/lib/dracut/dracut-initramfs-restore (code=exited, status=0/SUCCESS)

Sep 04 20:35:37 localhost systemd[1]: Started Restore /run/initramfs.

[root@localhost ~]# journalctl -u dracut-shutdown.service
-- Logs begin at Wed 2013-09-04 20:28:45 CEST, end at Wed 2013-09-04 20:35:37 CEST. --
Sep 04 20:35:36 localhost systemd[1]: Starting Restore /run/initramfs...
Sep 04 20:35:37 localhost systemd[1]: Started Restore /run/initramfs.

[root@localhost ~]# bash -x /usr/lib/dracut/dracut-initramfs-restore
+ set -e
++ uname -r
+ KERNEL_VERSION=3.10.9-500.fc19.x86_64
+ [[ -f /etc/machine-id ]]
+ read MACHINE_ID
+ [[ -n 8a429b36eb3b4cb592e05f136922d380 ]]
+ [[ -d /boot/8a429b36eb3b4cb592e05f136922d380 ]]
+ [[ -L /boot/8a429b36eb3b4cb592e05f136922d380 ]]
+ [[ -f '' ]]
+ IMG=/boot/initramfs-3.10.9-500.fc19.x86_64.img
+ cd /run/initramfs
+ '[' -f .need_shutdown -a -f /boot/initramfs-3.10.9-500.fc19.x86_64.img ']'
+ exit 1

[root@localhost ~]#

Comment 26 Joshua Covington 2013-09-04 20:59:22 UTC
Created attachment 793837 [details]
dmesg when booting with systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M enforcing=0

I booted the system with "systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M enforcing=0" and, after executing your commands, got the late shell. The attached file is the output from it. This time the PC didn't start to resync the RAID after I copied the log and forcibly powered it off. Maybe the missing piece was/is the absence of initramfs-3.10.9-500.fc19.x86_64.img?

Comment 27 Joshua Covington 2013-09-04 21:07:28 UTC
I can confirm that executing "ln -s /run/initramfs/live/syslinux/initrd0.img /boot/initramfs-$(uname -r).img" before reboot/halt/poweroff fixes the problem. The raid doesn't start to resync on the next reboot. Now dracut should be taught about this.
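
Until that happens, the manual step could in principle be automated on a live image with a small oneshot unit along these lines (an untested sketch based on the command above; the unit name, the syslinux path and the install target are assumptions, not anything shipped by dracut):

[Unit]
# Illustrative workaround only, not part of any package
Description=Symlink the live initramfs so dracut-shutdown can restore /run/initramfs
ConditionPathExists=/run/initramfs/live/syslinux/initrd0.img

[Service]
Type=oneshot
RemainAfterExit=yes
# "$$" is systemd's escape for a literal "$", so the shell actually sees $(uname -r)
ExecStart=/bin/sh -c 'ln -sf /run/initramfs/live/syslinux/initrd0.img /boot/initramfs-$$(uname -r).img'

[Install]
WantedBy=multi-user.target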

Comment 28 Harald Hoyer 2013-09-05 12:07:40 UTC
(In reply to Joshua Covington from comment #27)
> I can confirm that executing "ln -s /run/initramfs/live/syslinux/initrd0.img
> /boot/initramfs-$(uname -r).img" before reboot/halt/poweroff fixes the
> problem. The raid doesn't start to resync on the next reboot. Now dracut
> should be taught about this.

Well, well... I will reassign this to mdadm.

The md raid is _not_ part of the root filesystem, so dracut is not responsible for it.

It just so happens that dracut has code in its shutdown routine which shuts down all RAID arrays and waits for them to be clean.

For non-root disks this should be part of an mdadm service.

You can boot a system without an initramfs, and even if you do boot with one, dracut will not take care of a RAID that is not part of the root filesystem unless some other part of the root filesystem setup needs the dracut shutdown path.

So, I would add an mdraid-shutdown.service with something like this:

[Unit]
Description=Stop MD RAID arrays
DefaultDependencies=no
After=shutdown.target
Before=final.target

[Service]
Type=oneshot
ExecStart=-/usr/sbin/mdadm --wait-clean --scan
ExecStart=-/usr/sbin/mdadm --stop --scan
RemainAfterExit=yes
TimeoutSec=0

[Install]
WantedBy=final.target


A problem with this might be that it tries to shut down an MD RAID that the root filesystem lives on.
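
To check whether that caveat applies on a given machine before enabling such a unit, something like the following can be used (illustrative; if an mdXXX device shows up as an ancestor of the root filesystem's source device, the root filesystem lives on MD RAID):

# findmnt -n -o SOURCE /
# lsblk -s $(findmnt -n -o SOURCE /)
# cat /proc/mdstat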

Comment 29 Joshua Covington 2013-09-05 20:18:19 UTC
(In reply to Harald Hoyer from comment #28)
> (In reply to Joshua Covington from comment #27)
> > I can confirm that executing "ln -s /run/initramfs/live/syslinux/initrd0.img
> > /boot/initramfs-$(uname -r).img" before reboot/halt/poweroff fixes the
> > problem. The raid doesn't start to resync on the next reboot. Now dracut
> > should be taught about this.
> 
> Well, well... I will reassign this to mdadm.
> 
> The md raid is _not_ part of the root filesystem, so dracut is not
> responsible for it.
> 
> It just so happens, that dracut has code in the shutdown routine, which
> shuts down all raid arrays and waits for them to be clean.
> 
> For non-root disks this should be part of an mdadm service.
> 
> You can boot a system without an initramfs, and even if you boot with an
> initramfs, and the raid is not part of the root filesystem, dracut will not
> take care of it, if no other part of the root filesystem needs the dracut
> shutdown.

What about if I mount a single SSD/HDD somewhere that is not part of the root filesystem? Who should take care of unmounting it? Will dracut do this?

Should systemd take care of it, since I mount this with a mount unit, so the unit should somehow be unmounted before shutting down the PC? If dracut doesn't unmount my partition, then I'll get a corrupted filesystem.

I think systemd should take care of all mount units it mounts so that they are properly unmounted at the end, am I right?

Comment 30 Doug Ledford 2013-09-05 20:24:41 UTC
systemd normally unmounts stuff, all except for the rootfs.  I think what's needed is to hook into systemd's unmount unit with some additional commands similar to what Harald listed in comment #28, and it should be run on the specific device being unmounted, not on all devices.  I have no idea how to make that happen though, as each filesystem is a udev dbus generated unit I think, and so there is no unmount unit that can be edited.

Comment 31 zadeluca 2013-12-02 21:50:56 UTC
I posted in Bug #753335, but as I begin to understand this better, I think this is more relevant to my situation.

I see the exact same behavior as Joshua, except I am using an installed Fedora 19, not the livecd.

Some info:


cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdb[1] sdc[0]
      1953511424 blocks super external:/md127/0 [2/2] [UU]
      [===>.................]  resync = 19.3% (378663936/1953511424) finish=193.6min speed=135512K/sec
      
md127 : inactive sdb[1](S) sdc[0](S)
      6306 blocks super external:imsm
       
unused devices: <none>


/dev/md126 (/dev/md/Volume0) uses all of sdb and sdc (no partitions) and is mounted by UUID to /data
cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Wed Nov 13 18:18:59 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c0b84d45-4e88-4608-b41c-be40bd48e124 /        ext4    defaults        1 1
UUID=09277dca-a225-4e7e-909c-3ec62e61eac6 /boot    ext4    defaults        1 2
UUID=16527045-c93b-4697-bb35-fd6762bba035 swap     swap    defaults        0 0
UUID=a4c9104f-dd82-4686-9284-f7b9e42f4763 /data    ext4    defaults        0 2


Since (according to Harald) this pertains to non-root filesystems, I would not call the severity high, because (once you understand what is going on) you can work around the issue by simply unmounting all non-root RAID filesystems manually before reboot/shutdown. Obviously this is not a solution, but it works for me in the interim. If there is anything else I can contribute to get this fixed, please let me know.
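
Concretely, the interim workaround amounts to something like this (assuming /data is the only RAID-backed mount, and borrowing the --wait-clean step from comment 28):

# umount /data
# mdadm --wait-clean --scan
# systemctl poweroff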

Thanks,
Zach

Comment 32 Fedora End Of Life 2015-01-09 19:26:05 UTC
This message is a notice that Fedora 19 is now at end of life. Fedora 
has stopped maintaining and issuing updates for Fedora 19. It is 
Fedora's policy to close all bug reports from releases that are no 
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 19 reached end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 33 Spenk 2015-02-07 21:43:28 UTC
My RAID, booting into FC 21, has kept degrading over the last few months as a result of failed unmounts. So I guess this problem is still very much alive.

As a matter of fact, yesterday I found the RAID was not only degraded, but the mirrored disks were not recognized as being in a RAID set at all. GRUB wouldn't boot.

I started a rescue installation from a separate disk and gnome-disks shows only free space where my btrfs partition used to be. So much for RAID data security.

Comment 34 Fedora End Of Life 2015-11-04 13:44:26 UTC
This message is a reminder that Fedora 21 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 21. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '21'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 21 reached end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 35 Fedora End Of Life 2015-12-02 02:55:33 UTC
Fedora 21 changed to end-of-life (EOL) status on 2015-12-01. Fedora 21 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

