Bug 713224 - Hang during reboot - unable to umount partition
Summary: Hang during reboot - unable to umount partition
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: dracut
Version: 15
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: dracut-maint
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-14 18:05 UTC by Marcin Lewandowski
Modified: 2012-08-07 19:50 UTC
CC: 23 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 753339
Environment:
Last Closed: 2012-08-07 19:50:29 UTC
Type: ---
Embargoed:



Description Marcin Lewandowski 2011-06-14 18:05:16 UTC
Description of problem:
My Fedora system doesn't restart. The reboot sequence falls into an infinite loop when trying to umount the partition mounted at /home/ml054/mirror.

Version-Release number of selected component (if applicable):
Fedora 15. I originally installed Fedora 12, then upgraded to 13, 14, and now 15.

How reproducible:
It's hard to say; I think it's caused either by an error in systemd or by an incorrect configuration. However, when I run umount /home/ml054/mirror and then reboot, the restart completes correctly; there is no hang.
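
In other words, this sequence shuts down cleanly (same path as above):

umount /home/ml054/mirror
reboot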


Steps to Reproduce:
1. Just try to reboot my computer. 
  
Actual results:
Hang

Expected results:
Normal reboot

Additional info:

What additional information should I provide?

Comment 1 Chris 2011-06-18 06:51:35 UTC
I'm having similar issues related to my RAID array.  When I let systemd mount the RAID, it fails to unmount it, eventually timing out.  But when the shutdown sequence gets to "Unmounting file systems.", I then get "EXT4-fs (sda5): re-mounted, Opts: (null)", which is presumably what causes the hang.

Here's what appears to be the relevant information from the shutdown:
Stopping sandbox     [ OK ]
[  835.418833] systemd[1]: mnt-data.mount unmounting timed out. Stopping
[  925.419449] systemd[1]: mnt-data.mount unmounting timed out. Killing
[ 1015.428862] systemd[1]: mnt-data.mount mount process still around after SIGKILL. Ignoring.
[ 1015.420520] systemd[1]: Unit mnt-data.mount entered failed state.
[ 1015.444267] systemd[1]: Shutting down.
Sending SIGTERM to remaining processes...
Sending SIGKILL to remaining processes...
Unmounting file systems.
[ 1025.674507] EXT4-fs (sda5): re-mounted, Opts: (null)
Disabling swaps.
Detaching loop devices.
Detaching DM devices.
<computer hangs>

If I run:
> systemctl stop mnt-data.mount
> mount /dev/md126p1 /mnt/data
I get a normal shutdown, even though the drive is mounted.  It appears that systemd's mnt-data.mount unit is causing the hang, whereas the normal "Unmounting file systems" process (is it its own unit?) successfully unmounts my RAID.  I'm new to systemd though, so I don't know what code is running to even diagnose the problem.  "find /lib/systemd -name mnt-data.mount" doesn't return anything, presumably because the unit is created dynamically based on the contents of /etc/fstab.
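
One way to inspect such a runtime-generated unit (a sketch, assuming the systemctl verbs available in F15's systemd; there is no unit file on disk to find, since the unit is synthesized from /etc/fstab):

systemctl status mnt-data.mount    # current state and last result
systemctl show mnt-data.mount      # full property dump, including What= and Where=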

Reproducible: Always

Comment 2 Michael Class 2011-06-19 11:33:14 UTC
Hello,

I can report the same behaviour (the system hangs during reboot; I can get around it by pressing CTRL-ALT-DEL at the hang).

It is a fully patched FC15 system as of June 18th. The hang is caused by NFS mounts in /etc/fstab.

With the following fragment in /etc/fstab the issue occurs:

vdr:/medien/video   /video/vdr  nfs	bg,soft,intr	0	0	
vdr:/medien/audio   /audio	nfs	bg,soft,intr	0	0
vdr:/medien/pics    /pics	nfs	bg,soft,intr	0	0	
# for nfsv4
/video    /srv/nfsv4/video        none    bind    0       0
# end nfsv4

Without the NFS mounts everything is fine.

Cheers,
Michael

Comment 3 Michael Class 2011-06-19 11:46:14 UTC
Hello,

I actually checked something more:

It is just the following entry in /etc/fstab that causes the hang on reboot:

/video    /srv/nfsv4/video        none    bind    0       0


I see this behaviour 100% reproducibly on two different machines.
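
If the generated .mount unit itself is the problem, one untested workaround sketch (based on the observation in comment #1 that manually mounted filesystems unmount fine) would be to drop the entry from /etc/fstab, so systemd never creates a unit for it, and set the bind up from a boot script instead:

mount --bind /video /srv/nfsv4/video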

Cheers,
Michael

Comment 4 Michal Schmidt 2011-06-29 23:13:53 UTC
Everyone,
please attach your complete /etc/fstab to this bug.
Do you have lvm2-monitor.service active?

Comment 5 Michael Class 2011-06-30 08:08:24 UTC
As requested:

#
# /etc/fstab
# Created by anaconda on Thu Nov 27 06:19:51 2008
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or vol_id(8) for more info
#
UUID=a5424e10-fdcf-403c-bbcb-9cd12362dee4 /     ext4    defaults        1 1
UUID=fec821ba-e502-4875-8f69-56294209bceb /boot ext3    defaults        1 2
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
sysfs                   /sys                    sysfs   defaults        0 0
UUID=8659bd70-5346-4d2d-be84-123e7ee55959 swap  swap    defaults        0 0

# for nfsv4
/home    /srv/nfsv4/home	none	rw,bind	0	0	
/medien  /srv/nfsv4/medien	none	rw,bind	0	0
# end nfsv4
[michaelc@vdr ~]$ chkconfig --list | fgrep lvm

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration.

lvm2-monitor   	0:Aus	1:Ein	2:Ein	3:Ein	4:Ein	5:Ein	6:Aus


Cheers,
Michael

Comment 6 Michal Schmidt 2011-06-30 11:56:22 UTC
Michael,
in comment #2 you had some NFS mounts from "vdr:/...". Did you remove them?
/home and /medien are simply directories on the root filesystem? Not separate mounts?

> [michaelc@vdr ~]$ chkconfig --list | fgrep lvm

I'd rather see: systemctl status lvm2-monitor.service
Thanks!

Comment 7 Marcin Lewandowski 2011-06-30 12:20:31 UTC
I have the following configuration:


UUID=e6770ed7-bb20-4bb7-89b9-c2b40f04ddf8       /       ext3    defaults        1       1
/dev/md125p6    swap    swap    defaults        0       0
tmpfs   /dev/shm        tmpfs   defaults        0       0
devpts  /dev/pts        devpts  gid=5,mode=620  0       0
#devpts options modified by setup update to fix #515521 ugly way
sysfs   /sys    sysfs   defaults        0       0
proc    /proc   proc    defaults        0       0
#/dev/md126p1   /home/ml054/lustro      ext3    defaults        1       1
UUID=01a7fef3-8b87-4fb9-9ec8-211445d5b0b2       /home/ml054/lustro      ext3    defaults        1       1


and 


[ml054@raptor ~]$ systemctl status lvm2-monitor.service
lvm2-monitor.service - LSB: Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
          Loaded: loaded (/etc/rc.d/init.d/lvm2-monitor)
          Active: active (exited) since Thu, 30 Jun 2011 13:02:37 +0200; 1h 16min ago
         Process: 981 ExecStart=/etc/rc.d/init.d/lvm2-monitor start (code=exited, status=0/SUCCESS)
          CGroup: name=systemd:/system/lvm2-monitor.service


When I run umount /home/ml054/lustro before restarting, my computer reboots correctly. Otherwise it doesn't.

Comment 8 Michael Class 2011-06-30 12:31:54 UTC
Hello,


[michaelc@vdr ~]$ sudo systemctl status lvm2-monitor.service
lvm2-monitor.service - LSB: Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
	  Loaded: loaded (/etc/rc.d/init.d/lvm2-monitor)
	  Active: active (exited) since Thu, 23 Jun 2011 09:00:47 +0200; 1 weeks and 0 days ago
	  CGroup: name=systemd:/system/lvm2-monitor.service


About your comment on the NFS mounts: sorry, I grabbed the fstab from the other machine, which experiences the same behaviour. The culprit is the "bind" mounts.
The original machine is not online (and I am currently traveling ...)

Cheers,
Michael

Comment 9 Michal Schmidt 2011-06-30 12:38:24 UTC
(In reply to comment #7)
> UUID=01a7fef3-8b87-4fb9-9ec8-211445d5b0b2       /home/ml054/lustro      ext3   

Marcin,
could you describe your disk layout? I can see you have some md RAID arrays. Is the filesystem for /home/ml054/lustro also located on an md device? Are any of the md arrays monitored by mdmon?
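
For example, the following would show this (using /dev/md126 as an illustrative device name):

cat /proc/mdstat            # array topology and metadata format
mdadm --detail /dev/md126   # which disks/arrays back the filesystem's device
ps ax | grep mdmon          # whether mdmon is managing an external-metadata container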

Comment 10 Marcin Lewandowski 2011-06-30 13:06:57 UTC
[root@raptor ml054]# cat /proc/mdstat 
Personalities : [raid1] [raid0] 
md125 : active raid0 sda[1] sdb[0]
      629145600 blocks super external:/md127/0 128k chunks
      
md126 : active raid1 sda[1] sdb[0]
      173807616 blocks super external:/md127/1 [2/2] [UU]
      
md127 : inactive sdb[1](S) sda[0](S)
      4514 blocks super external:imsm
       
unused devices: <none>



[root@raptor ml054]# fdisk -l
Warning: invalid flag 0x0000 of partition table 5 will be corrected by w(rite)

Disk /dev/sda: 500.1 GB, 500106780160 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976771055 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x29711a93

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   409602047   204800000    7  HPFS/NTFS/exFAT
/dev/sda2   *   409602048   886032944   238215448+  83  Linux
/dev/sda4       886032945  1258291124   186129090    f  W95 Ext'd (LBA)

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md126: 178.0 GB, 177978998784 bytes
255 heads, 63 sectors/track, 21638 cylinders, total 347615232 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00006d0d

      Device Boot      Start         End      Blocks   Id  System
/dev/md126p1              63   347614469   173807203+  83  Linux

Disk /dev/md125: 644.2 GB, 644245094400 bytes
255 heads, 63 sectors/track, 78325 cylinders, total 1258291200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk identifier: 0x29711a93

      Device Boot      Start         End      Blocks   Id  System
/dev/md125p1            2048   409602047   204800000    7  HPFS/NTFS/exFAT
/dev/md125p2   *   409602048   886032944   238215448+  83  Linux
/dev/md125p4       886032945  1258291124   186129090    f  W95 Ext'd (LBA)
Partition 4 does not start on physical sector boundary.
/dev/md125p5       919608795  1258275059   169333132+   7  HPFS/NTFS/exFAT
Partition 5 does not start on physical sector boundary.
/dev/md125p6       886049073   919608794    16779861   82  Linux swap / Solaris
Partition 6 does not start on physical sector boundary.

Partition table entries are not in disk order

Comment 11 Chris 2011-07-01 04:08:01 UTC
I have one hard drive with a bunch of different partitions on it that I boot from, plus a 2-drive RAID mirror.

I also see an error from lvm2-monitor.service during shutdown:
Not stopping monitoring, this is a dangerous operation.  Please use force-stop to override.
systemd[1]: lvm2-monitor.service: control process exited, code=exited status=1
systemd[1]: Unit lvm2-monitor.service entered failed state.

Here's the info on fstab, disk layout, and the lvm2-monitor.service:

> systemctl status lvm2-monitor.service
lvm2-monitor.service - LSB: Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
          Loaded: loaded (/etc/rc.d/init.d/lvm2-monitor)
          Active: active (exited) since Thu, 30 Jun 2011 20:51:46 -0700; 4min 13s ago
         Process: 828 ExecStart=/etc/rc.d/init.d/lvm2-monitor start (code=exited, status=0/SUCCESS)
          CGroup: name=systemd:/system/lvm2-monitor.service

-------------

> cat /etc/fstab
UUID=d523dccd-94b1-4a84-bbb9-36edca9c712f /                       ext4    defaults        1 1
UUID=219e07b4-6406-48d3-b21a-380631a48c60 /boot                   ext3    defaults        1 2
UUID=2e6810f7-8cce-4392-8e46-4d59e1430302 /mnt/lfs                ext3    defaults        1 2
/dev/md126p1                              /mnt/data               ext3    defaults        1 2
UUID=89e8c5be-0c12-45d8-b094-aa68d04c7a94 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

---------------
fdisk -l gave the following warning:
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

so here's the output of parted -l:
> parted -l
Model: ATA WDC WD1200JB-00G (scsi)
Disk /dev/sda: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
 1      32.3kB  39.9GB  39.9GB  primary   ntfs            boot
 2      39.9GB  79.9GB  39.9GB  primary   hfs+
 3      79.9GB  80.0GB  107MB   primary   ext3
 4      80.0GB  120GB   40.0GB  extended                  lba
 5      80.0GB  104GB   24.1GB  logical   ext4
 6      104GB   118GB   13.8GB  logical   ext3
 7      118GB   120GB   2147MB  logical   linux-swap(v1)


Model: ATA ST3320620AS (scsi)
Disk /dev/sdb: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  320GB  320GB  primary  ext3         boot


Model: ATA ST3320620AS (scsi)
Disk /dev/sdc: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  320GB  320GB  primary  ext3


Error: /dev/md127: unrecognised disk label                                
Warning: Error fsyncing/closing /dev/md127: Input/output error            
Retry/Ignore? i                                                           

Model: Linux Software RAID Array (md)
Disk /dev/md126: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  320GB  320GB  primary  ext3         boot

Comment 12 Michal Schmidt 2011-07-01 11:15:53 UTC
(In reply to comment #10)
> md127 : inactive sdb[1](S) sda[0](S)
>       4514 blocks super external:imsm

I see. It's an array with external metadata (Intel Matrix Storage). This depends on the mdmon daemon. systemd may be doing something wrong in this case. I'll see if I can test it myself.
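
A quick way to confirm this on an affected box is the sysfs attribute that corresponds to the mdstat output above:

cat /sys/block/md126/md/metadata_version   # e.g. external:/md127/1 for an IMSM member

If that prints an external:... value, the array depends on mdmon surviving shutdown.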

(In reply to comment #11)
> I also see an error from lvm2-monitor.service during shutdown:
> Not stopping monitoring, this is a dangerous operation.  Please use force-stop
> to override.

Chris, you're seeing bug 681582.

Comment 13 Peter Bieringer 2011-07-03 19:22:53 UTC
I'm hit by the same problem, also using software RAID and having "mdmon md0" in the process table. It worked fine with F14, but upgrading to F15 made the system no longer usable as a desktop system; I always have to use the SysRq keys to get the box off :(

Comment 14 Peter Bieringer 2011-07-03 19:24:35 UTC
with "software RAID" I mean also the cheap Intel Matrix Storage.

Comment 15 Jason Harvey 2011-07-05 13:05:22 UTC
I have been having this problem too since upgrading from F14.
The system often does not halt cleanly; it hangs at unmounting filesystems and needs an Alt-SysRq-K etc. to shut down fully. The array was rarely shut down cleanly and was constantly rebuilding on the next boot.
A fresh F15 install onto a test machine with isw RAID also showed the same problems.

The current workaround for me is switching to dmraid and turning off mdraid. This required creating a new initramfs:

dracut -v -f -o mdraid -a dmraid initramfs-dmraid.img

and some changes to grub.conf for the new initramfs, plus these dracut option changes:

rd_DM_UUID=isw_ccchgfgdia_vol0 rd_NO_MDIMSM
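
Put together, the grub.conf entry looks roughly like this (kernel version, root device and array name are placeholders; use your own values):

title Fedora 15 (dmraid)
        root (hd0,1)
        kernel /vmlinuz-<version> ro root=UUID=<root-uuid> rd_NO_MDIMSM rd_DM_UUID=isw_ccchgfgdia_vol0
        initrd /initramfs-dmraid.img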


I preferred having mdraid control things; I'm happy to run tests to help get it back to working as it did in F14.

Comment 16 Peter Bieringer 2011-07-08 04:47:51 UTC
(In reply to comment #15)
 
> and some change to grub.conf for the new initramfs and dracut option changes
> 
> rd_DM_UUID=isw_ccchgfgdia_vol0 rd_NO_MDIMSM

is "isw_ccchgfgdia_vol0" a special token? Or was this only an exchange from

rd_MD_UUID=isw_ccchgfgdia_vol0

to

rd_DM_UUID=isw_ccchgfgdia_vol0


If so, then your hint caused, at least on my system, a damaged / filesystem. Before the filesystem crashed I saw that /dev/sdb1 was mounted as / and not a RAID device...

Comment 17 Jason Harvey 2011-07-08 08:03:27 UTC
isw_ccchgfgdia_vol0 is the name of the array in my system.
To find yours, use:
# dmraid -s

It's not a great solution: if you ever rebuild the array from within the BIOS, the name will change and boot will fail to find it. I imagine many people using isw arrays are dual-booting with Windows; to keep the name from being changed, resyncing in Windows is the safest option.

grub.conf should not have rd_NO_DM in it, nor any rd_MD_UUID=xxxx entries for isw arrays.

(Mandriva bugzilla https://qa.mandriva.com/show_bug.cgi?id=61857 is the same problem too.)

Comment 18 Stanislaw Czech 2011-07-09 14:01:18 UTC
I have the same problem:

[root@kitana ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda[1] sdb[0]
      58612736 blocks super external:/md0/0 [2/2] [UU]

md0 : inactive sdb[1](S) sda[0](S)
      4514 blocks super external:imsm

[root@kitana sysconfig]# fdisk -l

Disk /dev/sda: 60.0 GB, 60022480896 bytes
255 heads, 63 sectors/track, 7297 cylinders, total 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c7da

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   117225471    58099712   83  Linux

Disk /dev/sdb: 60.0 GB, 60022480896 bytes
255 heads, 63 sectors/track, 7297 cylinders, total 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c7da

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     1026047      512000   83  Linux
/dev/sdb2         1026048   117225471    58099712   83  Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x54019fd6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     1026047      512000   83  Linux
/dev/sdc2         1026048   103424295    51199124   83  Linux

Disk /dev/md127: 60.0 GB, 60019441664 bytes
2 heads, 4 sectors/track, 14653184 cylinders, total 117225472 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005c7da

      Device Boot      Start         End      Blocks   Id  System
/dev/md127p1   *        2048     1026047      512000   83  Linux
/dev/md127p2         1026048   117225471    58099712   83  Linux

[root@kitana ~]# cat /etc/fstab
UUID=e71ce8a5-baac-4289-acc6-f8076d40e34f /                       ext4    noatime,nodiratime,discard,errors=remount-ro        1 1
UUID=079475f8-33eb-47a8-873e-6aef75289779 /boot                   ext4    noatime,nodiratime,discard,errors=remount-ro        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0


Clean FC15 install...

Comment 19 Peter Bieringer 2011-09-11 16:22:48 UTC
Very strangely, "poweroff" works fine, but "reboot" hangs on "Unmounting file systems." - what's different in the scripts here?

Comment 20 Fedora Admin XMLRPC Client 2011-10-20 16:28:06 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 21 Gerhard 2011-11-03 23:20:48 UTC
Hey folks,

similar problems here (I have an Intel 82801 SATA RAID controller), but I also get them on poweroff.
I boot my system from an MD RAID, and on about 70% of all shutdowns or reboots it hangs while displaying "Unmounting file systems."
If this succeeds (the other 30%), all the detaching of swap, loop and DM devices works fine and it turns off.

If I can provide more info, let me know.

Thx x lot! :-)

Comment 22 john getsoian 2011-11-06 07:23:34 UTC
Another victim here. I've just installed a 'fake (Intel) RAID' on an ASUS p7p55 system. Only data is on the RAID, so I can manually umount the RAID partitions before shutdown and then get a clean exit, but that is the only way. I tried to put the umounts into a shutdown script, but no luck - I ran into timing issues, I guess.
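
For anyone retrying that approach, a unit-file sketch (hypothetical name and paths, reusing the mnt-data.mount example from comment #1; systemd stops units in reverse dependency order, so After=/Requires= on the mount unit makes the ExecStop= umount run before systemd tries to stop the mount itself):

# /etc/systemd/system/umount-raid-early.service
[Unit]
Description=Unmount RAID data partition early during shutdown
Requires=mnt-data.mount
After=mnt-data.mount

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/bin/umount /mnt/data

[Install]
WantedBy=multi-user.target

Enable it with "systemctl enable umount-raid-early.service". Untested here; it may run into the same timing issues described above.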

j getsoian

Comment 23 Jóhann B. Guðmundsson 2012-01-25 14:03:22 UTC
Is this still a problem or can this bug be closed?

Comment 24 Peter Bieringer 2012-01-25 20:43:04 UTC
Yes, this problem still exists (at least for me, on my IMSM RAID system) and the bug should not be closed. I'm still waiting for further hints or solutions... I apply updates all the time, but there has been no improvement so far.

Comment 25 Jes Sorensen 2012-01-25 20:49:35 UTC
Definitely shouldn't be closed.

We're working on the problem, but it is complex since it involves all
three of dracut/mdadm/systemd.

The changes needed for systemd are now in rawhide, I have posted patches
for mdadm, and once we have those agreed upon upstream I will push them
into rawhide too. Then we need dracut to use the new parameter introduced.

It will be a little while longer I am afraid, but the problem hasn't been
forgotten.

Jes

Comment 26 Jóhann B. Guðmundsson 2012-01-25 21:08:02 UTC
Are those patches something that will find their way down to F15/F16 in their relevant components, or should I move this bug against rawhide?

Comment 27 Jes Sorensen 2012-01-26 07:45:17 UTC
Johann,

The plan is to let it ripple down, but I want to make sure we get it right
first. I don't want to end up messing up people's RAIDs.

Cheers,
Jes

Comment 28 Michal Schmidt 2012-02-27 09:00:46 UTC
The fix is the full implementation of http://www.freedesktop.org/wiki/Software/systemd/RootStorageDaemons
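
Background: that interface has root storage daemons such as mdmon mark themselves by making argv[0] begin with '@', which tells systemd's final killing spree to spare them. Once fixed packages are installed, the mark should be visible in the daemon's cmdline (a sketch, assuming a single mdmon instance):

tr '\0' ' ' < /proc/$(pidof mdmon)/cmdline    # a participating daemon shows a leading '@'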

We have the systemd parts going to F15 updates-testing:
https://admin.fedoraproject.org/updates/systemd-26-17.fc15

We have the mdadm parts already in F15 updates-testing:
https://admin.fedoraproject.org/updates/FEDORA-2012-2376

We're missing the corresponding dracut update. Reassigning to dracut.

Comment 29 Fedora End Of Life 2012-08-07 19:50:33 UTC
This message is a notice that Fedora 15 is now at end of life. Fedora
has stopped maintaining and issuing updates for Fedora 15. It is
Fedora's policy to close all bug reports from releases that are no
longer maintained. At this time, all open bugs with a Fedora 'version'
of '15' have been closed as WONTFIX.

(Please note: Our normal process is to give advance warning of this
occurring, but we forgot to do that. A thousand apologies.)

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, feel free to reopen
this bug and simply change the 'version' to a later Fedora version.

Bug Reporter: Thank you for reporting this issue and we are sorry that
we were unable to fix it before Fedora 15 reached end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora, you are encouraged to click on
"Clone This Bug" (top right of this page) and open it against that
version of Fedora.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

The process we are following is described here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

