Bug 912735 - System with Intel firmware RAID-1 does not mount /home on boot (udev/systemd race with mdadm issue)
Summary: System with Intel firmware RAID-1 does not mount /home on boot (udev/systemd race with mdadm issue)
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 23
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: udev-maint
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 1203049, 1281535
Depends On:
Blocks: 1330497
 
Reported: 2013-02-19 14:39 UTC by Doug Ledford
Modified: 2016-12-20 12:35 UTC
CC List: 35 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 879327
Clones: 1203049, 1208680, 1330497
Environment:
Last Closed: 2016-12-20 12:35:30 UTC
Type: Bug
Embargoed:


Attachments
dracut log /run/initramfs/rdsosreport (83.19 KB, text/plain)
2015-07-19 22:08 UTC, Alessandro Selli
boot error (3.73 MB, image/jpeg)
2016-01-16 09:30 UTC, Simone Caronni

Description Doug Ledford 2013-02-19 14:39:49 UTC
--- Additional comment from Tony Marchese on 2013-02-19 09:26:41 EST ---

I am now running:

mdadm-3.2.6-14.fc18.x86_64
dracut-024-25.git20130205.fc18.x86_64
kernel-3.7.8-202.fc18.x86_64

I overlooked that mdadm-3.2.6-14.fc18.x86_64 and dracut-024-25.git20130205.fc18.x86_64 were only available through the updates-testing repo. After installing them I rebooted several times, running dracut -f and issuing the command mdmon --all --takeover --offroot in the pre-mount shell invoked through the boot parameter rd.break=pre-mount.
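
In concrete terms, the sequence each time was roughly:

dracut -f                          # regenerate the initramfs on the installed system
# reboot with rd.break=pre-mount added to the kernel command line,
# then, in the dracut pre-mount shell:
mdmon --all --takeover --offroot   # let the initramfs mdmon take over the IMSM container
exit                               # leaving the shell resumes the normal boot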

Here is my fstab:


# /etc/fstab
# Created by anaconda on Tue Feb 12 19:33:07 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=7afe1956-b93a-4bdf-bb5b-83f0ae011a83 /                       ext4    defaults        1 1
UUID=db1a19a4-8db3-4d12-be7c-bcc28e3ce471 /boot                   ext4    defaults        1 2
UUID=3393924f-fdd0-4150-8e23-55f8fd679f1e swap                    swap    defaults        0 0
UUID=7f6f71c8-784e-4ec7-bdc0-11a48b6fa9e7 /home 		  ext4	  defaults,nofail 0 2

# mdadm -D /dev/md126
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid1
     Array Size : 1953511424 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511556 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2

          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0


           UUID : 6862ff21:72014ea5:67fa1e10:f1d2a26b
    Number   Major   Minor   RaidDevice State
       1       8       16        0      active sync   /dev/sdb
       0       8       32        1      active sync   /dev/sdc

# mdadm -D /dev/md127
/dev/md127:
        Version : imsm
     Raid Level : container
  Total Devices : 2

Working Devices : 2


           UUID : bd9c2866:4fbc5b7b:3ba8e429:d291d6d7
  Member Arrays : /dev/md/Volume0_0

    Number   Major   Minor   RaidDevice

       0       8       16        -        /dev/sdb
       1       8       32        -        /dev/sdc


The behaviour is that the system boots (the nofail in fstab helps), but the raid-1 volume is not mounted. Below is an extract from my journalctl -xb:

...skipping...
feb 19 15:00:50 tonyhome kernel: md/raid1:md126: active with 2 out of 2 mirrors
feb 19 15:00:50 tonyhome kernel: md126: detected capacity change from 0 to 2000395698176
feb 19 15:00:50 tonyhome kernel:  md126: unknown partition table
feb 19 15:00:50 tonyhome kernel: asix 2-5.3:1.0 eth0: register 'asix' at usb-0000:00:1d.7-5.3, ASIX AX88772 USB 2.0 Ethernet, 00:50:b6:54:89:0c
feb 19 15:00:50 tonyhome kernel: usbcore: registered new interface driver asix
feb 19 15:00:50 tonyhome kernel: Adding 14336916k swap on /dev/sda3.  Priority:-1 extents:1 across:14336916k SS
feb 19 15:00:50 tonyhome systemd-fsck[593]: /dev/sda1: clean, 375/128016 files, 165808/512000 blocks
feb 19 15:00:50 tonyhome systemd-fsck[599]: /dev/md126 is in use.
feb 19 15:00:50 tonyhome systemd-fsck[599]: e2fsck: Cannot continue, aborting.
feb 19 15:00:50 tonyhome systemd-fsck[599]: fsck failed with error code 8.
feb 19 15:00:50 tonyhome systemd-fsck[599]: Ignoring error.
feb 19 15:00:50 tonyhome mount[606]: mount: /dev/md126 is already mounted or /home busy
feb 19 15:00:50 tonyhome kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
feb 19 15:00:50 tonyhome kernel: SELinux: initialized (dev sda1, type ext4), uses xattr
feb 19 15:00:50 tonyhome kernel: md: export_rdev(sdc)
feb 19 15:00:50 tonyhome kernel: md: export_rdev(sdb)
feb 19 15:00:50 tonyhome kernel: md: md126 switched to read-write mode.
feb 19 15:00:51 tonyhome kernel: input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:03.0/0000:03:00.1/sound/card2/input17
feb 19 15:00:51 tonyhome kernel: input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:03.0/0000:03:00.1/sound/card2/input18
feb 19 15:00:51 tonyhome kernel: input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:03.0/0000:03:00.1/sound/card2/input19
feb 19 15:00:51 tonyhome kernel: input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:03.0/0000:03:00.1/sound/card2/input20
feb 19 15:00:51 tonyhome fedora-storage-init[627]: Setting up Logical Volume Management:   No volume groups found
feb 19 15:00:51 tonyhome fedora-storage-init[627]: [  OK  ]
feb 19 15:00:51 tonyhome fedora-storage-init[635]: Setting up Logical Volume Management:   No volume groups found
feb 19 15:00:51 tonyhome fedora-storage-init[635]: [  OK  ]
feb 19 15:00:51 tonyhome lvm[642]: No volume groups found
feb 19 15:00:51 tonyhome auditd[645]: Started dispatcher: /sbin/audispd pid: 648
...skipping...

Afterwards I can log in to the system as root and manually run mount -a, which normally mounts the raid-1 volume on /home:

Feb 19 15:01:38 tonyhome kernel: [   55.531726] EXT4-fs (md126): mounted filesystem with ordered data mode. Opts: (null)

From there on the system works normally until the next reboot...

I don't know whether this issue is still related to this bug or is about something else.
Thank you for looking into it!

--- Additional comment from Doug Ledford on 2013-02-19 09:32:39 EST ---

Tony, since your problem is occurring with the latest software, and given that your problem (now) is not the same as the one in this bug report, I'm cloning just your last comment into a new bug.

Comment 1 Doug Ledford 2013-02-19 14:50:33 UTC
The basic problem here, according to the logs, is that mdadm is creating a new device as a result of a udev event on a different device, and during that creation process mdadm holds an exclusive open on the new device's device file.  Systemd (or udev, however you want to look at it), being the speedy little daemon that it is, does not wait for mdadm to complete the creation process on the device, and so it attempts to open it before mdadm has released its exclusive open, fails, and the system then does not mount the raid device.  Of course, less than a second later mdadm finishes, the exclusive open is released, and so when the user attempts to mount things manually, it all just works.

Or I guess the problem could be systemd and not udev.  If the newly created device is not having fsck run on it as a result of a udev rule, but instead systemd is picking up the existence of the newly created device directly and immediately going to work on it, then it is systemd that would need to be made aware of the fact that the device is not yet ready for use.  So, not sure where this belongs, I'm just sure it's a race condition on the newly created device.
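
(A quick way to check this from a shell while the failure window is open, purely as a diagnostic and using the device name from the log above:)

cat /sys/block/md126/md/array_state   # the array itself is already up ("clean"/"active") even while the open fails
fuser -v /dev/md126                   # shows which process, if any, still holds the device node open
ps -C mdadm,mdmon -o pid,args         # the assembly processes spawned by udev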

Comment 2 Michal Schmidt 2013-02-19 16:08:38 UTC
systemd ships udev rules (in /lib/udev/rules.d/99-systemd.rules) that are meant to delay the moment when systemd sees the device as ready:

# Ignore raid devices that are not yet assembled and started
SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", KERNEL=="md*", TEST!="md/array_state", ENV{SYSTEMD_READY}="0"
SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", KERNEL=="md*", ATTR{md/array_state}=="|clear|inactive", ENV{SYSTEMD_READY}="0"


Is there any other way that full readiness of an md array can be detected from udev rules?

Or would it be possible for mdadm to release the exclusive open before causing the final change of the array_state attribute?

Or would it be possible for the kernel to flip the attribute only after the process that holds the exclusive open closes it?

Comment 3 Doug Ledford 2013-02-19 16:56:21 UTC
These rules conflate two separate things.  The moment the array is ready is one thing; whether or not a process has an exclusive open on the array is another.  They are orthogonal.  It may be that mdadm has the exclusive open, but running fsck on the array also holds it open exclusively, as does mounting the array.

So, is there another way to tell if the array is fully ready?  No, this test is good.  It just isn't testing the right thing in this case.

Would it be possible for mdadm to release the lock early?  No, not without creating new race conditions (multiple mdadm instances spawned by udev for multiple constituent devices would cause us to race on which device actually triggers the array start as well as a few other things).

Would it be possible to flip the attributes on close of the device file?  Maybe, would need upstream buy in to do that.

It might be easier to modify mdadm so that any time we do incremental assembly and open the md device file, we also create a temporary lock file named after the md device file as seen by the kernel (i.e. if we are creating /dev/md/home, it is still /dev/md127 in the kernel, so create /dev/md127.lock, or maybe /dev/md127.lock.$PID).  Only after we have the lock file do we do the manipulation and start of the array; when we are done, we first close the md device file, then we close/rm the lock file.  You would then add a test to your udev rule above that spins for as long as a $DEVNAME.lock* file is present.  This wouldn't require kernel changes, so it might be a bit easier to get upstream buy-in than the kernel modification.  Myself though, I think the kernel modification to flip the ready status on close would be a better solution.
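
(Purely as an illustration of the proposed sequencing, nothing like this exists in mdadm today and the lock file names are made up:)

lock=/dev/md127.lock.$$            # hypothetical per-array lock file, named after the kernel device
touch "$lock"                      # 1. take the lock before touching the array
mdadm -I /dev/sdb                  # 2. incremental assembly runs while the lock is held
                                   # 3. mdadm closes the md device file first ...
rm -f "$lock"                      # 4. ... and only then removes the lock
# a matching udev rule would keep SYSTEMD_READY=0 for as long as any /dev/md127.lock* file exists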

Comment 4 Harald Hoyer 2013-02-26 13:35:14 UTC
Why not have a sysfs ATTR{} which flips when mdadm has done its job and a "change" uevent is emitted?

Comment 5 Tony Marchese 2013-02-26 13:45:13 UTC
In meantime my workaround for this issue was to create /etc/rc.d/rc.local with the following content:

#!/bin/sh
############# mount /dev/md126 until bug 912735 is solved
/bin/mount -a
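
(Note that on systemd-based releases the script is only run by rc-local.service if it is marked executable:)

chmod +x /etc/rc.d/rc.local         # rc-local.service only runs the script when this file is executable
systemctl status rc-local.service   # can be used after a reboot to confirm it ran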

Comment 6 Doug Ledford 2013-02-27 17:17:02 UTC
You know, in hindsight, I want to rethink my position on how this should be solved.

The change from SysV init to udev + systemd has changed a lot.  One of the primary changes is that it took what used to be a serialized startup and made it parallel.  OK, I'm fine with that.  But with the change from serialized to parallel, you need to have proper locking around certain events.  This is to be expected.

But mdadm is already *doing* the proper locking.  It's doing the same locking as the kernel does when it takes a device and mounts it, or when it takes a device and adds it to another virtual device.  The exclusive holding open of a device, whether by a user space program or by the kernel itself, is the authoritative locking around a device.  Everything else is secondary.

So the udev test is fine in that it tests that the md array has been brought up live (something you don't have to worry about on real devices, but is common to all virtual devices).

It does nothing to test if it is available for use.  And in truth, udev *shouldn't* be testing for that.  The proper test for whether or not the device is available for use is to spin on attempting to open the file until either the file is opened, or a timeout passes.  And it should be systemd that does this, not the kernel and not mdadm.

We've been thinking about this from a boot perspective, and in that instance I can sort of see where systemd might want to have the kernel or mdadm fix this issue.  But this isn't a kernel or mdadm issue, it's a parallel startup locking issue.  The parallel startup locking is handled by systemd.  It's what added the parallel bootup, it's where all the other parallel bootup locking is done, so it's where this locking should be too.

For non-boot scenarios, it would be entirely valid for a program that wants to create an md device to do so itself (without the use of mdadm, think some of the gnome disk utility programs, or anaconda) and then to immediately transition to using the device exclusively.  There is nothing preventing this.  So, the idea of flipping state on device close means that such a program would have to open the device, create the md array, close the device, reopen the device, use the array.  The clunkiness of such a usage scenario (despite it being contrived in the sense that no one actually does this) points out the fact that delaying the state transition to close is a hack for this problem, not the right fix.

The proper fix here is for systemd to attempt to open a device before it attempts to call fsck/mount on the device.  If you do the open in a thread/process (presumably the same thread/process that you spawned/forked for the fsck/mount operations), then it doesn't interfere with the rest of systemd's operation and you can do something simple like:

    /* assumes a SIGALRM handler is installed without SA_RESTART,
     * so the open() below is interrupted instead of being restarted */
    alarm(5); /* give 5 seconds for the device to become usable */
    fd = open(device_path, O_RDONLY | O_EXCL);
    if (fd == -1 && errno == EINTR)
        /* We timed out, the device isn't available for use */
        return <whatever>;
    if (fd == -1) {
        /* Non-timeout error, make a note of it */
        perror("open");
        return <whatever>;
    }
    alarm(0); /* cancel the pending timeout */
    close(fd);
    /* Proceed with fsck and possible mount */

This has the advantage of being generic, applicable to all virtual devices (and real devices too), it provides a bit of insulation against udev-triggered access races in that we will wait for 5 seconds in case some other udev-triggered program beat us to the device, and it's simple.  So, in my opinion, this bug needs to be switched over to systemd and the fix put in place there.
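
(For anyone who wants to approximate this wait today from a script, a rough shell equivalent of the idea, with the device name purely illustrative:)

for i in $(seq 1 50); do
    fuser -s /dev/md126 || break   # fuser exits non-zero once nothing holds the device open any more
    sleep 0.1                      # poll for up to ~5 seconds in total
done
fsck -a /dev/md126 && mount /home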

Comment 7 Fedora End Of Life 2013-12-21 11:32:12 UTC
This message is a reminder that Fedora 18 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 18. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '18'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 18's end of life.

Thank you for reporting this issue and we are sorry that we may not be 
able to fix it before Fedora 18 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora
version prior to Fedora 18's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 8 Fedora End Of Life 2014-02-05 19:18:31 UTC
Fedora 18 changed to end-of-life (EOL) status on 2014-01-14. Fedora 18 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Comment 9 Zbigniew Jędrzejewski-Szmek 2015-03-20 00:26:01 UTC
(In reply to Sebastian Weigand from comment #0)
> Hi folks!
> 
> Not much to add here, except that this bug seems to persist in Fedora 21.
> Essentially, if one installs Fedora 21 onto a system which uses software
> RAID (in my case IMSM), it will fail to reboot after entries are created in
> /etc/fstab, as the device is in use prior to fsck running.
> 
> I'd really love to get this RAID stuff working, as everyone seems to be
> having issues with it. Ubuntu won't assemble the array per fakeraid
> confusion, and Arch has this identical bug. I'm hoping the wonderful Fedora
> / Red Hat team will come through!
> 
> Cheers,
> 
> -Sebastian Weigand

Comment 10 Zbigniew Jędrzejewski-Szmek 2015-03-20 00:26:51 UTC
*** Bug 1203049 has been marked as a duplicate of this bug. ***

Comment 11 Harald Hoyer 2015-04-16 09:31:15 UTC
(In reply to Doug Ledford from comment #6)
> So, in
> my opinion, this bug needs to be switched over to systemd and the fix put in
> place there.

Comment 12 Lennart Poettering 2015-06-17 23:50:19 UTC
We will not do raid assembly timeouts in systemd for degradation. That needs to take place in mdadm, and be configurable in mdadm.

We are happy to move the remaining md rules from udev into some md package, and some component of md should implement the timeout and retrigger the device if it gives up and wants to proceed in degraded mode. It should be solely md's choice when to set SYSTEMD_READY=0 and when to drop it on the udev device.
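
(To check which package currently ships which of the relevant udev rules on a given installation:)

rpm -ql systemd | grep 99-systemd.rules   # the rules quoted in comment 2
rpm -ql mdadm | grep rules.d              # the md-specific rules shipped by mdadm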

Comment 13 Jan Kurik 2015-07-15 14:51:10 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 23 development cycle.
Changing version to '23'.

(As we did not run this process for some time, it could also affect pre-Fedora 23 development
cycle bugs. We are very sorry. It will help us with cleanup during Fedora 23 End Of Life. Thank you.)

More information and reason for this action is here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora23

Comment 14 Alessandro Selli 2015-07-19 22:08:55 UTC
Created attachment 1053687 [details]
dracut log /run/initramfs/rdsosreport

Comment 15 Alessandro Selli 2015-07-19 22:10:25 UTC
Could this bug be related to the issue I'm having on an F21 system upgraded to F22?
At every boot, dracut times out and I am dropped into its shell.
There, I must run:
mdadm -A md0
mdadm -A md1
...
and so on for each metadevice. Then I can close its shell, cryptsetup picks up and asks for the filesystem's passphrase, and the boot then goes on normally.
I rebuilt the initramfs image several times, to no avail.
I'm enclosing dracut's /run/initramfs/rdsosreport log file. It was generated on a custom 4.0.6-300.niraya0.fc22.i686 kernel, but the same behaviour occurs on distribution kernels 4.0.7-300.fc22.i686 and 4.0.8-300.fc22.i686.
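
(One thing worth checking in this situation, not necessarily the fix, is whether the initramfs was rebuilt with current array information:)

mdadm --detail --scan                                               # compare the ARRAY lines with /etc/mdadm.conf
lsinitrd /boot/initramfs-$(uname -r).img | grep -e mdadm -e mdmon   # confirm the md tools are in the image
dracut -f                                                           # rebuild if anything is missing or stale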

Comment 16 Simone Caronni 2015-12-09 18:58:56 UTC
Happens here as well on Fedora 23, but not systematically. It seems like some sort of race condition.

Due to the /home mount point being on an imsm raid5 md device, sometimes the boot fails and I'm asked for the password for recovery.

Activating the mdadm device as in comment #15 and then pressing Ctrl+D lets the system finish booting. Otherwise another reboot fixes it.

Comment 17 Jeremy Rimpo 2016-01-11 17:55:09 UTC
*** Bug 1281535 has been marked as a duplicate of this bug. ***

Comment 18 Jeremy Rimpo 2016-01-11 18:01:45 UTC
The symptoms are close enough that I think this is the same bug - though in my case it only started after installing mdadm version 3.3.4.

If I downgrade and rebuild the kernel initramfs, I'm again able to boot properly.

Comment 19 XiaoNi 2016-01-13 08:44:42 UTC
(In reply to Simone Caronni from comment #16)
> Happens here as well on Fedora 23, but not systematically. It seems like
> some sort of race condition.
> 
> Due to the /home mount point being on an imsm raid5 md device, sometimes the
> boot fails and I'm asked the password for recovery.
> 
> Activating the mdadm device as in comment #15 and then pressing Ctrl+D lets
> the system finish booting. Otherwise another reboot fixes it.

Hi Simone

Could you give the steps one by one? I tried with F23 and didn't reproduce this.

My test environment:
There are 3 disks in the machine, and I installed F23 on one of them.

My steps:
1. Create one raid1 with 2 disks in the configuration utility (Intel(R) Rapid Storage Technology)
2. Install F23 on another disk
3. Start the system
4. mkfs.ext4 /dev/md126
5. mount /dev/md126 /home
6. Add the /home entry to /etc/fstab (in command form, see the sketch after this list)
7. reboot
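
In command form, steps 4-6 are roughly the following (the UUID is a placeholder to be filled in from blkid):

mkfs.ext4 /dev/md126
blkid -s UUID -o value /dev/md126                                    # UUID for the fstab entry
echo 'UUID=<uuid-from-blkid> /home ext4 defaults 0 2' >> /etc/fstab
mount /home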

After the system starts, I can see that /dev/md126 is already mounted. Did you do any steps differently?

Thanks
Xiao

Comment 20 Simone Caronni 2016-01-16 09:29:37 UTC
Hi Xiao,

this morning it happened again and I think I figured out what causes it.

> After the system started, I can see /dev/md126 is already mounted. Are there
> any different steps that I did?

When booting the system, if I'm eager to have it up and quickly press Enter at the grub loading screen, it seems that the controller is not initialized properly, and so the md device containing the metadata is not found when trying to assemble the actual array.

Attached is a screenshot of what happens if I quickly press Enter at the grub screen. I'm now able to reproduce it consistently; I've already done it a few times today. I know it sounds crazy, but that's it...

If I let the grub timeout pass, I don't have the issue.

Comment 21 Simone Caronni 2016-01-16 09:30:31 UTC
Created attachment 1115387 [details]
boot error

Comment 22 XiaoNi 2016-01-18 12:06:08 UTC
(In reply to Simone Caronni from comment #21)
> Created attachment 1115387 [details]
> boot error

Hmm, I see the device that is mounted at /home is a device mapper device. Can you try just with raid1?

When I boot the system, the first screen shows the raid information. I can go into the configuration utility (Intel(R) Rapid Storage Technology) with Ctrl+I.

Then I pressed Enter the whole time, so I couldn't even see the grub loading screen. The system started and /dev/md126 was mounted at /home.

Thanks
Xiao

Comment 23 Jes Sorensen 2016-01-26 14:32:31 UTC
Simone,

The screenshot doesn't really show anything. If you want us to look at the logs
please provide the full system logs.

That said, if a grub delay makes such a huge difference, then it sounds like a
problem with the BIOS firmware or the SATA driver not spinning up the drives
in time, rather than a problem with the RAID code.

Jes

Comment 24 XiaoNi 2016-06-26 02:13:05 UTC
Hi Simone

Can you reproduce this problem now? If so, which kind of disks do you use (SSD, NVMe, or normal SATA disks ...), and what are the test steps?

And as Jes said, you need to give the full system logs too.
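
(The full logs can be captured with something like the following; the fsck unit name assumes the array shows up as /dev/md126, as in the earlier logs:)

journalctl -b -o short-monotonic --no-pager > boot.log
journalctl -b -u home.mount -u systemd-fsck@dev-md126.service --no-pager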

Thanks
Xiao

Comment 25 jiri vanek 2016-06-27 10:55:55 UTC
Hello, please don't close.  I inherited the machines with this setup, and they both suffer from the issue.

Approximately every 5th reboot the system deadlocks during boot with this race condition.

Comment 26 XiaoNi 2016-06-27 11:48:35 UTC
(In reply to jiri vanek from comment #25)
> Hello, please don't close.  I inherited the machines with this setup, and
> they both suffer from the issue.
> 
> Approximately every 5th reboot the system deadlocks during boot with this race condition.

Hi Jiri

Can I use the machine that you inherited? And can you give the steps one by one in detail?

Thanks
Xiao

Comment 27 jiri vanek 2016-06-27 11:55:24 UTC
I cannot give you better info than stas provided in 1330497.

By "use" what do you mean? Can you ping me on #java?

Comment 28 XiaoNi 2016-06-27 14:17:07 UTC
(In reply to jiri vanek from comment #27)
> I cannot give you better info than stas provided in 1330497.
> 
> By "use" what do you mean? Can you ping me on #java?

"use" mean that can I do the test on the machine which you inherited. What's your irc name? I'm in #java now.

Xiao

Comment 29 XiaoNi 2016-06-29 04:52:34 UTC
I checked /var/log/messages on my machine after rebooting. There are some messages from systemd that didn't exist in comment 0

systemd: Created slice system-mdmon.slice.
systemd: Starting system-mdmon.slice
systemd: Starting MD Metadata Monitor on /dev/md127
systemd: Started MD Metadata Monitor on /dev/md127

.....


systemd: Found device /dev/md126

....

systemd-fsck: /dev/md126: clean, ....

Comment 30 XiaoNi 2016-06-29 04:56:27 UTC
(In reply to XiaoNi from comment #29)
> I checked /var/log/messages on my machine after rebooting. There are
> some messages from systemd that didn't exist in comment 0
> 
> systemd: Created slice system-mdmon.slice.
> systemd: Starting system-mdmon.slice
> systemd: Starting MD Metadata Monitor on /dev/md127
> systemd: Started MD Metadata Monitor on /dev/md127
> 
> .....
> 
> 
> systemd: Found device /dev/md126
> 
> ....

Sorry, I missed one log here

systemd: Starting File System Check on /dev/md126...

> 
> systemd-fsck: /dev/md126: clean, ....

Comment 31 Fedora End Of Life 2016-11-24 10:56:29 UTC
This message is a reminder that Fedora 23 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 23. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora  'version'
of '23'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 23 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 32 Fedora End Of Life 2016-12-20 12:35:30 UTC
Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

