Bug 901518 - raid1 devices occasionally degraded on boot for no reason
Product: Fedora
Classification: Fedora
Component: mdadm
Severity: unspecified
Assigned To: Jes Sorensen
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-01-18 06:33 EST by Tim Waugh
Modified: 2013-07-31 21:08 EDT

Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-07-31 21:08:23 EDT
Attachments
dmesg output (81.28 KB, text/plain), attached 2013-01-18 06:33 EST by Tim Waugh
messages (108.11 KB, text/plain), attached 2013-01-18 07:54 EST by Tim Waugh
Description Tim Waugh 2013-01-18 06:33:00 EST
Created attachment 682362 [details]
dmesg output

Description of problem:
Every few boots, I'm seeing several degraded raid1 devices.  A few 'mdadm /dev/md$n --re-add missing' commands correct the problem until it happens again a few boots later.  This has been happening since November last year, and maybe earlier.
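The workaround described above (spot the degraded arrays, then re-add the dropped members) can be sketched as a small script. This is a minimal sketch, not the reporter's actual commands: the parsing runs against an inline sample standing in for the real /proc/mdstat so it is self-contained, and the mdadm invocation and device names at the end are hypothetical and commented out.

```shell
# Find degraded md arrays: a status field like "[2/1] [U_]" means one
# of two configured members is missing. An inline sample stands in for
# the real /proc/mdstat here.
mdstat='md2 : active raid1 sda2[2]
      10238908 blocks super 1.1 [2/1] [U_]

md1 : active raid1 sda1[2] sdb1[1]
      10238908 blocks super 1.1 [2/2] [UU]'

# Carry the array name from the "mdN :" line; print it when the
# following status line shows an underscore (a missing member).
degraded=$(printf '%s\n' "$mdstat" |
    awk '/^md/ {name=$1} /_\]/ {print name}')
echo "degraded: $degraded"

# For each degraded array, re-add the dropped member (device names
# here are examples and vary per system):
#   mdadm /dev/md2 --re-add /dev/sdb2
```

With write-intent bitmaps enabled (as in the arrays in this report), `--re-add` only resyncs the blocks dirtied since the member dropped out, so recovery is quick.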

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. I have several raid1 devices configured, mostly with 2 members, but one has 3 -- that one usually has only 2 of its members present.
Additional info:
This reminds me of a race condition that was fixed several releases back, but I can't seem to find the bug ID for it.

Attached is the output of 'dmesg' from a degraded boot, including the messages produced from manually re-adding the missing devices some time later.
Comment 1 Jes Sorensen 2013-01-18 07:11:03 EST

Could you please provide /proc/mdstat output from a fully assembled system
as well as dmesg output?

How many arrays do you have?

Comment 2 Tim Waugh 2013-01-18 07:20:28 EST
Here's /proc/mdstat from a fully assembled system, except for /dev/md8 for which the external drive is not powered (normal state of affairs).

Personalities : [raid1] 
md5 : active raid1 sda8[2] sdb8[1]
      281531260 blocks super 1.1 [2/2] [UU]
      bitmap: 0/3 pages [0KB], 65536KB chunk

md4 : active raid1 sda6[2] sdb6[1]
      76798908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid1 sda1[2] sdb1[1]
      10238908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md3 : active raid1 sda5[2] sdb5[1]
      102398908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md8 : active raid1 sda9[2] sdc1[3]
      312569040 blocks super 1.2 [3/2] [UU_]
      bitmap: 3/3 pages [12KB], 65536KB chunk

md124 : active raid1 sda7[0] sdb7[1]
      6143936 blocks [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sda2[2] sdb2[1]
      10238908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

The dmesg output in comment #0 is from this point (i.e. after I've correctly assembled the degraded arrays).  Would you like dmesg output from a boot which assembles them automatically without problems?
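For reference, the `[n/m]` and `[UU_]` fields in the mdstat output above encode the array state: n configured members, m currently active, with one position per member ('U' present, '_' missing) -- so md8's "[3/2] [UU_]" reflects the unpowered external drive. A minimal sketch of extracting those counts from a status line (the sample line is md8's from the listing above):

```shell
# Extract configured/active member counts from an mdstat status line.
line='      312569040 blocks super 1.2 [3/2] [UU_]'
counts=$(printf '%s\n' "$line" |
    sed -n 's/.*\[\([0-9][0-9]*\)\/\([0-9][0-9]*\)\].*/\1 \2/p')
set -- $counts
echo "configured=$1 active=$2"   # md8: 3 configured, 2 active
```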
Comment 3 Jes Sorensen 2013-01-18 07:39:28 EST
I think this is good - is there anything in /var/log/messages or on the
console indicating a timeout or an error during the assembly?
Comment 4 Tim Waugh 2013-01-18 07:54:21 EST
Created attachment 682398 [details]
messages
Not that I can see.  /var/log/messages from boot attached.

The console just has:

Cannot open font file True
Starting Manage, Install and Generate Color Profiles...        [  OK  ]
Started Manage, Install and Generate Color Profiles.
Starting Daemon for monitoring attached scanners and registering them with colord...  [  OK  ]
Started Daemon for monitoring attached scanners and registering them with colord.
Comment 5 Fedora End Of Life 2013-07-03 18:55:22 EDT
This message is a reminder that Fedora 17 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 17. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '17'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 17's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 17 is end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora, you are encouraged to change the
'version' to a later Fedora version prior to Fedora 17's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.
Comment 6 Fedora End Of Life 2013-07-31 21:08:28 EDT
Fedora 17 changed to end-of-life (EOL) status on 2013-07-30. Fedora 17 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.
