Bug 901518

Summary: raid1 devices occasionally degraded on boot for no reason
Product: Fedora
Component: mdadm
Version: 17
Status: CLOSED WONTFIX
Reporter: Tim Waugh <twaugh>
Assignee: Jes Sorensen <Jes.Sorensen>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: agk, dledford, harald, Jes.Sorensen, sschaefer
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2013-08-01 01:08:23 UTC

Attachments: dmesg output, messages

Description Tim Waugh 2013-01-18 11:33:00 UTC
Created attachment 682362 [details]
dmesg output

Description of problem:
Every few boots, I'm seeing several degraded raid1 devices.  A few 'mdadm /dev/md$n --re-add missing' commands correct the problem until it happens again a few boots later.  This has been happening since November last year, and maybe earlier.
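[Editorial note: a dry-run sketch of the manual recovery described above. The array/member pairs are hypothetical placeholders -- the report does not list which members went missing on any given boot. Drop the leading 'echo' to actually invoke mdadm.]

```shell
# Hypothetical list of degraded arrays and the member each is missing.
degraded_pairs="md1:/dev/sdb1 md2:/dev/sdb2"

out=""
for pair in $degraded_pairs; do
    array=${pair%%:*}      # e.g. md1
    member=${pair#*:}      # e.g. /dev/sdb1
    line="mdadm /dev/$array --re-add $member"
    echo "$line"           # dry run: print the command instead of running it
    out="$out$line;"
done
```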

Version-Release number of selected component (if applicable):
mdadm-3.2.6-7.fc17.x86_64
kernel-3.6.11-1.fc17.x86_64
dracut-018-105.git20120927.fc17.noarch
systemd-44-23.fc17.x86_64
udev-182-3.fc17.x86_64

How reproducible:
Occasional.

Steps to Reproduce:
1. I have several raid1 devices configured, mostly with 2 members but one has 3 -- this one usually only has 2 of those members present.
Additional info:
This reminds me of a race condition that was fixed several releases back, but I can't seem to find the bug ID for it.

Attached is the output of 'dmesg' from a degraded boot, including the messages produced from manually re-adding the missing devices some time later.

Comment 1 Jes Sorensen 2013-01-18 12:11:03 UTC
Hi

Could you please provide /proc/mdstat output from a fully assembled system
as well as dmesg output?

How many arrays do you have?

Thanks,
Jes

Comment 2 Tim Waugh 2013-01-18 12:20:28 UTC
Here's /proc/mdstat from a fully assembled system, except for /dev/md8 for which the external drive is not powered (normal state of affairs).

Personalities : [raid1] 
md5 : active raid1 sda8[2] sdb8[1]
      281531260 blocks super 1.1 [2/2] [UU]
      bitmap: 0/3 pages [0KB], 65536KB chunk

md4 : active raid1 sda6[2] sdb6[1]
      76798908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid1 sda1[2] sdb1[1]
      10238908 blocks super 1.1 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md3 : active raid1 sda5[2] sdb5[1]
      102398908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md8 : active raid1 sda9[2] sdc1[3]
      312569040 blocks super 1.2 [3/2] [UU_]
      bitmap: 3/3 pages [12KB], 65536KB chunk

md124 : active raid1 sda7[0] sdb7[1]
      6143936 blocks [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 sda2[2] sdb2[1]
      10238908 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

The dmesg output in comment #0 is from this point (i.e. after I've correctly assembled the degraded arrays).  Would you like dmesg output from a boot which assembles them automatically without problems?
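[Editorial note: a small sketch of how the degraded state shows up in this output. An array is degraded when the [total/active] counts in /proc/mdstat differ, as in md8's [3/2]. The script below runs against an excerpt of the output above; on a live system, feed it /proc/mdstat instead.]

```shell
# Excerpt of the mdstat output above: one healthy array, one degraded.
mdstat_excerpt='md5 : active raid1 sda8[2] sdb8[1]
      281531260 blocks super 1.1 [2/2] [UU]
md8 : active raid1 sda9[2] sdc1[3]
      312569040 blocks super 1.2 [3/2] [UU_]'

degraded=$(printf '%s\n' "$mdstat_excerpt" | awk '
    /^md/ { name = $1 }                  # remember the current array name
    match($0, /\[[0-9]+\/[0-9]+\]/) {    # status line with [total/active]
        split(substr($0, RSTART + 1, RLENGTH - 2), c, "/")
        if (c[2] + 0 < c[1] + 0) print name   # fewer active than configured
    }')
echo "degraded: $degraded"
```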

Comment 3 Jes Sorensen 2013-01-18 12:39:28 UTC
I think this is good - is there anything in /var/log/messages or on the
console indicating a timeout or an error during the assembly?
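[Editorial note: one way to answer this question, sketched against a hypothetical log excerpt rather than the attached file -- filter the md/raid lines, then count any that mention an error, timeout, or failure. Point it at /var/log/messages on the affected machine.]

```shell
# Hypothetical excerpt; substitute: grep -iE 'md|raid' /var/log/messages
log_excerpt='Jan 18 09:00:01 host kernel: md: md8 stopped.
Jan 18 09:00:02 host kernel: md/raid1:md8: active with 2 out of 3 mirrors'

hits=$(printf '%s\n' "$log_excerpt" | grep -iE 'md|raid' \
       | grep -icE 'error|timeout|fail' || true)
echo "error/timeout lines: $hits"
```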

Comment 4 Tim Waugh 2013-01-18 12:54:21 UTC
Created attachment 682398 [details]
messages

Not that I can see.  /var/log/messages from boot attached.

The console just has:

Cannot open font file True
Starting Manage, Install and Generate Color Profiles...        [  OK  ]
Started Manage, Install and Generate Color Profiles.
Starting Daemon for monitoring attached scanners and registering them with colord...  [  OK  ]
Started Daemon for monitoring attached scanners and registering them with colord.

Comment 5 Fedora End Of Life 2013-07-03 22:55:22 UTC
This message is a reminder that Fedora 17 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 17. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '17'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 17's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that
we may not be able to fix it before Fedora 17 is end of life. If you
would still like to see this bug fixed and are able to reproduce it
against a later version of Fedora, you are encouraged to change the
'version' to a later Fedora version prior to Fedora 17's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 6 Fedora End Of Life 2013-08-01 01:08:28 UTC
Fedora 17 changed to end-of-life (EOL) status on 2013-07-30. Fedora 17 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.