Bug 1713441

Summary: Post upgrade from F29 to F30 mdadm.conf references wrong device
Product: Fedora
Reporter: Ivor Durham <ivor.durham>
Component: mdadm
Assignee: Jes Sorensen <jes.sorensen>
Status: CLOSED EOL
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Priority: unspecified
Version: 31
CC: agk, dledford, jes.sorensen, xni
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2020-11-24 16:47:33 UTC

Description Ivor Durham 2019-05-23 16:44:55 UTC
Description of problem:

After upgrading from F29 to F30 my daily logwatch report began reporting:

mdadm: cannot open /dev/md/pv00: No such file or directory

The system seemed to be operating correctly outside of an issue with the nouveau display driver.

ls /dev/md shows:

clowder:pv00  imsm0

It looks like my RAID device got renamed during the upgrade to include the hostname.

mdadm --detail /dev/md/clowder:pv00 shows the array correctly

/etc/mdadm/map says:

md126 imsm 80443237:4f7e3e04:8b3416fc:24893615 /dev/md/imsm0
md127 1.2 c00dcf0b:cef7f5b7:62729c6c:446b5f2c /dev/md/clowder:pv00

However /etc/mdadm.conf contains:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/pv00 level=raid1 num-devices=2 UUID=0bcf0dc0:b7f5f7ce:6c9c7262:2c5f6b44
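(A side note on the UUID: the value anaconda wrote here looks different from the one in /etc/mdadm/map above, but it appears to be the same UUID with the byte order of each 32-bit word reversed — mdadm and the map file render it in different byte orders. This can be checked with standard tools:)

```shell
# Reverse the byte order of each 8-hex-digit word of the map-file UUID.
echo c00dcf0b:cef7f5b7:62729c6c:446b5f2c \
  | tr ':' '\n' \
  | sed -E 's/^(..)(..)(..)(..)$/\4\3\2\1/' \
  | paste -sd: -
# → 0bcf0dc0:b7f5f7ce:6c9c7262:2c5f6b44 (the UUID in mdadm.conf)
```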

It looks like it should say:

ARRAY /dev/md/clowder:pv00 level=raid1 num-devices=2
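(If the goal is simply to make mdadm.conf agree with what the running system actually assembled, mdadm can emit canonical ARRAY lines itself. A sketch — the backup path and the dracut step are assumptions on my part, not steps from this report:)

```shell
# Print an ARRAY line for every currently assembled array.
mdadm --detail --scan

# To adopt them: back up the config, append the generated lines,
# then delete the stale ARRAY line by hand and rebuild the initramfs.
cp /etc/mdadm.conf /etc/mdadm.conf.bak
mdadm --detail --scan >> /etc/mdadm.conf
dracut -f
```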

Version-Release number of selected component (if applicable):

mdadm-4.1-rc2.0.3.fc30.x86_64

How reproducible:

I only upgraded once, so I cannot determine whether this is repeatable.

Steps to Reproduce:

Upgrade from F29 to F30 with a RAID1 array

Actual results:

mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/pv00 level=raid1 num-devices=2 UUID=0bcf0dc0:b7f5f7ce:6c9c7262:2c5f6b44

Expected results:

mdadm.conf:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/clowder:pv00 level=raid1 num-devices=2 UUID=0bcf0dc0:b7f5f7ce:6c9c7262:2c5f6b44


Additional info:

I don't know the extent of the consequences of this configuration file problem. The system seems to be operating correctly (apart from another post-F30-upgrade bug with the display). Consequently I can't estimate the severity of this bug.

Comment 1 Ivor Durham 2019-05-25 22:06:58 UTC
I've discovered that this problem depends on which kernel is booted. I use the system in question as a server and usually access it over the network, so after the F29->F30 upgrade I hadn't noticed that the system was booting the rescue kernel instead of the default kernel (see Bug #1707621, which is resolved by re-installing grub2 on /dev/sda). If I manually selected the most recent F30 kernel, the RAID device was /dev/md/pv00, as expected and consistent with /etc/mdadm.conf.

However, when the rescue kernel was booted automatically, the RAID device became /dev/md/clowder:pv00:

Booting vmlinuz-5.0.16-300.fc30.x86_64 establishes /dev/md/pv00
Booting vmlinuz-0-rescue-3b4d4e3f6de743b2a10dd1bf715f70c4 establishes /dev/md/clowder:pv00
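(One plausible explanation — an assumption, not something verified in this report — is that the rescue image's generic initramfs does not carry the host-specific /etc/mdadm.conf, so mdadm falls back to the name stored in the array's superblock, clowder:pv00, qualified with its homehost. Whether the two initramfs images differ in this respect can be checked with dracut's lsinitrd; the image paths below are inferred from the kernel names above:)

```shell
# Compare the embedded mdadm.conf (if any) in the two initramfs images.
lsinitrd /boot/initramfs-5.0.16-300.fc30.x86_64.img -f etc/mdadm.conf
lsinitrd /boot/initramfs-0-rescue-3b4d4e3f6de743b2a10dd1bf715f70c4.img -f etc/mdadm.conf
```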

Comment 2 Ben Cotton 2019-08-13 19:18:11 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 31 development cycle.
Changing version to 31.

Comment 3 Ben Cotton 2020-11-03 15:14:26 UTC
This message is a reminder that Fedora 31 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora 31 on 2020-11-24.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
Fedora 'version' of '31'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora 31 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 4 Ben Cotton 2020-11-24 16:47:33 UTC
Fedora 31 changed to end-of-life (EOL) status on 2020-11-24. Fedora 31 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.