Bug 1290494 - Degraded RAID1 MD Array becomes inactive after rebooting the system.
Summary: Degraded RAID1 MD Array becomes inactive after rebooting the system.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: mdadm
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.3
Assignee: XiaoNi
QA Contact: Zhang Yi
Docs Contact: Milan Navratil
URL:
Whiteboard: dell_server
Depends On:
Blocks: 1274397 1304407 1313485 1364088
 
Reported: 2015-12-10 16:49 UTC by Nanda Kishore Chinnaram
Modified: 2023-09-14 03:14 UTC
CC List: 17 users

Fixed In Version: mdadm-3.4-3.el7
Doc Type: Bug Fix
Doc Text:
A degraded RAID1 array created with *mdadm* is no longer shown as inactive after rebooting. Previously, a degraded RAID1 array that was created using the *mdadm* utility could be shown as an inactive RAID0 array after rebooting the system. With this update, the array is started correctly after the system is rebooted.
Clone Of:
Environment:
Last Closed: 2016-11-04 00:07:47 UTC
Target Upstream Version:
Embargoed:


Attachments
SOS report (6.22 MB, application/x-xz): 2015-12-10 16:49 UTC, Nanda Kishore Chinnaram
The patch file (762 bytes, patch): 2016-01-06 13:53 UTC, XiaoNi


Links
Red Hat Knowledge Base (Solution) 2252841 (last updated 2016-04-11 21:37:33 UTC)
Red Hat Product Errata RHBA-2016:2182, SHIPPED_LIVE: mdadm bug fix and enhancement update (last updated 2016-11-03 13:17:48 UTC)

Description Nanda Kishore Chinnaram 2015-12-10 16:49:44 UTC
Created attachment 1104411 [details]
SOS report

Description of problem:
Degraded RAID-1 Array that was created using mdadm becomes inactive and its status is shown as raid0 after the system is rebooted.

Version-Release number of selected component (if applicable):
3.3.2-7

How reproducible:
Always

Steps to Reproduce:
1. Create RAID-1 Array.
mdadm -C /dev/md1 --metadata=1.2 -l1 -n2 /dev/sdb1 /dev/sdc1

2. Save the Configuration details.
mdadm --examine --scan > /etc/mdadm.conf

3. Degrade the Array by unplugging one of the Drives of Array.

4. Status of the Array is shown as Degraded.
mdadm -D /dev/md1

5. Reboot the system
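
After step 5, the post-reboot state can be inspected as follows (a minimal sketch; /dev/md1 is the device created in step 1):

cat /proc/mdstat     # per the report, the array shows up as inactive rather than an active raid1
mdadm -D /dev/md1    # per the report, the level is shown as raid0 instead of a degraded raid1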

Actual results:
The status of the Array should be shown as Degraded and active.

Expected results:
The status of the Array is shown as inactive and in raid0 mode.

Additional info:
After the system is rebooted, if the Array is stopped and re-assembled, then its status is shown as degraded and active.
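
A minimal sketch of the stop-and-re-assemble recovery mentioned above (assuming /dev/md1 from the steps above and /dev/sdb1 as the remaining member; both names are illustrative):

mdadm -S /dev/md1                    # stop the inactive array
mdadm -A --run /dev/md1 /dev/sdb1    # re-assemble; --run starts the array even with a member missing
mdadm -D /dev/md1                    # the array should again be reported as active and degraded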

Comment 1 Nanda Kishore Chinnaram 2015-12-21 06:50:46 UTC
Correcting the Actual and Expected Results.

* Expected results:
The status of the Array should be shown as Degraded, Raid1 and active.

* Actual results:
The status of the Array is shown as inactive and in raid0 mode.

Comment 6 Lakshmi_Narayanan_Du 2016-01-06 11:41:01 UTC
Following is my analysis:

# When one of the drives of the RAID 1 array is unplugged, the array
hooks "static int restart_array(struct mddev *mddev)" [drivers/md/md.c]
and sets "read-auto" mode in
/sys/devices/virtual/block/md9/md/array_state

# It seems we may need to "run" the array to make it active again.

[Optional]
# One manual way I could see: after the degrade and before the reboot, run "mdadm -R /dev/md9", then reboot; it works fine. But this cannot be an automatic solution.

# Since we are rebooting without the above step, the state in "/sys/devices/virtual/block/md9/md/array_state"
  remains "read-auto".

# While the system reboots it marks this "inactive" and also does not print the valid values from the sysfs path.

# So hooking the "run" during/at reboot time makes it active:

"mdadm -R /dev/md9"


I could see a dracut patch, mdraid_start, that makes a "run" mandatory over all the arrays:

https://github.com/zfsonlinux/dracut/blob/master/modules.d/90mdraid/mdraid_start.sh

To my understanding, defining a rule in udev rules.d might help.

Working on understanding it further.

Thanks
Lakshmi
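
A minimal sketch of the manual check and workaround described in this analysis (the md9 device name follows the sysfs paths above and is illustrative):

cat /sys/devices/virtual/block/md9/md/array_state   # expected to read "read-auto" after the degrade
mdadm -R /dev/md9                                   # "run" the array so it becomes active again
cat /proc/mdstat                                    # the array should now be listed as active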

Comment 7 XiaoNi 2016-01-06 13:53:33 UTC
Created attachment 1112184 [details]
The patch file

Hi Lakshmi

There is already a patch for this. I tried it and it can fix the problem. The attachment is the patch.

Thanks
Xiao

Comment 8 Lakshmi_Narayanan_Du 2016-01-09 10:46:30 UTC
Sure Xiao. Thanks for the update.

Regards
Lakshmi

Comment 10 Harald Hoyer 2016-04-13 13:34:09 UTC
(In reply to Lakshmi_Narayanan_Du from comment #6)
> Following is my analysis
>  
>  # when one of the drive of RAID 1 array is unplugged the array is 
> hooking "static int restart_array(struct mddev *mddev)" [driver/md/md.c]
> and sets "read-auto" mode in 
> /sys/devices/virtual/block/md9/md/array_state
> 
> # Seems we may need to "run" the array to make it active again  
> 
> [Optional]
> # one manual way I could see is after degrade before reboot run "mdadm -
> /dev/md9" and then reboot it works fine .But this cannot be a automatic
> solution 
> 
> # But since we are rebooting without the above step the state 
> "/sys/devices/virtual/block/md9/md/array_state"
>   remains "read-auto"
> 
> # While the system reboots it marks this "inactive" and also doesnot print
> the valid values from the sys path 
> 
> # So During/At reboot time hooking the "run" is making it active
> 
> "mdadm --R /dev/md9"
> 
> 
> I could see a dracut patch mdraid_start that makes mandatory "run" over all
> the array's
> 
> https://github.com/zfsonlinux/dracut/blob/master/modules.d/90mdraid/
> mdraid_start.sh

Huh? Why are you using the "zfsonlinux" github clone?

> 
> To my understanding defining a rule in Udev rules.d might help 
> 
> Working on to understand it further
> 
> Thanks
> Lakshmi

Might be better in the md-shutdown script:
<https://github.com/dracutdevs/dracut/blob/master/modules.d/90mdraid/md-shutdown.sh>
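
For context, a force-the-run-over-every-array step along the lines referenced above could look roughly like this; it is a hypothetical sketch only, not the contents of mdraid_start.sh or md-shutdown.sh:

# Hypothetical sketch only; not the actual dracut scripts referenced above.
for state in /sys/block/md*/md/array_state; do
    [ -e "$state" ] || continue
    dev=/dev/$(basename "$(dirname "$(dirname "$state")")")
    case "$(cat "$state")" in
        read-auto|inactive) mdadm --run "$dev" || : ;;
    esac
done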

Comment 11 Harald Hoyer 2016-04-13 13:37:33 UTC
(In reply to Harald Hoyer from comment #10)
> Might be better in the md-shutdown script:
> <https://github.com/dracutdevs/dracut/blob/master/modules.d/90mdraid/md-shutdown.sh>

Although this is probably only interesting for the root on MD case.

Comment 14 Nanda Kishore Chinnaram 2016-08-09 22:13:21 UTC
Verified the issue in RHEL 7.3 Alpha1 Build. It's resolved.

Comment 16 Nanda Kishore Chinnaram 2016-09-14 06:53:03 UTC
Hi Nikhil, what information do you need?

Comment 20 errata-xmlrpc 2016-11-04 00:07:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2182.html

Comment 22 Red Hat Bugzilla 2023-09-14 03:14:36 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.

