Bug 1290494

Summary: Degraded RAID1 MD Array becomes inactive after rebooting the system.
Product: Red Hat Enterprise Linux 7 Reporter: Nanda Kishore Chinnaram <nanda_kishore_chinna>
Component: mdadm Assignee: XiaoNi <xni>
Status: CLOSED ERRATA QA Contact: Zhang Yi <yizhan>
Severity: high Docs Contact: Milan Navratil <mnavrati>
Priority: high    
Version: 7.2CC: dledford, harald, Jes.Sorensen, jshortt, kasmith, Lakshmi_Narayanan_Du, linux-bugs, mnavrati, nanda_kishore_chinna, narendra_k, nkshirsa, prabhakar_pujeri, rmadhuso, sreekanth_reddy, ssundarr, xni, yizhan
Target Milestone: rc   
Target Release: 7.3   
Hardware: x86_64   
OS: Linux   
Whiteboard: dell_server
Fixed In Version: mdadm-3.4-3.el7 Doc Type: Bug Fix
Doc Text:
A degraded RAID1 array created with *mdadm* is no longer shown as inactive after rebooting. Previously, a degraded RAID1 array that was created using the *mdadm* utility could be shown as an inactive RAID0 array after rebooting the system. With this update, the array is started correctly after the system is rebooted.
Story Points: ---
Clone Of: Environment:
Last Closed: 2016-11-04 00:07:47 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1274397, 1304407, 1313485, 1364088    
Attachments:
SOS report (flags: none)
The patch file (flags: none)

Description Nanda Kishore Chinnaram 2015-12-10 16:49:44 UTC
Created attachment 1104411 [details]
SOS report

Description of problem:
A degraded RAID-1 array that was created using mdadm becomes inactive, and its level is reported as raid0, after the system is rebooted.

Version-Release number of selected component (if applicable):
3.3.2-7

How reproducible:
Always

Steps to Reproduce:
1. Create a RAID-1 array:
mdadm -C /dev/md1 --metadata=1.2 -l1 -n2 /dev/sdb1 /dev/sdc1

2. Save the configuration details:
mdadm --examine --scan > /etc/mdadm.conf

3. Degrade the array by unplugging one of its drives.

4. Verify that the status of the array is shown as degraded:
mdadm -D /dev/md1

5. Reboot the system.
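
6. After the reboot, check the array state (example check; the device name follows step 1):
cat /proc/mdstat
mdadm -D /dev/md1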

Actual results:
The status of the Array should be shown as Degraded and active.

Expected results:
The status of the Array is shown as inactive and in raid0 mode.

Additional info:
After the system is rebooted, if the array is stopped and re-assembled, its status is shown as degraded and active (see the example commands below).
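
A possible command sequence (assuming the device names from the steps above; --run may be needed to start the array with a member missing):
mdadm --stop /dev/md1
mdadm --assemble --run /dev/md1 /dev/sdb1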

Comment 1 Nanda Kishore Chinnaram 2015-12-21 06:50:46 UTC
Correcting the Actual and Expected Results.

* Expected results:
The status of the Array should be shown as Degraded, Raid1 and active.

* Actual results:
The status of the Array is shown as inactive and in raid0 mode.

Comment 6 Lakshmi_Narayanan_Du 2016-01-06 11:41:01 UTC
Following is my analysis:

# When one of the drives of the RAID 1 array is unplugged, the code path goes through "static int restart_array(struct mddev *mddev)" [drivers/md/md.c] and "read-auto" mode is set in /sys/devices/virtual/block/md9/md/array_state.

# It seems we need to "run" the array to make it active again.

[Optional]
# One manual workaround I could see: after the degrade and before the reboot, run "mdadm -R /dev/md9"; the array then comes up fine after the reboot. But this cannot be an automatic solution.

# Since we are rebooting without the above step, the state in /sys/devices/virtual/block/md9/md/array_state remains "read-auto".

# While the system reboots, it marks the array "inactive" and also does not print valid values from the sysfs path.

# So hooking the "run" during/at reboot time makes it active:

"mdadm -R /dev/md9"


I could see a dracut patch, mdraid_start, that forces a mandatory "run" over all the arrays:

https://github.com/zfsonlinux/dracut/blob/master/modules.d/90mdraid/mdraid_start.sh

To my understanding, defining a rule in udev rules.d might help (see the sketch below).

Working to understand it further.
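
A minimal sketch of that force-run idea (illustrative only, not the attached patch; the device glob, sysfs path, and state checks here are assumptions):

for md in /dev/md?*; do
    [ -b "$md" ] || continue                                  # skip anything that is not a block device
    state=$(cat "/sys/class/block/${md##*/}/md/array_state" 2>/dev/null)
    case "$state" in
        inactive|read-auto)
            mdadm -R "$md"                                    # start the (possibly degraded) array
            ;;
    esac
done

A udev rule could invoke the same "mdadm -R" when a degraded array appears, which would make the workaround automatic.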

Thanks
Lakshmi

Comment 7 XiaoNi 2016-01-06 13:53:33 UTC
Created attachment 1112184 [details]
The patch file

Hi Lakshmi

There is already a patch for this. I tried it and it fixes the problem. The patch is attached.

Thanks
Xiao

Comment 8 Lakshmi_Narayanan_Du 2016-01-09 10:46:30 UTC
Sure, Xiao. Thanks for the update.

Regards
Lakshmi

Comment 10 Harald Hoyer 2016-04-13 13:34:09 UTC
(In reply to Lakshmi_Narayanan_Du from comment #6)
> I could see a dracut patch, mdraid_start, that forces a mandatory "run" over all the arrays:
> 
> https://github.com/zfsonlinux/dracut/blob/master/modules.d/90mdraid/mdraid_start.sh

Huh? Why are you using the "zfsonlinux" github clone?

> To my understanding, defining a rule in udev rules.d might help.

Might be better in the md-shutdown script:
<https://github.com/dracutdevs/dracut/blob/master/modules.d/90mdraid/md-shutdown.sh>

Comment 11 Harald Hoyer 2016-04-13 13:37:33 UTC
(In reply to Harald Hoyer from comment #10)
> Might be better in the md-shutdown script:
> <https://github.com/dracutdevs/dracut/blob/master/modules.d/90mdraid/md-shutdown.sh>

Although this is probably only interesting for the root-on-MD case.

Comment 14 Nanda Kishore Chinnaram 2016-08-09 22:13:21 UTC
Verified on the RHEL 7.3 Alpha1 build; the issue is resolved.

Comment 16 Nanda Kishore Chinnaram 2016-09-14 06:53:03 UTC
Hi Nikhil, what information do you need?

Comment 20 errata-xmlrpc 2016-11-04 00:07:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2182.html

Comment 22 Red Hat Bugzilla 2023-09-14 03:14:36 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days