Bug 1317752 - md0 raid device node cannot be released after --stop operation
Summary: md0 raid device node cannot be released after --stop operation
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 24
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: XiaoNi
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-15 06:29 UTC by Zhang Yi
Modified: 2017-07-27 08:18 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-07-27 08:18:51 UTC
Type: Bug



Description Zhang Yi 2016-03-15 06:29:25 UTC
Description of problem:
md0 raid device node cannot be released after --stop operation

Version-Release number of selected component (if applicable):
mdadm-3.3.4-3.fc24.x86_64
kernel-4.5.0-0.rc7.git0.2.fc24.x86_64


How reproducible:


Steps to Reproduce:
1. # mdadm --create --run /dev/md1 --level 1 --metadata 1.2 --raid-devices 7 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5 /dev/loop6 --spare-devices 1 /dev/loop7 --chunk 512 --bitmap=internal --bitmap-chunk=64M
2. # mdadm -S /dev/md1
mdadm: stopped /dev/md1
3. # ll /dev/md*
brw-rw----. 1 root disk 9, 0 Mar 15 02:23 /dev/md0
brw-rw----. 1 root disk 9, 1 Mar 15 02:21 /dev/md1


Actual results:
 The raid device node cannot be released; stale /dev/md* nodes remain after the --stop operation.

Expected results:
 The raid device node is removed once the array is stopped.
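Not part of the original report, but as an illustration of what "released" means here: a stopped array should disappear from /proc/mdstat, and udev should remove its /dev/md* node. The following is a minimal sketch (the helper name `find_stale_nodes` is hypothetical) that flags device nodes with no corresponding active array, as seen in step 3 above:

```python
# Sketch: flag /dev/md* nodes that have no active entry in /proc/mdstat.
# A stale node (like /dev/md0 in the report) appears among the device
# nodes but not among the active arrays.
import re

def find_stale_nodes(dev_nodes, mdstat_text):
    """Return device nodes whose md name is absent from mdstat_text."""
    # Active arrays appear in /proc/mdstat as lines like "md1 : active raid1 ..."
    active = set(re.findall(r"^(md\d+) :", mdstat_text, re.MULTILINE))
    return [n for n in dev_nodes if n.rsplit("/", 1)[-1] not in active]

# Example mirroring the report: md1 was stopped, yet both nodes remain in /dev.
mdstat = "Personalities : [raid1]\nunused devices: <none>\n"
print(find_stale_nodes(["/dev/md0", "/dev/md1"], mdstat))  # both are stale
```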

Additional info:
[ 2903.164106] md: bind<loop0>
[ 2903.164168] md: bind<loop1>
[ 2903.164227] md: bind<loop2>
[ 2903.164254] md: bind<loop3>
[ 2903.164279] md: bind<loop4>
[ 2903.173150] md: bind<loop5>
[ 2903.173186] md: bind<loop6>
[ 2903.173228] md: bind<loop7>
[ 2903.174363] md/raid1:md1: not clean -- starting background reconstruction
[ 2903.174365] md/raid1:md1: active with 7 out of 7 mirrors
[ 2903.174389] created bitmap (1 pages) for device md1
[ 2903.174440] md1: bitmap initialized from disk: read 1 pages, set 8 of 8 bits
[ 2903.174511] md1: detected capacity change from 0 to 523960320
[ 2903.174534] RAID1 conf printout:
[ 2903.174536]  --- wd:7 rd:7
[ 2903.174538]  disk 0, wo:0, o:1, dev:loop0
[ 2903.174540]  disk 1, wo:0, o:1, dev:loop1
[ 2903.174542]  disk 2, wo:0, o:1, dev:loop2
[ 2903.174544]  disk 3, wo:0, o:1, dev:loop3
[ 2903.174546]  disk 4, wo:0, o:1, dev:loop4
[ 2903.174547]  disk 5, wo:0, o:1, dev:loop5
[ 2903.174549]  disk 6, wo:0, o:1, dev:loop6
[ 2903.174581] md: resync of RAID array md1
[ 2903.174584] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 2903.174585] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 2903.174594] md: using 128k window, over a total of 511680k.
[ 2905.458946] md: md1: resync done.
[ 2905.459902] RAID1 conf printout:
[ 2905.459905]  --- wd:7 rd:7
[ 2905.459907]  disk 0, wo:0, o:1, dev:loop0
[ 2905.459908]  disk 1, wo:0, o:1, dev:loop1
[ 2905.459909]  disk 2, wo:0, o:1, dev:loop2
[ 2905.459911]  disk 3, wo:0, o:1, dev:loop3
[ 2905.459912]  disk 4, wo:0, o:1, dev:loop4
[ 2905.459913]  disk 5, wo:0, o:1, dev:loop5
[ 2905.459914]  disk 6, wo:0, o:1, dev:loop6
[ 2905.460014] RAID1 conf printout:
[ 2905.460015]  --- wd:7 rd:7
[ 2905.460016]  disk 0, wo:0, o:1, dev:loop0
[ 2905.460017]  disk 1, wo:0, o:1, dev:loop1
[ 2905.460018]  disk 2, wo:0, o:1, dev:loop2
[ 2905.460019]  disk 3, wo:0, o:1, dev:loop3
[ 2905.460021]  disk 4, wo:0, o:1, dev:loop4
[ 2905.460022]  disk 5, wo:0, o:1, dev:loop5
[ 2905.460023]  disk 6, wo:0, o:1, dev:loop6
[ 2926.837896] md1: detected capacity change from 523960320 to 0
[ 2926.837907] md: md1 stopped.
[ 2926.837915] md: unbind<loop7>
[ 2926.849005] md: export_rdev(loop7)
[ 2926.867964] md: unbind<loop6>
[ 2926.883033] md: export_rdev(loop6)
[ 2926.888959] md: unbind<loop5>
[ 2926.895003] md: export_rdev(loop5)
[ 2926.898022] md: unbind<loop4>
[ 2926.901014] md: export_rdev(loop4)
[ 2926.904041] md: unbind<loop3>
[ 2926.907033] md: export_rdev(loop3)
[ 2926.910053] md: unbind<loop2>
[ 2926.913032] md: export_rdev(loop2)
[ 2926.916046] md: unbind<loop1>
[ 2926.919031] md: export_rdev(loop1)
[ 2926.922045] md: unbind<loop0>
[ 2926.925033] md: export_rdev(loop0)
[ 3375.747745] md: md1 stopped.

Comment 1 XiaoNi 2016-03-16 10:17:12 UTC
The same topic came up recently upstream: "Drop sending a change uevent when stopping". I'm looking into this.
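For context on the upstream thread referenced above: md emits a change uevent while the array is being stopped, and udev can react to it by re-creating the device node that was just removed. As a hedged illustration only (the sample monitor lines below are hypothetical, in the format printed by `udevadm monitor -k`), one way to spot such events is:

```python
# Sketch: pick out KERNEL "change" uevents for md devices from
# `udevadm monitor -k` style output. A change event arriving during
# --stop is what the upstream patch "Drop sending a change uevent
# when stopping" removes.
def md_change_events(monitor_lines):
    events = []
    for line in monitor_lines:
        parts = line.split()
        # Expected shape: KERNEL[ts] action /devices/.../block/mdN (block)
        if len(parts) >= 3 and parts[0].startswith("KERNEL") and parts[1] == "change":
            dev = parts[2].rsplit("/", 1)[-1]
            if dev.startswith("md"):
                events.append(dev)
    return events

sample = [
    "KERNEL[2926.83] remove   /devices/virtual/block/md1 (block)",
    "KERNEL[2926.84] change   /devices/virtual/block/md1 (block)",
]
print(md_change_events(sample))  # → ['md1']
```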

Comment 2 Fedora End Of Life 2017-07-25 20:19:44 UTC
This message is a reminder that Fedora 24 is nearing its end of life.
Approximately two weeks from now, Fedora will stop maintaining
and issuing updates for Fedora 24. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '24'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 24 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.

Comment 3 XiaoNi 2017-07-27 07:16:48 UTC
Hi Yi

We already fixed this in f26; you can update to f26 to resolve this problem. If you still want to use f24, you can install the latest mdadm package for RHEL.

Thanks
Xiao

Comment 4 Zhang Yi 2017-07-27 08:18:51 UTC
Verified on fc26, moving to CLOSED.

mdadm-4.0-1.fc26.x86_64
kernel-4.11.9-300.fc26.x86_64

