Bug 1410913 - mdadm --stop not cleaning up devices from sys/dev
Summary: mdadm --stop not cleaning up devices from sys/dev
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: mdadm
Version: 6.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: XiaoNi
QA Contact: guazhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-06 20:21 UTC by John Pittman
Modified: 2020-03-11 15:34 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-13 18:35:24 UTC
Target Upstream Version:




Links
Red Hat Knowledge Base (Solution) 2850351 (Last Updated: 2017-01-06 20:45:29 UTC)

Description John Pittman 2017-01-06 20:21:15 UTC
Description of problem:

After executing 'mdadm --stop /dev/mdX', the md device still exists in /sys/block and /dev. The issue does not exist in the latest RHEL 7 levels.

Version-Release number of selected component (if applicable):

kernel-2.6.32-642.el6.x86_64
mdadm-3.3.4-1.el6_8.5.x86_64

How reproducible:

every time

Steps to Reproduce:
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdd1[0] sdd2[1]
      499648 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

[root@localhost ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
unused devices: <none>

[root@localhost ~]# ls /dev/md*
/dev/md0

/dev/md:
md-device-map

[root@localhost ~]# ls /sys/block/md*
alignment_offset  bdi  capability  dev  discard_alignment  ext_range  holders  inflight  md  power  queue  range  removable  ro  size  slaves  stat  subsystem  trace  uevent

[root@localhost ~]# ls -d /sys/block/md*
/sys/block/md0

Actual results:

After --stop executes successfully, the device nodes remain.

Expected results:

The device nodes should be removed when --stop completes.

Additional info:

Manage_stop was identical between the versions I tested, so I am unsure whether this issue is in mdadm or in the kernel. I will start it against mdadm; if it needs moving, please feel free to move it.
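
One way to narrow that down (a suggestion on my part, not something tested above) is to trace the ioctls mdadm issues while stopping the array:

# If the STOP_ARRAY ioctl returns 0 but the node persists, the missing
# cleanup is more likely on the kernel side than in mdadm itself.
strace -f -e trace=ioctl mdadm --stop /dev/md0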

Comment 6 John Pittman 2017-01-06 21:30:15 UTC
Further info:

[root@localhost ~]# lsof | grep md
md/0        28      root  cwd       DIR              253,0    12288          2 /
md/0        28      root  rtd       DIR              253,0    12288          2 /
md/0        28      root  txt   unknown                                        /proc/28/exe
md_misc/0   29      root  cwd       DIR              253,0    12288          2 /
md_misc/0   29      root  rtd       DIR              253,0    12288          2 /
md_misc/0   29      root  txt   unknown                                        /proc/29/exe
ksmd        35      root  cwd       DIR              253,0    12288          2 /
ksmd        35      root  rtd       DIR              253,0    12288          2 /
ksmd        35      root  txt   unknown                                        /proc/35/exe
dmeventd  1071      root  mem       REG              253,0  1491968     148718 /lib64/liblvm2cmd.so.2.02
hald      1564 haldaemon   16r      REG                0,3        0 4026531979 /proc/mdstat
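
Nothing in that output holds the md device node itself open (hald only has /proc/mdstat). To check the node directly, something like this would do (a sketch, assuming /dev/md0 is the stale node):

# List any processes with open handles on the stale node; an open handle
# here could explain why the node is not torn down.
lsof /dev/md0
fuser -v /dev/md0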

Comment 12 John Pittman 2017-01-09 16:58:18 UTC
XiaoNi, Jes,

I did some further testing at your request.  Below are the results.  The script used for testing is as follows:  http://pastebin.test.redhat.com/444130

A successful stop is characterized by the output block showing only the date, with no leftover entries listed in /dev or /sys/block.
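
Since that pastebin is internal, here is a minimal sketch of such a loop (the member partitions and iteration count are assumptions, not the actual script):

#!/bin/bash
# Hypothetical reconstruction of the test loop: create an array, stop it,
# then check for leftover nodes. Member devices and loop count are assumptions.
for i in $(seq 1 5); do
    mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sdd2
    sleep 2
    mdadm --stop /dev/md0
    sleep 2
    date
    ls -d /dev/md0 /sys/block/md0 2>/dev/null  # no output expected after a clean stop
done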

- Standard RHEL 6.8 kernel and inbox mdadm:

[root@localhost ~]# uname -r
2.6.32-642.11.1.el6.x86_64

[root@localhost ~]# rpm -qa | grep mdadm
mdadm-3.3.4-1.el6_8.5.x86_64
[root@localhost ~]# 

results:  http://pastebin.test.redhat.com/444046 (Fail)

================================================

- Test kernel provided by XiaoNi and inbox mdadm:

[root@localhost ~]# uname -r
2.6.32-680.el6.test.x86_64

[root@localhost ~]# rpm -qa | grep mdadm
mdadm-3.3.4-1.el6_8.5.x86_64

http://pastebin.test.redhat.com/444058 (Fail)

================================================

- Standard RHEL 7.3 kernel and mdadm:

[root@localhost ~]# uname -r
3.10.0-514.2.2.el7.x86_64

[root@localhost ~]# rpm -qa | grep mdadm
mdadm-3.4-14.el7.x86_64

http://pastebin.test.redhat.com/444076 (Pass)

================================================

- Test kernel provided by XiaoNi and patched mdadm:
  - mdadm patch taken from Jes in thread http://www.spinics.net/lists/raid/msg51490.html
  - patch applied:  http://pastebin.test.redhat.com/444141

[root@localhost ~]# uname -r
2.6.32-680.el6.test.x86_64

[root@localhost ~]# rpm -qa | grep mdadm
mdadm-3.3.4-1.el6_8.5.test.x86_64

http://pastebin.test.redhat.com/444142 (Pass)

Comment 17 Chris Williams 2017-06-13 18:35:24 UTC
Red Hat Enterprise Linux 6 transitioned to the Production 3 Phase on May 10, 2017.  During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.
 
The official life cycle policy can be reviewed here:
 
http://redhat.com/rhel/lifecycle
 
This issue does not appear to meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:
 
https://access.redhat.com

