Description of problem:
Testing BZ1320211 found the error output below:

mdadm: failed to read /proc/mdstat while unblocking container

Version-Release number of selected component (if applicable):
mdadm-3.4-2.el6
2.6.32-634.el6.x86_64

How reproducible:
70%

Steps to Reproduce:
1. mdadm -CR /dev/md/imsm0 -e imsm -n3 /dev/sd[bcd]
2. mdadm -CR /dev/md/vol0 -l0 -n2 /dev/sd[bc]
3. cat /proc/mdstat
4. mdadm --wait /dev/md/vol0
5. mdadm -D /dev/md126
6. export MDADM_EXPERIMENTAL=1
7. mdadm -G /dev/md/imsm0 -n3
8. sleep 0.5
9. mdadm -Ss

Actual results:
mdadm: failed to read /proc/mdstat while unblocking container

Expected results:
The arrays stop cleanly, with no error output.

Additional info:
This issue can also be reproduced with upstream mdadm.

+ mdadm -CR /dev/md/imsm0 -e imsm -n3 /dev/sdb /dev/sdc /dev/sdd
mdadm: /dev/sdb appears to contain an ext2fs file system
       size=262144K  mtime=Mon Nov 9 10:42:05 2015
mdadm: /dev/sdb appears to be part of a raid array:
       level=container devices=0 ctime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdc appears to be part of a raid array:
       level=container devices=0 ctime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdd appears to be part of a raid array:
       level=container devices=0 ctime=Thu Jan 1 08:00:00 1970
mdadm: container /dev/md/imsm0 prepared.
+ mdadm -CR /dev/md/vol0 -l0 -n2 /dev/sdb /dev/sdc
mdadm: /dev/sdb appears to contain an ext2fs file system
       size=262144K  mtime=Mon Nov 9 10:42:05 2015
mdadm: /dev/sdb appears to be part of a raid array:
       level=container devices=0 ctime=Thu Jan 1 08:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
       level=container devices=0 ctime=Thu Jan 1 08:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: Creating array inside imsm container md127
mdadm: array /dev/md/vol0 started.
+ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md126 : active raid0 sdc[1] sdb[0]
      1953518592 blocks super external:/md127/0 128k chunks

md127 : inactive sdd[2](S) sdc[1](S) sdb[0](S)
      3315 blocks super external:imsm

unused devices: <none>
+ mdadm --wait /dev/md/vol0
+ mdadm -D /dev/md126
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid0
     Array Size : 1953518592 (1863.02 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2

          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 128K

           UUID : ecec1252:2e0e4751:bd5f90b5:879242a8

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
+ export MDADM_EXPERIMENTAL=1
+ MDADM_EXPERIMENTAL=1
+ mdadm -G /dev/md/imsm0 -n3
mdadm: multi-array reshape continues in background
+ sleep 0.5
mdadm: level of /dev/md/vol0 changed to raid4
mdadm: Need to backup 768K of critical section..
+ mdadm -Ss
[root@dhcp-12-163 ~]# mdadm: failed to read /proc/mdstat while unblocking container
Hello,

This error message is printed by the mdadm process that is working in the background during the reshape. It tries to unfreeze the container, which has already been stopped (it no longer exists). We did not observe any other unwanted behavior after applying the patches from BZ1320211. Currently there is no fix for this upstream.

Thanks,
Pawel
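To illustrate the race described above (a minimal sketch, not mdadm source: a state file standing in for /proc/mdstat disappears before the background process re-reads it, producing the benign error):

```shell
#!/bin/sh
# Hypothetical simulation of the race: the background reshape process
# tries to re-read /proc/mdstat to unfreeze the container, but by then
# `mdadm -Ss` has already stopped everything, so the read fails.
state=$(mktemp)                 # stands in for /proc/mdstat
echo "md126 : active" > "$state"

rm -f "$state"                  # `mdadm -Ss` tears the arrays down first

# Benign error path: the read fails, and the only effect is the message
# quoted in this report.
if ! cat "$state" 2>/dev/null; then
    echo "mdadm: failed to read /proc/mdstat while unblocking container"
fi
```

The container is already gone at this point, so there is nothing left to unblock; the message is harmless noise from the background process.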
Per Pawel's comments in #4, I am closing this bug. RHEL6 is in maintenance mode, and there is no other unwanted behaviour.