Description of problem:
Can't partition or format md devices; the commands hang forever. I am using fakeraid (because I am dual-booting with Windows), with one partition for Linux and one for Windows. The motherboard is an ASUS P9X79 LE.

Version-Release number of selected component (if applicable):
mdadm-3.2.6-12.fc18.x86_64
kernel-3.7.8-202.fc18.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up the fakeraid config in the (fake)RAID BIOS
2. Run parted /dev/md126, or mkfs on it
3. It hangs!

Expected results:
Being able to add and format partitions.

Additional info:
A couple of attachments are coming with some troubleshooting info.
Created attachment 700390 [details] dmesg | grep md
Created attachment 700391 [details] mdadm --examine /dev/md/imsm0
Created attachment 700392 [details] cat /proc/mdstat
Please provide the output of /proc/partitions as well.

I suspect you will be able to run parted if you run mdmon --takeover --all

Is your raid device used for your / partition, or just for a data partition?

Thanks,
Jes
Wow, thank you for the insanely fast response! The drive is just a data partition. I will attach the output of /proc/partitions in case you need it (though I suspect you won't anymore).

You are indeed correct: running mdmon --takeover --all caused everything to stop blocking.

Did I miss the boat on some documentation? I googled this for quite some time but didn't find anything about needing to run mdmon. Thank you for your time.
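For anyone else hitting the same hang, a minimal sketch of the manual recovery sequence discussed in this thread, assuming /dev/md126 is the affected IMSM data volume as in this report (the function name recover_imsm is hypothetical, just for illustration; the script must be run as root to have any effect):

```shell
#!/bin/sh
# Workaround sketch for the parted/mkfs hang on an IMSM fakeraid data
# array. Assumes /dev/md126 is the affected volume (as in this report);
# adjust the device name for your setup.

recover_imsm() {
    dev="${1:-/dev/md126}"

    if [ ! -e "$dev" ]; then
        echo "no $dev present; nothing to do"
        return 0
    fi

    # Make sure the array itself is running (the later comments report
    # it not being activated at boot at all).
    mdadm --run "$dev"

    # Attach mdmon to all active containers so that external-metadata
    # updates get handled and writes to the array stop blocking.
    mdmon --takeover --all
}

recover_imsm "$@"
```

After this, parted or mkfs on the device should no longer block, per comment 1 and comment 2 above.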
Created attachment 700409 [details] cat /proc/partitions
Shawn,

Just lucky timing really, plus this is an ongoing saga that has been causing grief for quite a while. We spent a long time coming up with a solution handling the case of BIOS RAID as the root device, but I guess handling it for a data device didn't get sorted.

This is a bug and it needs to get fixed, but I need to figure out how to fix it without breaking the root-device case.

Cheers,
Jes
*** Bug 911399 has been marked as a duplicate of this bug. ***
*** Bug 916319 has been marked as a duplicate of this bug. ***
Hi Jes, Is there any way we can assist besides providing more information? I'm a kernel developer, among other things, so maybe we can help somehow. Regards, Moti.
Shawn, Are you still seeing this with the latest updates applied? I have been running a number of tests on Fedora 18+19 and I can no longer reproduce this. Thanks, Jes
(In reply to Jes Sorensen from comment #11)
> Shawn,
>
> Are you still seeing this with the latest updates applied? I have been
> running a number of tests on Fedora 18+19 and I can no longer reproduce
> this.
>
> Thanks,
> Jes

Hi Jes, I'm not sure whether the problem is completely gone, but even with all the latest updates for Fedora 18, the Intel RAID device is still not activated at boot. I have to do it manually with "sudo mdadm --run /dev/md126". Regards, Moti.
Well, in fact the problem still persists. I hit it by mistake: I activated the RAID array and mounted it (automatically via fstab), but did not run mdmon --takeover --all. When shutting down, the system hangs (the screen shows "a stop job is waiting for unmounting ......", naming the mount point of the RAID array). Sorry, Moti.
Moti,

I don't see your system config listed anywhere in this bug. Can you please provide your /etc/mdadm.conf, /proc/mdstat, and the output from:

    rpm -q kernel mdadm dracut

What is on top of the md126 device? LVM? Partitions?

Thanks,
Jes
Created attachment 812099 [details] cat /etc/mdadm.conf
Created attachment 812100 [details] cat /proc/mdstat
Created attachment 812101 [details] sudo rpm -q kernel mdadm dracut Obviously running with the latest kernel (3.10.14-100).
(In reply to Jes Sorensen from comment #14)
> Moti,
>
> I don't see your system config listed anywhere in this bug. Can you please
> provide your /etc/mdadm.conf, /proc/mdstat, and output from
> rpm -q kernel mdadm dracut
>
> What is on top of md126 device? LVM? partitions?
>
> Thanks,
> Jes

Correct; they are attached now. The /proc/mdstat output was captured before activating the array and mounting any partition. The RAID in fact holds only a single partition spanning the disk, an NTFS volume.
Any reason why you are explicitly configuring your IMSM array in mdadm.conf rather than just using, say, "AUTO +imsm +1.x -all"? I haven't tried specifying IMSM arrays manually like this, so I am not sure whether it affects how the array gets configured, in particular whether the order of specifying the container versus the array itself matters. Could you try commenting out the two ARRAY lines and just enabling the AUTO line? Jes
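For reference, the suggestion above amounts to an /etc/mdadm.conf along these lines (a sketch only; the commented-out ARRAY lines and the <...> placeholders stand in for whatever explicit entries are currently in the file):

```
# /etc/mdadm.conf sketch: rely on auto-assembly instead of explicit
# ARRAY lines for the IMSM container and volume.
#
# Comment out (or remove) the existing explicit entries, e.g.:
#ARRAY metadata=imsm UUID=<container-uuid>
#ARRAY /dev/md/Volume0 container=<container-uuid> member=0 UUID=<volume-uuid>

# Auto-assemble arrays with IMSM or native 1.x metadata, and nothing else:
AUTO +imsm +1.x -all
```

The AUTO keyword whitelists metadata types for auto-assembly; "+imsm +1.x -all" enables IMSM and native 1.x arrays while disabling everything else.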
Yes, I just tried that to check it out, and nothing changes (after rebooting, the file system is not mounted and the array is not activated; I have to do that manually). The explicit ARRAY lines were left over from old experiments a year ago, trying to make the array active (assuming it was some configuration issue). Regards, Moti.
This message is a reminder that Fedora 18 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 18. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '18'. Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 18's end of life. Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 18 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to Fedora 18's end of life. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Can we move this bug to Fedora 19, since the problem still persists there? Moti.
Moving to F19 so we don't lose track of this
Hi, I've installed CentOS 6.5 x64 on a system with Intel RAID configured. Some observations:

1) When the RAID 1 array contains both NTFS and ext4 partitions, the anaconda installer didn't start the array during installation (I had to do it manually in a shell, and even after that the installer failed to load the partition scheme because of the NTFS partition).

2) When a new, completely empty array (RAID 1, similar) was created in the "BIOS", the installer recognized it properly, allowed creating any partition scheme, and the array is started automatically when the system boots (no manual configuration whatsoever), even though it holds a data partition and not the root/boot filesystem.

Maybe this provides further insight to track this down, as the behavior is pretty similar to Fedora's.

Regards,
Moti.
This message is a notice that Fedora 19 is now at end of life. Fedora has stopped maintaining and issuing updates for Fedora 19. It is Fedora's policy to close all bug reports from releases that are no longer maintained. Approximately 4 (four) weeks from now this bug will be closed as EOL if it remains open with a Fedora 'version' of '19'. Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version. Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 19 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
This issue still persists in newer Fedora versions. I guess it is worth keeping it open until it is solved.
This message is a reminder that Fedora 21 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 21. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '21'. Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version. Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 21 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above. Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Well, an important update, since the problem still persists: I have recently upgraded to Fedora 22, and now the system completely hangs on shutdown (if the mdadm devices were activated).
Fedora 21 changed to end-of-life (EOL) status on 2015-12-01. Fedora 21 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug. If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug. Thank you for reporting this bug and we are sorry it could not be fixed.