Description of problem:
F18-TC1 QA:Test Case Partitioning on Software Raid

Version-Release number of selected component:
anaconda-18.36

Additional info:
libreport version: 2.0.18
cmdline: /usr/bin/python /sbin/anaconda
kernel: 3.6.9-4.fc18.i686

description:
:The following was filed automatically by anaconda:
:anaconda 18.36 exception report
:Traceback (most recent call first):
:  File "/usr/lib/python2.7/site-packages/pyanaconda/storage/devicelibs/mdraid.py", line 263, in md_node_from_name
:    raise MDRaidError("md_node_from_name failed: %s" % e)
:  File "/usr/lib/python2.7/site-packages/pyanaconda/storage/devices.py", line 3070, in _postCreate
:    md_node = mdraid.md_node_from_name(self.name)
:  File "/usr/lib/python2.7/site-packages/pyanaconda/storage/devices.py", line 789, in create
:    self._postCreate()
:  File "/usr/lib/python2.7/site-packages/pyanaconda/storage/deviceaction.py", line 241, in execute
:    self.device.create()
:  File "/usr/lib/python2.7/site-packages/pyanaconda/storage/devicetree.py", line 323, in processActions
:    action.execute()
:  File "/usr/lib/python2.7/site-packages/pyanaconda/storage/__init__.py", line 336, in doIt
:    self.devicetree.processActions()
:  File "/usr/lib/python2.7/site-packages/pyanaconda/storage/__init__.py", line 174, in turnOnFilesystems
:    storage.doIt()
:  File "/usr/lib/python2.7/site-packages/pyanaconda/install.py", line 114, in doInstall
:    turnOnFilesystems(storage)
:  File "/usr/lib/python2.7/threading.py", line 504, in run
:    self.__target(*self.__args, **self.__kwargs)
:  File "/usr/lib/python2.7/site-packages/pyanaconda/threads.py", line 91, in run
:    threading.Thread.run(self, *args, **kwargs)
:MDRaidError: md_node_from_name failed: [Errno 2] No such file or directory: '/dev/md/root'
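For context on the innermost frame above: the helper resolves the array's named symlink under /dev/md/ (created by udev) to its kernel node. Below is a minimal sketch of what the F18-era `md_node_from_name` plausibly does, not the exact anaconda source; the `dev_dir` parameter is added here purely for illustration:

```python
import os

class MDRaidError(Exception):
    pass

def md_node_from_name(name, dev_dir="/dev/md"):
    """Resolve an md array's kernel node (e.g. 'md127') from its named
    symlink /dev/md/<name>.

    If udev has not yet created the symlink when this runs,
    os.readlink() raises OSError with errno 2 -- which would surface
    exactly as the "[Errno 2] No such file or directory: '/dev/md/root'"
    in the traceback above.
    """
    named_path = os.path.join(dev_dir, name)
    try:
        node = os.path.basename(os.readlink(named_path))
    except OSError as e:
        raise MDRaidError("md_node_from_name failed: %s" % e)
    return node
```

If this reading is right, a race (mdadm finishing array creation before the udev symlink appears) would explain why the crash is timing-sensitive and hard to reproduce on other machines.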
Created attachment 661888 [details] File: anaconda-tb
Created attachment 661889 [details] File: product
Created attachment 661890 [details] File: type
Created attachment 661891 [details] File: storage.log
Created attachment 661892 [details] File: version
Created attachment 661893 [details] File: environ
Created attachment 661894 [details] File: executable
Created attachment 661895 [details] File: anaconda.log
Created attachment 661896 [details] File: syslog
Created attachment 661897 [details] File: hashmarkername
Created attachment 661898 [details] File: packaging.log
Created attachment 661899 [details] File: cmdline_file
Created attachment 661900 [details] File: release
Created attachment 661901 [details] File: program.log
F18-TC1 QA Test Case Software Raid

Package: anaconda-18.36
Architecture: x86_64
OS Release: Fedora release 18-TC1
The failures above occurred with two disks and RAID10 specified; testing RAID0 with two disks works.
Additional tests with four disks: RAID0 and RAID4 (parity) succeed, while RAID10 (redundancy) failed.
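For reference, the three levels exercised above trade capacity for redundancy differently. A simplified back-of-the-envelope helper (equal-size members assumed; mdadm's real accounting is more involved, and none of this is anaconda code):

```python
def raid_capacity_gb(level, disks_gb):
    """Rough usable capacity for the md RAID levels tested here.

    level 0  -- striping, no redundancy: sum of members
    level 4  -- dedicated parity disk: lose one member's worth
    level 10 -- mirrored stripes (two copies of everything): half
    """
    n = len(disks_gb)
    smallest = min(disks_gb)
    if level == 0:
        return sum(disks_gb)
    if level == 4:
        return smallest * (n - 1)
    if level == 10:
        return smallest * n // 2
    raise ValueError("unsupported level: %s" % level)
```

Note that mdadm does accept a two-member RAID10 (the "near-2" layout degenerates to a mirror), so the two-disk RAID10 attempts in this report are valid configurations in themselves.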
FWIW I just successfully installed to a two-disk raid10 root filesystem created by anaconda with smoke6 in qemu/kvm.
Bob, can you provide any more details about your configuration here?
I tested this in a VM on a CentOS 6.3 host. The first test was conducted with two 12.0 GB virtio drives, then repeated with four 12.0 GB virtio drives. /boot was 500 MB and swap was 2 GB. I know very little about RAID and used the defaults in most cases. I can repeat the test in a day or so with TC3, once the ISOs are rebuilt.
Created attachment 665351 [details] File: F18-TC1-386 raid10 config screen which failed

I did have a snapshot of the RAID config screen which failed.
Discussed at 2012-12-21 blocker review meeting: http://meetbot.fedoraproject.org/fedora-bugzappers/2012-12-21/f18final-blocker-review-7.2012-12-21-18.33.log.txt . David says he's done many installs without hitting this and no-one else seems to have reproduced, so for now it's rejected as a blocker on the basis that it seems to be an oddball result. But it can be re-proposed if we can find a reliable reproducer. I note that Bob is using 32-bit, David was testing with 64-bit. I will try and reproduce Bob's case as closely as I can, with 32-bit, and see if I can trigger the bug.
2 disk raid test

Package: anaconda-18.37.3
OS Release: Fedora release 18-TC3
Created attachment 667471 [details] F18-TC3 Choose Storage
Created attachment 667484 [details] F18-TC3 Choose Storage 2
Created attachment 667487 [details] F18-TC3 Root Partition
Created attachment 667488 [details] F18-TC3 Swap Partition
Created attachment 667489 [details] F18-TC3 Raid10 / partition
Created attachment 667490 [details] F18-TC3 Boot Partition
Just repeated this test, using a CentOS 6.3 host (kernel 2.6.32-279.19.1.el6.x86_64). Created a VM with two 12 GB virtio disks. The partitioning layout is shown in the several attached screenshots.
I just repeated this test on my system using TC3-x86_64-DVD.iso and the bug was not encountered. It seems to be specific to the i386 arch.
I tried to reproduce this with TC3 i386 and still couldn't; it makes it to package install for me. I set the machine up with two disks (15GB and 10GB), emptied both, created partitions 'automatically', modified the / partition to RAID10 and the other two partitions to standard ext4, and started the install: it worked fine. Not sure what you're hitting that I'm not, but at this point I'm not sure this is a blocker; it seems a bit of a corner case. I'm using F18 as both host and guest.
The strangeness gets stranger. I tried AdamW's procedure with blank disks of 15 GB and 10 GB and the RAID10 worked. I then tried the same procedure with disks of 12 GB and 12 GB and got the same error/bug. I then tried with two 15 GB disks and it worked. Not sure what is so special about two 12 GB disks. CentOS 6.3 qemu/kvm host using virt-manager.
that is crazy odd, but it at least means the bug definitely seems to be a small corner case. Any takers for 'what's special about 12GB'? Some kind of borderline?
I've not seen any cases of this mentioned other than my test case. Do we need to keep this bug open?
I'll leave it up to the devs, but it'd be interesting to see if you can reproduce with F19. If you can dodge all the other land mines in order to get a valid test, of course :)
I tried a twin 12 GB RAID setup in VirtualBox for F19 and it worked. I need to try this on CentOS 6, as that's where the F18 bug showed up.
Tested with a twin-drive RAID in qemu-kvm, and this issue is non-existent for F19.
This message is a reminder that Fedora 18 is nearing its end of life. Approximately four (4) weeks from now, Fedora will stop maintaining and issuing updates for Fedora 18. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '18'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 18's end of life.

Thank you for reporting this issue, and we are sorry that we may not be able to fix it before Fedora 18 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to Fedora 18's end of life.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 18 changed to end-of-life (EOL) status on 2014-01-14. Fedora 18 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.