Description of problem:
Installing Workstation build 20141116, install to disk. The error is thrown at, or slightly before, the language selection screen.

Version-Release number of selected component:
anaconda-core-21.48.14-1.fc21.x86_64

The following was filed automatically by anaconda:
anaconda 21.48.14-1 exception report
Traceback (most recent call first):
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 944, in addUdevPartitionDevice
    raise DeviceTreeError("failed to scan disk %s" % disk.name)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 771, in addUdevDMDevice
    return self.addUdevPartitionDevice(info, disk=disk)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 1183, in addUdevDevice
    device = self.addUdevDMDevice(info)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2170, in _populate
    self.addUdevDevice(dev)
  File "/usr/lib/python2.7/site-packages/blivet/devicetree.py", line 2105, in populate
    self._populate()
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 480, in reset
    self.devicetree.populate(cleanupOnly=cleanupOnly)
  File "/usr/lib/python2.7/site-packages/blivet/__init__.py", line 183, in storageInitialize
    storage.reset()
  File "/usr/lib64/python2.7/threading.py", line 766, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/site-packages/pyanaconda/threads.py", line 227, in run
    threading.Thread.run(self, *args, **kwargs)
DeviceTreeError: failed to scan disk pdc_bdabcifbfi

Additional info:
cmdline:        /usr/bin/python /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=vmlinuz0 initrd=initrd0.img root=live:CDLABEL=Fedora-Live-Workstation-x86_64-2 rootfstype=auto ro rd.live.image quiet rhgb rd.luks=0 rd.md=0 rd.dm=0
executable:     /sbin/anaconda
hashmarkername: anaconda
kernel:         3.17.2-300.fc21.x86_64
other involved packages: python-blivet-0.61.9-1.fc21.noarch, python-libs-2.7.8-7.fc21.x86_64
product:        Fedora
release:        Fedora release 21 (Twenty One)
type:           anaconda
version:        Fedora
Created attachment 958009 [details] File: anaconda-tb
Created attachment 958010 [details] File: anaconda.log
Created attachment 958011 [details] File: environ
Created attachment 958012 [details] File: journalctl
Created attachment 958013 [details] File: lsblk_output
Created attachment 958014 [details] File: nmcli_dev_list
Created attachment 958015 [details] File: os_info
Created attachment 958016 [details] File: program.log
Created attachment 958017 [details] File: storage.log
Created attachment 958018 [details] File: ifcfg.log
See bug #1158968: https://bugzilla.redhat.com/show_bug.cgi?id=1158968

If I disconnect drives, this problem does not occur. It seems like an enumeration problem where size is the trigger, though I suppose it could be the type of one of the drives or partitions. I need guidance on how to verify. Both the MATE i686 and Workstation x86_64 builds from today fail while enumerating drives (Fedora-Live-Workstation-x86_64-21-20141116.iso, Fedora-Live-MATE_Compiz-i686-21-20141116.iso).
Created attachment 958021 [details] system drives and partitions
I made sdc RAID READY and disconnected the other drives. It failed in the same way, so it's not about the size of the enumeration.
To me this looks like a HW or dmraid issue:

Nov 16 10:29:31 localhost kernel: device-mapper: ioctl: device doesn't appear to be in the dev hash table.
Nov 16 10:29:31 localhost kernel: sdb: sdb1 sdb2 sdb3 < sdb5 sdb6 > sdb4
Nov 16 10:29:32 localhost fedora-dmraid-activation[729]: device-mapper: resume ioctl on pdc_deehfcfgfb6 failed: Invalid argument
Nov 16 10:29:32 localhost kernel: device-mapper: table: 253:9: dm-3 too small for target: start=2281936896, len=648339456, dev_size=2930146048
Nov 16 10:29:32 localhost fedora-dmraid-activation[729]: create/reload failed on pdc_deehfcfgfb6
Nov 16 10:29:32 localhost kernel: kvm: Nested Virtualization enabled
Nov 16 10:29:32 localhost kernel: kvm: Nested Paging enabled
Nov 16 10:29:33 localhost kernel: device-mapper: ioctl: device doesn't appear to be in the dev hash table.
Nov 16 10:29:34 localhost kernel: device-mapper: table: 253:17: dm-9 too small for target: start=2844411904, len=85864448, dev_size=2930146048
Nov 16 10:29:34 localhost fedora-dmraid-activation[729]: device-mapper: resume ioctl on pdc_bdabcifbfi9 failed: Invalid argument
Nov 16 10:29:34 localhost fedora-dmraid-activation[729]: create/reload failed on pdc_bdabcifbfi9
Nov 16 10:29:34 localhost kernel: sda: sda1 sda3 sda4 < sda5 sda6 sda7 sda8 sda9 sda10 sda11 sda12 sda13 sda14 sda15 sda16 sda17 >
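For the record, the "too small for target" arithmetic works out exactly, using only the values from the log above (a hand check, not anything from the tools):

    dm-3: start + len = 2281936896 + 648339456 = 2930276352 > dev_size = 2930146048
    dm-9: start + len = 2844411904 +  85864448 = 2930276352 > dev_size = 2930146048

In other words, the partitions the kernel found on the raw disks run 130304 sectors past the end of the dmraid-mapped devices, which is why the resume ioctl fails. Presumably the BIOS RAID metadata occupies the tail of each disk, making the mapped device smaller than the raw one.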
You have some corruption or stale metadata on your drive(s):

10:34:25,260 INFO blivet: no usable disklabel on pdc_bdabcifbfi

Clean that up before starting an installation.
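If you want to check for and remove stale BIOS RAID metadata, something like the following should work (a sketch only; /dev/sdX is a placeholder, the erase step is destructive, so back up and confirm the device names first, and see the dmraid man page):

    dmraid -r              # list the RAID metadata dmraid finds on each raw disk
    dmraid -s              # show the RAID sets it would assemble from that metadata
    wipefs /dev/sdX        # read-only: list every metadata/filesystem signature on a disk
    dmraid -r -E /dev/sdX  # erase the BIOS RAID metadata from that disk (destructive)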
I'm not sure this is possible until I try it, but do you mean: give the drive a new partition table, make it RAID READY, and verify whether it fails?
It didn't fail until I added partitions to the empty RAID READY drive. I think it's the extended partition. See also bug #1165292: https://bugzilla.redhat.com/show_bug.cgi?id=1165292
The basic issue is that parted is rejecting the partition table as invalid while there are partitions the kernel has recognized and created device nodes for. You just have to figure out what the craziness is and fix it. Actually fix it. No dd'ing 512 bytes to the beginning of the disk. Figure out what is supposed to be there and what is not, and make it so. If you have to zero the whole drive, fine. I don't care how you do it, but your system isn't usable as it stands now.
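A quick way to see the disagreement between the two views (a sketch; substitute your real device names for /dev/sdX):

    cat /proc/partitions      # the partitions the kernel has created device nodes for
    parted -s /dev/sdX u s p  # the table as parted parses it
    blkid                     # the signatures libblkid reports on each node

Any partition the kernel lists that parted rejects (or vice versa) is where the craziness lives.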
*** Bug 1165292 has been marked as a duplicate of this bug. ***
I didn't use dd - I used the BIOS's RAID management to change the drive to be RAID READY. I used gparted to remove the existing partitions and to create a new partition table; as an empty drive, the error did not occur. But I then used gparted to add partitions, and trying to install produced the error. That failure was captured in bug #1165292.
What does the output of 'parted -s /dev/whatever u s p' look like for those disks? Does it throw an assertion for any of them?
I verified the drives were still 'RAID Ready' and here's the output, run from a running FC20 x86_64 installation. No assertions were thrown.

[root ~]# parted -s /dev/sdc u s p
Model: ATA ST3320620AS (scsi)
Disk /dev/sdc: 625142448s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start       End         Size        Type      File system  Flags
 1      2048s       4196351s    4194304s    primary   ext4         boot
 3      4196352s    616753151s  612556800s  extended
 5      4198400s    45158399s   40960000s   logical   ext4
 6      45160448s   86120447s   40960000s   logical   ext4
 2      616753152s  625141759s  8388608s    primary   ext4

[root ~]# parted -s /dev/sdb u s p
Model: ATA ST31500341AS (scsi)
Disk /dev/sdb: 2930277168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start        End          Size         Type      File system  Flags
 1      2048s        206847s      204800s      primary   ntfs
 2      206848s      920762367s   920555520s   primary   ntfs
 4      948674560s   1034539007s  85864448s    primary   ext4
 3      1034539008s  2930276351s  1895737344s  extended
 5      1034541056s  2281934847s  1247393792s  logical   ext4
 6      2281936896s  2930276351s  648339456s   logical   ext4

[root ~]# parted -s /dev/sda u s p
Model: ATA WDC WD15EADS-22P (scsi)
Disk /dev/sda: 2930277168s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start        End          Size         Type      File system     Flags
 1      2048s        2056319s     2054272s     primary   ext4            boot
 3      2056320s     18426554s    16370235s    primary   linux-swap(v1)
 4      88311808s    2930277167s  2841965360s  extended
11      88313856s    171257855s   82944000s    logical   ext4
12      171259904s   276807679s   105547776s   logical   ext4
13      276809728s   382273535s   105463808s   logical   ext4
16      382275584s   570058751s   187783168s   logical   ext4
14      570060800s   776781823s   206721024s   logical   ext4
 5      776783872s   881881087s   105097216s   logical   ext4
10      939200512s   1042022399s  102821888s   logical   ext4
15      1165920256s  1249284095s  83363840s    logical   ext4
 6      1249286144s  2253107199s  1003821056s  logical   ext4
17      2253109248s  2377736191s  124626944s   logical   ext4
 7      2377738240s  2582538239s  204800000s   logical   ext4
 8      2582540288s  2788370431s  205830144s   logical   ext4
 9      2844411904s  2930276351s  85864448s    logical   ext4
I'm not sure what 'raid ready' actually does (I don't have any BIOS RAID systems to look at), but I suspect what happened is that you ran gparted on the wrong device. Instead of the raw devices, I think you should be using the /dev/mapper/pdc_* devices that get created by device-mapper on boot. I'd back up any data you want to save and start over, using anaconda to handle partitioning if possible. But if not, point gparted at the /dev/mapper/ device instead of the raw disks.
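To find the device-mapper names to point gparted at (names like pdc_* vary from machine to machine):

    ls /dev/mapper/   # dmraid-assembled devices appear here alongside 'control'
    dmsetup ls        # list all device-mapper devices with their major:minor numbers
    dmraid -s         # show the BIOS RAID sets and their status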
I disconnected all but the scratch drive (previously sdc) and ran from the live CD:

[liveuser@localhost ~]$ su -c 'parted -s /dev/mapper/pdc_ibghhjfee u s p'
Error: Can't have a partition outside the disk!
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pdc_ibghhjfee: 625011328s
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

gparted shows an error on the linux-swap partition: unable to probe device: No such file or directory.

I created these partitions from the FC20 install, and on that system I don't have anything in /dev/mapper except /dev/mapper/control. When I create them using gparted booted from the live CD, it does, as you suggest, use the mapper device, and it is successful.

For my existing system there are a lot of mounts I would have to define in the installer's drive configuration, or I could yum upgrade to FC21. I'm hesitant to remove the RAID stripe on the two 1.5 TB drives full of data; my testing with the scratch drive seems to indicate it's non-destructive. But as things stand, that would be the failure mode for me: the installer would fail enumerating the drives. If it's worth any more attention, I'm happy to provide any info I can. My thanks for guiding me through thus far!
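Incidentally, the "partition outside the disk" error fits the same arithmetic as the earlier "too small for target" failures (numbers taken from Comment 22 and the output above; the metadata interpretation is a guess, and assumes the table on the drive is still the one shown in Comment 22):

    raw /dev/sdc:               625142448 sectors
    /dev/mapper/pdc_ibghhjfee:  625011328 sectors (131120 fewer, presumably the BIOS RAID metadata at the end)
    partition 2 on sdc ends at sector 625141759 - past the end of the mapped device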
(In reply to Richard S. Hendershot from comment #24)
> I disconnected all but the scratch (previously sdc) drives and ran from live
> cd:
>
> [liveuser@localhost ~]$ su -c 'parted -s /dev/mapper/pdc_ibghhjfee u s p'
> Error: Can't have a partition outside the disk!
> Model: Linux device-mapper (linear) (dm)
> Disk /dev/mapper/pdc_ibghhjfee: 625011328s
> Sector size (logical/physical): 512B/512B
> Partition Table: unknown
> Disk Flags:
>
> gparted shows an error on linux swap partition: unable to probe device: No
> such file or directory
>
> I created these partition from FC20 install and on that I don't have
> anything in /dev/mapper except /dev/mapper/control.
>
> When I create them using gparted booted live cd, it does as you suggest use
> mapper and it is successful
>
> For my existing system there are a lot of mounts that I have to define in
> the installer drive configuration, or yum upgrade to FC21. I'm hesitant to
> remove that raid stripe on two 1.5 TB drive full of data. My testing with
> the scratch drive seems to indicate it's non-destructive. But that would be
> the failure mode for me all things as they are: the installer would fail
> enumerating the drives.

If the output in Comment 22 is from the system we are talking about here, I am not sure you actually have anything even distantly resembling RAID, as each disk has different partitions. I would say you have zero redundancy, if that is what you are after with RAID. I see NTFS on one of your disks - if your Windows installation expects fakeRAID (the dmraid thing), you may be heading into trouble with your data. This is clearly a misconfiguration and I suggest closing as NOTABUG.
In my opinion the solution, if the RAID is not used, is to turn it off in your BIOS. Otherwise you risk that something in your system may attempt to fix the RAID, which is now "broken". You should be thankful for the bug: it meant the installer did not destroy your data, e.g. by doing re-partitioning. I am leaving it to the anaconda developers to decide whether they want to handle this case.
Yes, C22 is how the system is represented under Fedora 20. It has /dev/sdc, which is the drive in the output of C24, running as the only drive (for testing). The SATA controller offers raw IDE, AHCI, and RAID, and I've tried this with both of the latter two settings. While the drive has some kind of RAID stripe (managed by the RAID BIOS), it isn't joined in a RAID. My normal setting is AHCI and I don't currently use RAID on this box (it was set up as-shipped).

As an aside, I bought and set this up in 2010, when I had to use the nodmraid parameter. I think it's clear I should have removed the RAID metadata (but it hinted I'd lose the current Windows installation if I attempted that). Scared.

I've learned a lot through this process; my thanks, and I hope this discussion helps others. One suggested improvement would be to show the parted error message to the user. S/he would then likely end up installing gparted and seeing that the partition wasn't found. Uncaught, s/he just reports YAB or goes away angry.

I think having this metadata set on a drive that is not in a RAID is a valid configuration. Since multiple setup steps are needed, and since it's potentially destructive to change on a mature system, having the drive "prepped" as it is initially added is logical - but I'm no RAID expert.
It is fairly difficult to detect all the possible ways for things to fail, and old metadata hanging around is one of them. We can't know whether it is valid or not. I'm going to close this as NOTABUG, and I would suggest you make sure you have good backups before anything else.
Another user experienced a similar problem:

Fedora installation; the PC system contains nvidia RAID; Fedora can't scan the RAID drive and installation is interrupted.

cmdline:        /usr/bin/python /sbin/anaconda --liveinst --method=livecd:///dev/mapper/live-base
cmdline_file:   BOOT_IMAGE=/isolinux/vmlinuz0 root=live:LABEL=Fedora-Live-WS-x86_64-21-5 ro rd.live.image quiet rhgb
hashmarkername: anaconda
kernel:         3.17.4-301.fc21.x86_64
other involved packages: python-blivet-0.61.13-1.fc21.noarch, python-libs-2.7.8-7.fc21.x86_64
package:        anaconda-core-21.48.21-1.fc21.x86_64
packaging.log:
product:        Fedora
reason:         DeviceTreeError: failed to scan disk nvidia_aeajhbdc
release:        Fedora release 21 (Twenty One)
version:        Fedora