Description of problem:
I have installed both Fedora Core 4 and Fedora Core Rawhide (updated from Test3). On rawhide I mount my FC4 partitions under /mnt. I had no problems with this when using FC5T2 + rawhide updates, but after performing a clean install of FC5T3 I am no longer able to mount those partitions. Both the Anaconda installer and the rescue disk detect my 2nd HDD as mapper/pdc_ififgffap2 instead of /dev/hdc2 (my previous *working* installation of Test2 behaved the same). The only change I made during the Test3 install was to choose the Xen packages, but the problem remains even though I installed the regular kernel on my system. I tried to mount the partitions via /dev/hdc[123], via LABEL=[/boot|/home|/usr], and even via the /dev/mapper/ path.

Version-Release number of selected component (if applicable):
kernel-xen-hypervisor-2.6.15-1.1955_FC5
kernel-xen-hypervisor-2.6.15-1.1975_FC5
kernel-2.6.15-1.1975_FC5

How reproducible:
Always

Steps to Reproduce:
1. Add existing ext3 partitions to fstab
2. Reboot or run mount -a
3.
Booting fails (with HDD error messages) or mount throws an "invalid block device" error.

Actual results:
Unable to mount the additional partitions

Expected results:
Partitions mounted in /mnt as specified in fstab

Additional info:
[root@athlon64 ~]# fdisk -l

Disk /dev/hda: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1        2611    20972826    7  HPFS/NTFS
/dev/hda2            2612       19535   135942030    f  W95 Ext'd (LBA)
/dev/hda3           19536       24104    36700492+  83  Linux
/dev/hda4           24105       24321     1743052+  82  Linux swap / Solaris
/dev/hda5            2612        6481    31085743+  83  Linux
/dev/hda6            6482       19535   104856223+   b  W95 FAT32

Disk /dev/hdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1   *           1          16      128488+  83  Linux
/dev/hdc2              17        1974    15727635   83  Linux
/dev/hdc3            1975       14593   101362117+  83  Linux

[root@athlon64 ~]# cat /etc/fstab
LABEL=/12                   /             ext3      defaults        1 1
devpts                      /dev/pts      devpts    gid=5,mode=620  0 0
tmpfs                       /dev/shm      tmpfs     defaults        0 0
#/dev/hdc2                  /mnt/fc4      ext3      defaults        1 2
#/dev/mapper/pdc_ififgffap1 /mnt/fc4boot  ext3      defaults        1 2
#/dev/hdc3                  /mnt/fc4home  reiserfs  defaults        1 2
#/dev/hda3                  /mnt/fc4usr   ext3      defaults        1 2
proc                        /proc         proc      defaults        0 0
sysfs                       /sys          sysfs     defaults        0 0
LABEL=SWAP-hda4             swap          swap      defaults        0 0

[root@athlon64 ~]# mount -a
mount: /dev/hdc2 is not a valid block device

[root@athlon64 ~]# rpm -qa | grep kernel
kernel-xen-hypervisor-2.6.15-1.1955_FC5
kernel-xen-hypervisor-2.6.15-1.1975_FC5
kernel-2.6.15-1.1975_FC5

[root@athlon64 ~]# uname -r
2.6.15-1.1975_FC5

[root@athlon64 ~]# ls /dev/mapper
control  pdc_ififgffa

All of the entries commented out in fstab fail to mount.
Created attachment 125149 [details] dmesg
Created attachment 125151 [details] messages
Update: the issue doesn't seem to be related to Xen or the kernel. I did a clean reinstall of FC5T3 with the default package selection. I found out that I am able to mount the problematic HDD after booting the FC5T3 rescue CD. Although the drive isn't available through /dev/hdc*, I could mount it using the following command:

mount /dev/mapper/pdc_ififgffap1 /mnt/fc4boot

Unfortunately this device is unavailable after booting into the installed FC5T3. The following outputs differ between the rescue and installed versions. The installed FC5T3 version:

[maners@athlon64 ~]$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 63 Mar  1 13:33 control
brw-rw---- 1 root disk 253,  0 Mar  1 13:33 pdc_ififgffa
[maners@athlon64 ~]$

And a screen photo taken in rescue mode: http://maners.no-ip.com/ls_mapper_rescue.jpg

I also ran fsck in rescue mode on those pdc_* devices and it completed with no errors.

In addition to the issue with my HDD, I was unable to download any pictures from my camera in FC5T3 and received the following error message in gThumb:

An error occurred in the io-library ('Could not claim the USB device'): Could not claim interface 0 (Operation not permitted). Make sure no other program or kernel module (such as sdc2xx, stv680, spca50x) is using the device and you have read/write access to the device.

In conclusion, there must be something wrong either with my hardware (which I doubt, because it has worked perfectly since FC2) or with some system component responsible for hardware detection. I'm out of ideas what else I could test to troubleshoot this, so if it would help I can provide ssh access to my box.
I forgot to mention that I can run hdparm -tT /dev/hdc on the installed system, and other tools such as gnome-volume-manager and gparted can see the partitions on hdc but are not able to read the filesystems.
Created attachment 125578 [details] anaconda.log

I have found in my anaconda log that it uses dmraid to mount the disk. I don't know why, because I never set up any RAID on my HDD and the onboard RAID controller is disabled in the BIOS. I tried passing dmraid options in the kernel arguments in grub but it didn't help. Here is the interesting part from anaconda.log:

16:28:53 DEBUG : starting dmraids
16:28:53 DEBUG : self.driveList(): ['hda', 'hdc']
16:28:53 DEBUG : DiskSet.skippedDisks: []
16:28:53 DEBUG : DiskSet.skippedDisks: []
16:28:53 DEBUG : starting all dmraids on drives ['hda', 'hdc']
16:28:53 DEBUG : scanning for dmraid on drives ['hda', 'hdc']
16:28:54 DEBUG : got raidset <block.device.RaidSet instance at 0x2ae2f2b62170> (hdc)
16:28:54 DEBUG : valid: True found_devs: 1 total_devs: 1
16:28:54 DEBUG : adding mapper/pdc_ififgffa to isys cache
16:28:54 DEBUG : adding hdc to dmraid cache
16:28:54 DEBUG : removing hdc from isys cache
16:28:54 DEBUG : starting raid <block.device.RaidSet instance at 0x2ae2f2b62170> with mknod=True
16:28:54 DEBUG : done starting dmraids. Drivelist:
16:28:54 DEBUG : hda
16:28:54 DEBUG : mapper/pdc_ififgffa
16:28:55 DEBUG : isys.py:mount()- going to mount /tmp/hda5 on /mnt/sysimage
16:28:55 DEBUG : isys.py:mount()- going to mount /tmp/hda3 on /mnt/sysimage
16:28:55 DEBUG : isys.py:mount()- going to mount /tmp/mapper/pdc_ififgffap1 on /mnt/sysimage
16:28:55 DEBUG : isys.py:mount()- going to mount /tmp/mapper/pdc_ififgffap2 on /mnt/sysimage
16:28:55 DEBUG : isys.py:mount()- going to mount /tmp/mapper/pdc_ififgffap3 on /mnt/sysimage
16:29:02 INFO : moving (1) to step findinstall
16:29:06 INFO : moving (1) to step partitionobjinit
16:29:06 DEBUG : self.driveList(): ['hda', 'mapper/pdc_ififgffa']
Passing 'nodmraid' will disable dmraid support, but there's definitely metadata on the disks to make us think that it should be RAID'd
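For reference, this is how the option would be passed. The installer boot-prompt form is the one confirmed to work later in this report; the grub.conf kernel line for the installed system is my assumption (kernel version and root label taken from the outputs above, and I am not certain the installed initrd honors the flag):

```
boot: linux nodmraid

# assumed grub.conf kernel line on the installed system:
kernel /vmlinuz-2.6.15-1.1975_FC5 ro root=LABEL=/12 nodmraid
```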
Yay, it seems that I solved my problem :-)

[root@athlon64 ~]# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 63 Mar  9 13:17 control
brw-rw---- 1 root disk 253,  0 Mar  9 13:17 pdc_ififgffa
[root@athlon64 ~]# dmraid -r
/dev/hdc: pdc, "pdc_ififgffa", stripe, ok, 234441472 sectors, data@ 0

..and then:

[root@athlon64 ~]# dmraid -a y
RAID set "pdc_ififgffa" already active
[root@athlon64 ~]# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 63 Mar  9 13:17 control
brw-rw---- 1 root disk 253,  0 Mar  9 13:17 pdc_ififgffa
brw-rw---- 1 root disk 253,  1 Mar  9 18:39 pdc_ififgffa1
brw-rw---- 1 root disk 253,  2 Mar  9 18:39 pdc_ififgffa2
brw-rw---- 1 root disk 253,  3 Mar  9 18:39 pdc_ififgffa3

I have no idea how my HDD ended up with RAID metadata, but I suspect it was either my motherboard (the kernel detects sata_promise even though the SATA and RAID controllers are disabled in the BIOS) or some partitioning tool made a mess on my HDD. Anyway, another NOTABUG, CLOSE :-)
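An alternative to disabling dmraid would be to keep the mapping and point fstab at the activated partition nodes; an untested sketch, with device names taken from the ls output above and mount points and filesystem types from the fstab earlier in this report:

```
/dev/mapper/pdc_ififgffa1  /mnt/fc4boot  ext3      defaults  1 2
/dev/mapper/pdc_ififgffa2  /mnt/fc4      ext3      defaults  1 2
/dev/mapper/pdc_ififgffa3  /mnt/fc4home  reiserfs  defaults  1 2
```

Note that this can only work if the RAID set is activated (dmraid -ay) before mount -a runs at boot.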
Yesterday I installed FC5 Final and typed "linux nodmraid" at the CD boot prompt, and it now behaves as FC4 did: no more pdc_ififgffa devices, and I can mount the partitions with no problem using the good old /dev/hdcX method.
I'm closing this bug as CANTFIX because I no longer use this drive, and also because I found out a while ago that there is indeed some RAID metadata stored on the drive (checked with the dmraid utility) - I must have plugged it into the RAID controller at some point and simply didn't remember it. I consider it partly my own mistake. Only partly, because I believe the kernel should not rely solely on the metadata; it should also detect the physical connection and act accordingly - I might be wrong about this though. If you feel that this issue should be worked on further, I can still do some testing and provide feedback if needed.
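For anyone hitting the same symptom, here is a hedged sketch of confirming and removing stale vendor RAID metadata with the dmraid utility. The device name /dev/hdc and the set name pdc_ififgffa are taken from this report; adapt them to your disk, and note the erase step is destructive:

```shell
# 1. List any vendor RAID metadata dmraid recognises (run as root);
#    on the box in this report it printed:
#      /dev/hdc: pdc, "pdc_ififgffa", stripe, ok, 234441472 sectors, data@ 0
#      dmraid -r

# 2. The quoted field is the RAID set name; it can be pulled out of a
#    dmraid -r line like this:
line='/dev/hdc: pdc, "pdc_ififgffa", stripe, ok, 234441472 sectors, data@ 0'
set_name=$(printf '%s\n' "$line" | sed 's/.*"\([^"]*\)".*/\1/')
echo "$set_name"    # pdc_ififgffa

# 3. If the disk is genuinely not part of an array, dmraid can erase the
#    on-disk metadata so the kernel/installer stops claiming the disk
#    (destructive; double-check the device and back up first):
#      dmraid -rE /dev/hdc
```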