Bug 740636 - hal/udev not activating partitions at start
Summary: hal/udev not activating partitions at start
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: dmraid
Version: 15
Hardware: i686
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-22 18:48 UTC by cristi falcas
Modified: 2011-10-05 14:08 UTC
CC: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-10-05 14:08:12 UTC



Description cristi falcas 2011-09-22 18:48:35 UTC
Description of problem:
The machine has 2 disks with the following configuration:
/dev/sda1            2048   390721967   195359960   8e  Linux LVM
and
/dev/sdc1            2048     1026047      512000   83  Linux
/dev/sdc2         1026048   156301487    77637720   8e  Linux LVM

The boot partition is on sdc1 and the root on an LV on sdc2. The LVs from sda are in fstab and need to be mounted at boot time.
When the machine boots, the partition on disk sda is not visible to the system. When I'm prompted to enter the root password to fix the fstab problems, I have 2 options to fix this:
- fdisk /dev/sda and enter "w"
- partprobe /dev/sda
If I try kpartx -a /dev/sda, nothing happens.
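
(For reference, a hedged sketch of commands that force the kernel to re-read /dev/sda's partition table from the emergency shell; partprobe ships with parted and blockdev with util-linux, so use whichever is installed:)

# partprobe /dev/sda
# blockdev --rereadpt /dev/sda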

Version-Release number of selected component (if applicable):
hal 0.5.14-6.fc15
udev 167-6.fc15

How reproducible:
It happens every time on my computer with 2 identical disks

Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:

Comment 1 Harald Hoyer 2011-09-23 08:05:38 UTC
do you have a stale raid signature on /dev/sda?

check with:

# udevadm info --query=property --name=/dev/sda|grep ID_FS_TYPE

and see if it shows *_raid_member.

you can get rid of it with:

# mdadm --zero-superblock /dev/sda

Comment 2 cristi falcas 2011-09-23 09:54:08 UTC
You are correct:
ID_FS_TYPE=adaptec_raid_member

Will the mdadm command delete my LVM info as well? My question is whether I will lose data after running that command.

Comment 3 Harald Hoyer 2011-09-23 10:07:34 UTC
(In reply to comment #2)
> You are correct:
> ID_FS_TYPE=adaptec_raid_member
> 
> Will the mdadm command delete my LVM info as well? My question is whether I
> will lose data after running that command.

oh.. mdadm would not clear that raid info...

dmraid might do this with:

# dmraid -E /dev/sda

I'll reassign to dmraid. Maybe they can provide more info on how to get rid of that signature.

Comment 4 cristi falcas 2011-10-01 20:29:29 UTC
I think I have the same problem as in bug 517761.

I tried:
dmraid -E /dev/sda
dmraid -r -E /dev/sda
dmraid -x /dev/sda
all with the same result:
ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sda

I gave up eventually and zeroed the entire disk. From what I've read, it seems that the raid information is somewhere at the end of the disk and I didn't know what exactly to delete.
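
(For reference only, a rough sketch of how the end-of-disk metadata could have been cleared without zeroing the whole drive, assuming the Adaptec ASR signature really sits in the last few sectors; the 2048-sector (1 MiB) window is a guess and a wrong seek value will destroy data, so treat this as illustrative:)

# blockdev --getsz /dev/sda
# dd if=/dev/zero of=/dev/sda bs=512 count=2048 seek=$(( $(blockdev --getsz /dev/sda) - 2048 ))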

Thank you for your support.

Comment 5 Harald Hoyer 2011-10-04 10:46:54 UTC
(In reply to comment #4)
> I think I have the same problem as in bug 517761.
> 
> I tried:
> dmraid -E /dev/sda
> dmraid -r -E /dev/sda
> dmraid -x /dev/sda
> all with the same result:
> ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on
> /dev/sda
> 
> I gave up eventually and zeroed the entire disk. From what I've read, it seems
> that the raid information is somewhere at the end of the disk and I didn't know
> what exactly to delete.
> 
> Thank you for your support.

just for the reference: 

I found another tool, which cleans signatures:

# man wipefs
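
(A minimal usage sketch: run without options, wipefs only lists the signatures it finds on the device; the -a/--all flag actually erases them, so list first and wipe second:)

# wipefs /dev/sda
# wipefs -a /dev/sda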

Comment 6 cristi falcas 2011-10-05 07:53:26 UTC
I've run wipefs on the sdc disk, which also had the exact same issue, and it cleaned the signature correctly.

