Bug 730502 - lv_root detected as silicon_medley_raid_member after reboot
Summary: lv_root detected as silicon_medley_raid_member after reboot
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 15
Hardware: i686
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-08-13 20:19 UTC by Sandro Bonazzola
Modified: 2011-08-22 20:59 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-08-22 20:59:43 UTC
Type: ---
Embargoed:



Description Sandro Bonazzola 2011-08-13 20:19:37 UTC
I've selected lvm2 as the component because it seemed the most closely related one, but I'm not sure it is the right choice.

Description of problem:
This morning my system worked fine.
After a reboot, without any new installation, the system tells me that no root can be found; where there should be an ext4 file system, it now appears to contain a silicon_medley_raid_member.

Executed a S.M.A.R.T. short test from the BIOS; everything is OK.
Using the F15 DVD I opened a rescue shell and executed vgck and pvck; everything seemed OK.
Activated the LVM volume and called fsck.ext4 -v on vg_root; no errors found.
Called fsck.ext4 -f -v on vg_root; no errors found.

mount -t ext4 on vg_root works fine, but if the type is not specified, the detected fs type is silicon_medley_raid_member.
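For reference, the rescue-shell steps above boil down to roughly this sequence (the VG/LV names and device paths are placeholders; the actual names on this system differ):

lvm vgscan
lvm vgchange -ay vg_myhost                   # activate the LVM volumes
vgck vg_myhost                               # VG metadata check: OK
pvck /dev/sda2                               # PV metadata check: OK
fsck.ext4 -v /dev/vg_myhost/lv_root          # no errors
fsck.ext4 -f -v /dev/vg_myhost/lv_root       # forced check, still no errors
mount -t ext4 /dev/vg_myhost/lv_root /mnt    # works when the type is forced
mount /dev/vg_myhost/lv_root /mnt            # autodetection picks the raid type instead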

I'm backing up all relevant data on my system, but I would like to understand what happened and, if possible, restore the system without reinstalling everything.

How reproducible:
Always reproducible

Any idea on what happened?

Comment 1 Milan Broz 2011-08-14 12:19:32 UTC
It seems the system now sees a fake RAID signature on the disk; I guess an mdadm update started to recognize some old SiL RAID signature.

If you do not use RAID, try adding rd_NO_MD to the kernel boot parameters (or the equivalent; see man dracut / dracut.kernel).
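For example, the kernel line in /boot/grub/grub.conf would then look roughly like this (the kernel version and root device here are only placeholders, not your actual values):

kernel /vmlinuz-<version> ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_MD rhgb quiet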

Comment 2 Sandro Bonazzola 2011-08-17 10:11:22 UTC
rd_NO_MD and rd_NO_DM are already specified in the GRUB kernel boot parameters.
The notebook has no BIOS RAID and just one hard disk; I have never used RAID on this system.

It seems something like this: http://ubuntuforums.org/showthread.php?t=1711929

The only difference is that I'm using ext4 instead of ext3, and ext4 works fine if the type is specified. Maybe ext4 and silicon_medley_raid_member have similar signatures and some bits have somehow been corrupted?

Comment 3 Sandro Bonazzola 2011-08-17 10:37:17 UTC
BLKID_DEBUG=0xffff blkid on the LVM volume says:

libblkid: debug mask set to 0xffff.
creating blkid cache (using default cache)
need to revalidate lv_root (cache time 2147483648.0, stat time 1313584184.138354,
	time since last check 3461068058)
ready for low-probing, offset=0, size=20971520000
found entire diskname for devno 0xfd01 as dm-1
whole-disk: YES, regfile: NO
zeroize wiper
chain safeprobe superblocks ENABLED
--> starting probing loop [SUBLKS idx=-1]
[0] linux_raid_member:
	call probefunc()
	buffer read: off=20971454464 len=64
	buffer read: off=20971511808 len=256
	buffer read: off=0 len=256
	buffer read: off=4096 len=256
[1] ddf_raid_member:
	call probefunc()
	buffer read: off=20971519488 len=512
	buffer read: off=20971388416 len=512
[2] isw_raid_member:
	call probefunc()
	buffer read: off=20971518976 len=48
[3] lsi_mega_raid_member:
	call probefunc()
	reuse buffer: off=20971519488 len=512
[4] via_raid_member:
	call probefunc()
	reuse buffer: off=20971519488 len=512
[5] silicon_medley_raid_member:
	call probefunc()
	reuse buffer: off=20971519488 len=512
assigning TYPE [superblocks]
<-- leaving probing loop (type=silicon_medley_raid_member) [SUBLKS idx=5]
chain safeprobe topology DISABLED
chain safeprobe partitions DISABLED
zeroize wiper
returning TYPE value
    creating new cache tag head TYPE
lv_root: devno 0xfd01, type silicon_medley_raid_member
reseting probing buffers
buffers summary: 1904 bytes by 7 read() call(s)
lv_root: TYPE="silicon_medley_raid_member" 
writing cache file /etc/blkid/blkid.tab (really /etc/blkid/blkid.tab)
freeing cache struct
  freeing dev lv_root (silicon_medley_raid_member)
  dev: name = lv_root
  dev: DEVNO="0xfd01"
  dev: TIME="1313584410.543761"
  dev: PRI="0"
  dev: flags = 0x00000001
    tag: TYPE="silicon_medley_raid_member"

    freeing tag TYPE=silicon_medley_raid_member
    tag: TYPE="silicon_medley_raid_member"
    freeing tag TYPE=(NULL)
    tag: TYPE="(null)"

Comment 4 Milan Broz 2011-08-17 10:38:34 UTC
I do not think blkid detects it wrongly; perhaps both signatures are simply present.

Perhaps "dmraid -E" or wipefs can wipe the fake raid signature?

Comment 5 Sandro Bonazzola 2011-08-17 10:58:10 UTC
wipefs on lv_root shows:


offset               type
----------------------------------------------------------------
0x438                ext4   [filesystem]
                     UUID:  0fb3d4f8-fea5-4d22-ae67-91e897d67c14

0x4e1fffe60          silicon_medley_raid_member   [raid]

Comment 6 Heinz Mauelshagen 2011-08-17 11:52:51 UTC
(In reply to comment #5)
> wipefs on lv_root shows:
> 
> 
> offset               type
> ----------------------------------------------------------------
> 0x438                ext4   [filesystem]
>                      UUID:  0fb3d4f8-fea5-4d22-ae67-91e897d67c14
> 
> 0x4e1fffe60          silicon_medley_raid_member   [raid]

Did you try "dmraid -E" yet, like Milan proposed?

Comment 7 Sandro Bonazzola 2011-08-17 12:04:06 UTC
dmraid -E -r says there is no raid disk with the name lv_root.
The offset for silicon_medley_raid_member seems to be quite high.
Is it safe to call wipefs -o 0x4e1fffe60 on lv_root?
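A quick sanity check on that offset, using the device size from the blkid debug output above (20971520000 bytes):

echo $((0x4e1fffe60))                 # 20971519584
echo $((20971520000 - 0x4e1fffe60))   # 416

So the signature sits 416 bytes before the end of the LV, inside the last 512-byte sector that blkid read at offset 20971519488. That matches the usual location of SiL/Medley fake-raid metadata at the end of a device, well away from the ext4 superblock near the start.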

Comment 8 Sandro Bonazzola 2011-08-17 12:47:24 UTC
Called wipefs -o 0x4e1fffe60 on lv_root; now the mount command correctly detects the ext4 fs.
The system can now boot.
Any idea what caused this issue, or any hint on where to look for the cause?
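For the record, the sequence was roughly the following (the device path is a placeholder for the real lv_root mapping):

wipefs -o 0x4e1fffe60 /dev/mapper/vg_xxxx-lv_root   # erase only the stray signature
wipefs /dev/mapper/vg_xxxx-lv_root                  # only the ext4 entry is left now
blkid /dev/mapper/vg_xxxx-lv_root                   # TYPE is detected as ext4 again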

Comment 9 Milan Broz 2011-08-17 13:09:17 UTC
That disk was probably part of a fake RAID array before, and the signature was still there.
After an mdadm or blkid update, it started to prefer the RAID signature over the ext4 one.

I guess the issue can be closed now, right?

Comment 10 Sandro Bonazzola 2011-08-22 20:59:43 UTC
(In reply to comment #9)
> That disk was probably part of fake raid array before and signature was still
> there.

I never used RAID on that system. If the signature was there, it was just garbage left over from a previous partition that was not zeroed during a partition resize / format.

> After mdadm or blkid update it starts to prefer raid signature before ext4 one.
> 
> I guess the issue can be closed now, right?

Well, the system now seems to be OK. I'm just curious about the dynamics of the incident, but I can set the status to CLOSED NOTABUG. I've chosen NOTABUG because I can't find any evidence that a specific package caused the issue.

