Bug 730502 - lv_root detected as silicon_medley_raid_member after reboot
Status: CLOSED NOTABUG
Product: Fedora
Classification: Fedora
Component: lvm2
Version: 15
Hardware: i686 Linux
Priority: unspecified
Severity: urgent
Assigned To: LVM and device-mapper development team
QA Contact: Fedora Extras Quality Assurance
Reported: 2011-08-13 16:19 EDT by Sandro Bonazzola
Modified: 2011-08-22 16:59 EDT (History)
CC: 13 users

Doc Type: Bug Fix
Last Closed: 2011-08-22 16:59:43 EDT

Description Sandro Bonazzola 2011-08-13 16:19:37 EDT
I've selected lvm2 as the component because it seemed the most closely related, but I don't really know whether it is the right choice.

Description of problem:
This morning my system worked fine.
After a reboot, without any new installs, the system now tells me that no root can be found, and where there should be an ext4 file system it now seems there is a silicon_medley_raid_member.

Executed a S.M.A.R.T. short test from the BIOS; everything is OK.
Using the F15 DVD I opened a rescue shell and executed vgck and pvck; everything seemed OK.
Activated the LVM volume and called fsck.ext4 -v on vg_root: no errors found.
Called fsck.ext4 -f -v on vg_root: no errors found.

mount -t ext4 of vg_root works fine, but if the type is not specified, the fs type detected is silicon_medley_raid_member.

I'm doing a backup of all relevant data on my system, but I would like to understand what happened and, if possible, restore the system without reinstalling everything.

How reproducible:
Always reproducible

Any idea on what happened?
Comment 1 Milan Broz 2011-08-14 08:19:32 EDT
It seems the system now sees a stale fake-RAID signature on the disk; I guess an mdadm update started to recognize an old SiL RAID signature.

If you do not use RAID, try adding rd_NO_MD to the kernel boot parameters (or the equivalent; see man dracut / dracut.kernel).
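For reference, this is roughly what such an entry looks like in GRUB legacy on F15 (the kernel version, device names, and other parameters below are illustrative placeholders, not taken from this system):

```
# /boot/grub/grub.conf -- illustrative entry only
title Fedora (2.6.38.8-35.fc15.i686)
        root (hd0,0)
        kernel /vmlinuz-2.6.38.8-35.fc15.i686 ro root=/dev/mapper/vg-lv_root rd_NO_MD rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.38.8-35.fc15.i686.img
```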
Comment 2 Sandro Bonazzola 2011-08-17 06:11:22 EDT
rd_NO_MD and rd_NO_DM are already specified in the grub kernel boot parameters.
The notebook has no BIOS RAID and just one HD; I have never used RAID on this system.

It seems something like this: http://ubuntuforums.org/showthread.php?t=1711929

The only difference is that I'm using ext4 instead of ext3, so ext4 works fine when specified. Maybe ext4 and silicon_medley_raid_member have similar signatures and some bits are somehow corrupted?
Comment 3 Sandro Bonazzola 2011-08-17 06:37:17 EDT
BLKID_DEBUG=0xffff blkid on the LVM volume says:

libblkid: debug mask set to 0xffff.
creating blkid cache (using default cache)
need to revalidate lv_root (cache time 2147483648.0, stat time 1313584184.138354,
	time since last check 3461068058)
ready for low-probing, offset=0, size=20971520000
found entire diskname for devno 0xfd01 as dm-1
whole-disk: YES, regfile: NO
zeroize wiper
chain safeprobe superblocks ENABLED
--> starting probing loop [SUBLKS idx=-1]
[0] linux_raid_member:
	call probefunc()
	buffer read: off=20971454464 len=64
	buffer read: off=20971511808 len=256
	buffer read: off=0 len=256
	buffer read: off=4096 len=256
[1] ddf_raid_member:
	call probefunc()
	buffer read: off=20971519488 len=512
	buffer read: off=20971388416 len=512
[2] isw_raid_member:
	call probefunc()
	buffer read: off=20971518976 len=48
[3] lsi_mega_raid_member:
	call probefunc()
	reuse buffer: off=20971519488 len=512
[4] via_raid_member:
	call probefunc()
	reuse buffer: off=20971519488 len=512
[5] silicon_medley_raid_member:
	call probefunc()
	reuse buffer: off=20971519488 len=512
assigning TYPE [superblocks]
<-- leaving probing loop (type=silicon_medley_raid_member) [SUBLKS idx=5]
chain safeprobe topology DISABLED
chain safeprobe partitions DISABLED
zeroize wiper
returning TYPE value
    creating new cache tag head TYPE
lv_root: devno 0xfd01, type silicon_medley_raid_member
reseting probing buffers
buffers summary: 1904 bytes by 7 read() call(s)
lv_root: TYPE="silicon_medley_raid_member" 
writing cache file /etc/blkid/blkid.tab (really /etc/blkid/blkid.tab)
freeing cache struct
  freeing dev lv_root (silicon_medley_raid_member)
  dev: name = lv_root
  dev: DEVNO="0xfd01"
  dev: TIME="1313584410.543761"
  dev: PRI="0"
  dev: flags = 0x00000001
    tag: TYPE="silicon_medley_raid_member"

    freeing tag TYPE=silicon_medley_raid_member
    tag: TYPE="silicon_medley_raid_member"
    freeing tag TYPE=(NULL)
    tag: TYPE="(null)"
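All of the RAID probes in the log above read near the end of the device, which is where BIOS-RAID metadata is normally stored. A small sketch (offsets copied from the log; the device size is the one blkid printed) showing how far from the end each read lands:

```python
# lv_root size as printed by blkid above.
size = 20971520000

# Read offsets from the probing loop in the debug log.
offsets_from_log = [
    20971454464,  # linux_raid_member
    20971511808,  # linux_raid_member
    20971519488,  # ddf_raid_member (buffer reused by lsi/via/silicon probes)
    20971388416,  # ddf_raid_member
    20971518976,  # isw_raid_member
]

for off in offsets_from_log:
    print(f"off={off}: {size - off} bytes from end of device")

# The silicon_medley prober reused the buffer at off=20971519488 len=512,
# i.e. the very last 512-byte sector of lv_root.
assert size - 20971519488 == 512
```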
Comment 4 Milan Broz 2011-08-17 06:38:34 EDT
I do not think blkid detects it wrongly; perhaps both signatures are simply there.

Perhaps "dmraid -E" or wipefs can wipe the fake raid signature?
Comment 5 Sandro Bonazzola 2011-08-17 06:58:10 EDT
wipefs on lv_root shows:


offset               type
----------------------------------------------------------------
0x438                ext4   [filesystem]
                     UUID:  0fb3d4f8-fea5-4d22-ae67-91e897d67c14

0x4e1fffe60          silicon_medley_raid_member   [raid]
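That silicon_medley offset looks odd until it is compared with the device size from the blkid debug output: 0x4e1fffe60 falls inside the last 512-byte sector of lv_root, exactly the sector the silicon_medley prober read. A quick arithmetic check:

```python
size = 20971520000        # lv_root size, from the blkid debug output
sig_offset = 0x4e1fffe60  # silicon_medley offset reported by wipefs

print("signature offset:", sig_offset)                   # 20971519584
print("bytes before end of device:", size - sig_offset)  # 416

# The stale signature sits inside the final 512-byte sector of the LV,
# which is where blkid's last-sector RAID probes read.
assert size - 512 <= sig_offset < size
```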
Comment 6 Heinz Mauelshagen 2011-08-17 07:52:51 EDT
(In reply to comment #5)
> wipefs on lv_root shows:
> 
> 
> offset               type
> ----------------------------------------------------------------
> 0x438                ext4   [filesystem]
>                      UUID:  0fb3d4f8-fea5-4d22-ae67-91e897d67c14
> 
> 0x4e1fffe60          silicon_medley_raid_member   [raid]

Did you try "dmraid -E" yet, like Milan proposed?
Comment 7 Sandro Bonazzola 2011-08-17 08:04:06 EDT
dmraid -E -r says there is no RAID disk with the name lv_root.
The offset for silicon_medley_raid_member seems quite high.
Is it safe to call wipefs -o 0x4e1fffe60 on lv_root?
Comment 8 Sandro Bonazzola 2011-08-17 08:47:24 EDT
Called wipefs -o 0x4e1fffe60 on lv_root; now the mount command correctly detects the ext4 fs.
The system can now boot.
Any idea what caused this issue, or any hint on where to search for the cause?
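For anyone hitting the same symptom: the wipe is irreversible unless the bytes are saved first (on a real device, dd can copy the sector out before wipefs clears it). A minimal Python sketch of that save-then-zero idea, run here against a scratch file rather than a block device; the file names and sizes are illustrative:

```python
import os

def backup_and_zero(path, offset, length=512):
    """Save `length` bytes at `offset` to a backup file, then zero them in place."""
    with open(path, "r+b") as f:
        f.seek(offset)
        saved = f.read(length)
        with open(path + ".bak", "wb") as bak:
            bak.write(saved)
        f.seek(offset)
        f.write(b"\x00" * length)
    return saved

def restore(path, offset, backup_path):
    """Write the saved bytes back, undoing the wipe."""
    with open(backup_path, "rb") as bak:
        saved = bak.read()
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(saved)

# Demo on a 4 KiB scratch file standing in for the device.
demo = "demo.img"
with open(demo, "wb") as f:
    f.write(b"A" * 4096)

saved = backup_and_zero(demo, 4096 - 512)   # wipe the "last sector"
with open(demo, "rb") as f:
    assert f.read()[-512:] == b"\x00" * 512  # sector is now zeroed

restore(demo, 4096 - 512, demo + ".bak")    # and can be put back
with open(demo, "rb") as f:
    assert f.read() == b"A" * 4096

os.remove(demo)
os.remove(demo + ".bak")
```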
Comment 9 Milan Broz 2011-08-17 09:09:17 EDT
That disk was probably part of a fake-RAID array before, and the signature was still there.
After an mdadm or blkid update, it started to prefer the RAID signature over the ext4 one.

I guess the issue can be closed now, right?
Comment 10 Sandro Bonazzola 2011-08-22 16:59:43 EDT
(In reply to comment #9)
> That disk was probably part of fake raid array before and signature was still
> there.

I have never used RAID on that system. If the signature was there, it was just garbage left over from a previous partition that was not zeroed during a partition resize/format.

> After mdadm or blkid update it starts to prefer raid signature before ext4 one.
> 
> I guess the issue can be closed now, right?

Well, the system now seems to be OK. I'm just curious about the dynamics of the incident, but I can set the status to CLOSED NOTABUG. I chose NOTABUG because I can't find any evidence that a specific package caused the issue.
