Bug 730502
| Summary: | lv_root detected as silicon_medley_raid_member after reboot | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Sandro Bonazzola <sandro.bonazzola> |
| Component: | lvm2 | Assignee: | LVM and device-mapper development team <lvm-team> |
| Status: | CLOSED NOTABUG | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 15 | CC: | agk, bmarzins, bmr, dwysocha, heinzm, jonathan, kzak, lvm-team, mbroz, msnitzer, prajnoha, prockai, zkabelac |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i686 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-08-22 20:59:43 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Sandro Bonazzola 2011-08-13 20:19:37 UTC
It seems that the system now sees some fake RAID signature on the disk; I guess an mdadm update started to recognize an old SiL RAID signature. If you do not use RAID, try adding rd_NO_MD to the kernel boot parameters (or equivalent, see man dracut / dracut.kernel).

rd_NO_MD and rd_NO_DM are already specified in the grub kernel boot parameters. The notebook has no BIOS RAID and just one hard disk; I have never used RAID on this system. It looks like something similar to this: http://ubuntuforums.org/showthread.php?t=1711929 The only difference is that I'm using ext4 instead of ext3, and ext4 works fine if specified explicitly. Maybe ext4 and silicon_medley_raid_member have similar signatures and some bits somehow got corrupted? BLKID_DEBUG=0xffff blkid on the LVM volume says:
libblkid: debug mask set to 0xffff.
creating blkid cache (using default cache)
need to revalidate lv_root (cache time 2147483648.0, stat time 1313584184.138354,
time since last check 3461068058)
ready for low-probing, offset=0, size=20971520000
found entire diskname for devno 0xfd01 as dm-1
whole-disk: YES, regfile: NO
zeroize wiper
chain safeprobe superblocks ENABLED
--> starting probing loop [SUBLKS idx=-1]
[0] linux_raid_member:
call probefunc()
buffer read: off=20971454464 len=64
buffer read: off=20971511808 len=256
buffer read: off=0 len=256
buffer read: off=4096 len=256
[1] ddf_raid_member:
call probefunc()
buffer read: off=20971519488 len=512
buffer read: off=20971388416 len=512
[2] isw_raid_member:
call probefunc()
buffer read: off=20971518976 len=48
[3] lsi_mega_raid_member:
call probefunc()
reuse buffer: off=20971519488 len=512
[4] via_raid_member:
call probefunc()
reuse buffer: off=20971519488 len=512
[5] silicon_medley_raid_member:
call probefunc()
reuse buffer: off=20971519488 len=512
assigning TYPE [superblocks]
<-- leaving probing loop (type=silicon_medley_raid_member) [SUBLKS idx=5]
chain safeprobe topology DISABLED
chain safeprobe partitions DISABLED
zeroize wiper
returning TYPE value
creating new cache tag head TYPE
lv_root: devno 0xfd01, type silicon_medley_raid_member
reseting probing buffers
buffers summary: 1904 bytes by 7 read() call(s)
lv_root: TYPE="silicon_medley_raid_member"
writing cache file /etc/blkid/blkid.tab (really /etc/blkid/blkid.tab)
freeing cache struct
freeing dev lv_root (silicon_medley_raid_member)
dev: name = lv_root
dev: DEVNO="0xfd01"
dev: TIME="1313584410.543761"
dev: PRI="0"
dev: flags = 0x00000001
tag: TYPE="silicon_medley_raid_member"
freeing tag TYPE=silicon_medley_raid_member
tag: TYPE="silicon_medley_raid_member"
freeing tag TYPE=(NULL)
tag: TYPE="(null)"
I do not think blkid detects it wrongly; most likely both signatures are present on the volume. Perhaps "dmraid -E" or wipefs can wipe the fake RAID signature?

wipefs on lv_root shows:
offset type
----------------------------------------------------------------
0x438 ext4 [filesystem]
UUID: 0fb3d4f8-fea5-4d22-ae67-91e897d67c14
0x4e1fffe60 silicon_medley_raid_member [raid]
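Before wiping anything, it is worth confirming that the reported RAID offset really falls inside the very last sector of the logical volume, which is where the silicon_medley probe read its buffer in the trace above. A minimal sanity-check sketch, assuming the hypothetical mapper path /dev/mapper/VolGroup-lv_root:

```sh
# Compare the wipefs-reported offset with the size of the LV.
# The mapper path is an assumption; substitute the real device for lv_root.
DEV=/dev/mapper/VolGroup-lv_root
SIZE=$(blockdev --getsize64 "$DEV")   # 20971520000 in the blkid trace above
OFF=$((0x4e1fffe60))                  # 20971519584, from the wipefs output
echo "bytes from signature offset to end of device: $((SIZE - OFF))"
# With the numbers above this prints 416, i.e. the signature sits inside the
# final 512-byte sector -- consistent with the "reuse buffer: off=20971519488
# len=512" read that silicon_medley_raid_member matched on.
```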
(In reply to comment #5)

> wipefs on lv_root shows:
>
> offset type
> ----------------------------------------------------------------
> 0x438 ext4 [filesystem]
> UUID: 0fb3d4f8-fea5-4d22-ae67-91e897d67c14
> 0x4e1fffe60 silicon_medley_raid_member [raid]

Did you try "dmraid -E" yet, as Milan proposed?

dmraid -E -r says there is no RAID disk with the name lv_root. The offset for silicon_medley_raid_member seems quite high. Is it safe to call wipefs -o 0x4e1fffe60 on lv_root?

I called wipefs -o 0x4e1fffe60 on lv_root, and now the mount command correctly detects the ext4 filesystem. The system can boot again. Any idea what caused this issue, or any hint on where to look for the cause?

That disk was probably part of a fake RAID array before, and the signature was still there. After an mdadm or blkid update, it started to prefer the RAID signature over the ext4 one. I guess the issue can be closed now, right?

(In reply to comment #9)

> That disk was probably part of a fake RAID array before, and the signature was still there.

I never used RAID on that system. If the signature was there, it was just garbage left over from a previous partition that was not zeroed during a partition resize or format.

> After an mdadm or blkid update, it started to prefer the RAID signature over the ext4 one.
>
> I guess the issue can be closed now, right?

Well, the system now seems to be OK. I'm just curious about the dynamics of the incident, but I can set the status to CLOSED NOTABUG. I've chosen NOTABUG because I can't find any evidence that a specific package caused the issue.
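For anyone hitting the same symptom, the steps used here can be sketched as follows. This is only an illustration, assuming the hypothetical device path /dev/mapper/VolGroup-lv_root; the wipe offset below is the one reported by wipefs in this bug and will be different on other systems, and wipefs erases data at that offset, so verify it first:

```sh
# List all signatures found on the LV (non-destructive; prints offset/type).
wipefs /dev/mapper/VolGroup-lv_root

# Check whether dmraid actually registers any fake-RAID metadata.
# (In this bug, "dmraid -E -r" reported no RAID disk named lv_root.)
dmraid -r

# Erase only the stale SiL/Medley signature at the offset wipefs reported.
wipefs -o 0x4e1fffe60 /dev/mapper/VolGroup-lv_root

# Verify that blkid now reports ext4 again.
blkid /dev/mapper/VolGroup-lv_root
```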