| Summary: | hal/udev not activating partitions at start | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | cristi falcas <cristi.falcas> |
| Component: | dmraid | Assignee: | LVM and device-mapper development team <lvm-team> |
| Status: | CLOSED NOTABUG | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 15 | CC: | agk, bmr, cristi.falcas, dwysocha, harald, hdegoede, heinzm, jonathan, kay, lvm-team, mbroz, prockai, zkabelac |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i686 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-10-05 14:08:12 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description

cristi falcas 2011-09-22 18:48:35 UTC

---

Do you have a stale RAID signature on /dev/sda? Check with:

    # udevadm info --query=property --name=/dev/sda | grep ID_FS_TYPE

and see if it shows *_raid_member. You can get rid of it with:

    # mdadm --zero-superblock /dev/sda

---

You are correct:

    ID_FS_TYPE=adaptec_raid_member

Will the mdadm command delete my LVM info as well? My question is whether I will lose data after running that command or not.

---

(In reply to comment #2)
> Will the mdadm command delete my LVM info as well? My question is whether I
> will lose data after running that command or not.

Oh, mdadm would not clear that RAID info. dmraid might do this with:

    # dmraid -E /dev/sda

I'll reassign to dmraid. Maybe they can provide more info on how to get rid of that signature.

---

I think I have the same problem as in bug 517761.

I tried:

    dmraid -E /dev/sda
    dmraid -r -E /dev/sda
    dmraid -x /dev/sda

all with the same result:

    ERROR: asr: Invalid magic number in RAID table; saw 0x0, expected 0x900765C4 on /dev/sda

I gave up eventually and zeroed the entire disk. From what I've read, it seems that the RAID information is somewhere at the end of the disk, and I didn't know what exactly to delete.

Thank you for your support.

---

(In reply to comment #4)
> I gave up eventually and zeroed the entire disk. From what I've read, it seems
> that the RAID information is somewhere at the end of the disk, and I didn't
> know what exactly to delete.

Just for reference: I found another tool which cleans signatures:

    # man wipefs

---

I've run wipefs on the sdc disk, which had the exact same issue, and it cleaned the signature correctly.