| Summary: | Cannot lvcreate with 1 mirror, after destroying md device | | |
|---|---|---|---|
| Product: | [Community] LVM and device-mapper | Reporter: | viric |
| Component: | device-mapper | Assignee: | LVM and device-mapper development team <lvm-team> |
| Status: | CLOSED NOTABUG | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.02.140 | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac |
| Target Milestone: | --- | Flags: | rule-engine: lvm-technical-solution? |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-19 22:15:01 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | lvcreate log with -vvvvv | | |
**Description** (viric, 2016-05-02 19:36:08 UTC)
dmesg part of the commands above:

```
[17071.188917] md128: detected capacity change from 536281088 to 0
[17071.188937] md: md128 stopped.
[17071.188951] md: unbind<sda2>
[17071.198130] md: export_rdev(sda2)
[17071.198224] md: unbind<sdb6>
[17071.206077] md: export_rdev(sdb6)
[17083.242651] md: bind<sda2>
[17112.599162] device-mapper: table: 254:5: linear: Device lookup failed
[17112.599291] device-mapper: ioctl: error adding target to table
```

```
[root@atom:/mnt]# dmsetup info
Name:              teraraid-teraraid_main
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 4
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAlNtPhiAhrg0utXGkhfEcFyasgMEpLt0w

Name:              teraraid-teraraid_main_rimage_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 3
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAdXT1HBv1G65AxEX1MOjruDVJ2Hjg6TpX

Name:              teraraid-teraraid_main_rimage_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 1
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAM3cCcG0m7FaWS1NzmNS4V0qNtnYL9g4u

Name:              teraraid-teraraid_main_rmeta_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 2
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoA6r7udR5eoR2Du5IHQEfDxzP7ZPfF1F2L

Name:              teraraid-teraraid_main_rmeta_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 0
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAKxtJ0iyEslX31TMzud42XIOzo4gXsZ9f
```

Created attachment 1153031 [details]: lvcreate log with -vvvvv
I attach the log of `lvcreate -vvvvvv -n testlv -m 1 -l 120 test`.
kabi helped me find this out.
On pvcreate: `Incorrect metadata area header checksum on /dev/sda2 at offset 4096`

It seems that after stopping the md128 raid, something partially restarted it.
I tried to wipe the PVs:

```
[root@atom:~]# wipefs -a /dev/sdb6
/dev/sdb6: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
[root@atom:~]# wipefs -a /dev/sda2
wipefs: error: /dev/sda2: probing initialization failed: Device or resource busy
```
Then, /proc/mdstat showed:

```
md128 : inactive sda2[0](S)
      523760 blocks super 1.2
```
Stopping it again (`mdadm -S md128`) made wipefs work.
After wipefs, the pvcreate/vgcreate/lvcreate -m1 chain worked fine.
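
The working recovery sequence can be sketched as a script. The device names (sda2, sdb6), the array name (md128), and the VG/LV names (test, testlv) are the ones from this report; the `DRY_RUN` guard is my addition so the sketch can be inspected without touching real devices, since every step is destructive:

```shell
#!/bin/sh
# Sketch of the recovery sequence from this report. DRY_RUN (on by default,
# an addition for safety) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mdadm -S md128                       # stop the md array holding the old PVs
run udevadm settle                       # wait until udev has processed the removal
run wipefs -a /dev/sdb6                  # erase the stale LVM2_member signature
run wipefs -a /dev/sda2                  # failed with EBUSY until the array was really stopped
run pvcreate /dev/sda2 /dev/sdb6
run vgcreate test /dev/sda2 /dev/sdb6
run lvcreate -n testlv -m 1 -l 120 test  # the mirror creation that originally failed
```

Run with `DRY_RUN=0` only on a machine where these devices really are disposable.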
After 'mdadm --stop', the user needs to somehow wait until the md array has really stopped. Using 'udevadm settle' usually does the trick, but it is not the right answer: mdadm should rather wait until the kernel has finished the STOP_ARRAY ioctl. This is not an LVM bug. If it is still a problem, re-open against 'mdadm'.
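
Besides 'udevadm settle', one way a script could wait is to poll /proc/mdstat until the array name disappears; this polling helper is an illustrative assumption of mine, not something mdadm or LVM provides:

```shell
#!/bin/sh
# Sketch (assumption): after 'mdadm --stop', poll /proc/mdstat until the
# named array no longer appears, i.e. the kernel has finished tearing it
# down. The helper name and the polling approach are illustrative.
wait_md_stopped() {
    _name=$1
    # Arrays appear in /proc/mdstat as e.g. "md128 : inactive sda2[0](S)".
    while [ -r /proc/mdstat ] && grep -q "^${_name} :" /proc/mdstat; do
        sleep 0.2
    done
}

# e.g. after: mdadm -S md128
wait_md_stopped md128
```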