Bug 1332294 - Cannot lvcreate with 1 mirror, after destroying md device
Summary: Cannot lvcreate with 1 mirror, after destroying md device
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: device-mapper
Version: 2.02.140
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-02 19:36 UTC by viric
Modified: 2019-08-19 22:15 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-19 22:15:01 UTC
rule-engine: lvm-technical-solution?


Attachments (Terms of Use)
lvcreate log with -vvvvv (77.47 KB, text/plain)
2016-05-02 19:41 UTC, viric

Description viric 2016-05-02 19:36:08 UTC
Description of problem:

The command "lvcreate -n testlv -m 1 -l 120 test" fails to run:
  device-mapper: reload ioctl on (254:5) failed: Device or resource busy
  Failed to activate test/testlv_rmeta_0 for clearing

If I reboot the system, it works.


Version-Release number of selected component (if applicable):
linux 4.4.6

How reproducible:
It happened if I had just destroyed an md device (RAID1) in that boot. It did not happen if I hadn't.

Steps to Reproduce:

---------------------
[root@atom:/mnt]# umount raidtest/

[root@atom:/mnt]# mdadm -S /dev/md128 
mdadm: stopped /dev/md128

[root@atom:/mnt]# pvcreate /dev/sda2 ^C

[root@atom:/mnt]# pvcreate /dev/sda2 
  allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
  Falling back to native LVM signature detection.
WARNING: software RAID md superblock detected on /dev/sda2. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sda2.
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  Physical volume "/dev/sda2" successfully created

[root@atom:/mnt]# pvcreate /dev/sdb6 
  allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
  Falling back to native LVM signature detection.
WARNING: software RAID md superblock detected on /dev/sdb6. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdb6.
  Incorrect metadata area header checksum on /dev/sdb6 at offset 4096
  Physical volume "/dev/sdb6" successfully created

[root@atom:/mnt]# vgcreate test /dev/sda2 /dev/sdb6
  Volume group "test" successfully created

[root@atom:/mnt]# lvcreate -n testlv -m 1 -l 120 test
  device-mapper: reload ioctl on (254:5) failed: Device or resource busy
  Failed to activate test/testlv_rmeta_0 for clearing
--------------------

Comment 1 viric 2016-05-02 19:37:41 UTC
dmesg output from the commands above:
[17071.188917] md128: detected capacity change from 536281088 to 0
[17071.188937] md: md128 stopped.
[17071.188951] md: unbind<sda2>
[17071.198130] md: export_rdev(sda2)
[17071.198224] md: unbind<sdb6>
[17071.206077] md: export_rdev(sdb6)
[17083.242651] md: bind<sda2>
[17112.599162] device-mapper: table: 254:5: linear: Device lookup failed
[17112.599291] device-mapper: ioctl: error adding target to table

[root@atom:/mnt]# dmsetup info
Name:              teraraid-teraraid_main
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 4
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAlNtPhiAhrg0utXGkhfEcFyasgMEpLt0w

Name:              teraraid-teraraid_main_rimage_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 3
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAdXT1HBv1G65AxEX1MOjruDVJ2Hjg6TpX

Name:              teraraid-teraraid_main_rimage_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 1
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAM3cCcG0m7FaWS1NzmNS4V0qNtnYL9g4u

Name:              teraraid-teraraid_main_rmeta_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 2
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoA6r7udR5eoR2Du5IHQEfDxzP7ZPfF1F2L

Name:              teraraid-teraraid_main_rmeta_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 0
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAKxtJ0iyEslX31TMzud42XIOzo4gXsZ9f

Comment 2 viric 2016-05-02 19:41:21 UTC
Created attachment 1153031 [details]
lvcreate log with -vvvvv

I attach the log of "lvcreate -vvvvvv -n testlv -m 1 -l 120 test".

Comment 3 viric 2016-05-02 20:27:24 UTC
kabi helped me find this out.

On pvcreate: Incorrect metadata area header checksum on /dev/sda2 at offset 4096

It seems that after stopping the md128 array, something partially started it again.

I tried to wipe the pvs:

[root@atom:~]# wipefs -a /dev/sdb6
/dev/sdb6: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31

[root@atom:~]# wipefs -a /dev/sda2 
wipefs: error: /dev/sda2: probing initialization failed: Device or resource busy


Then, /proc/mdstat showed:
md128 : inactive sda2[0](S)
      523760 blocks super 1.2

Stopping it again (mdadm -S md128) made wipefs work.

After wipefs, the pvcreate/vgcreate/lvcreate -m1 chain worked fine.
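The "is the array really gone" check against /proc/mdstat that drives the recovery above can be made explicit with a small helper. This is a hypothetical sketch, not part of the report; the optional second argument lets it read a saved copy of the file instead of the live one:

```shell
# Hypothetical helper: report whether a named md array still appears in a
# /proc/mdstat-style listing. Pass an alternate file path as the second
# argument to run it against a saved copy.
md_listed() {
  local name=$1 mdstat=${2:-/proc/mdstat}
  grep -q "^${name} " "$mdstat"
}

# Usage sketch (device names from the report):
#   md_listed md128 && mdadm -S /dev/md128   # stop it again if it reappeared
#   wipefs -a /dev/sda2
```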

Comment 4 Zdenek Kabelac 2016-05-04 15:23:27 UTC
After 'mdadm --stop', the user needs to somehow wait till the md array has really stopped.


Using 'udevadm settle' usually does the trick - but it's not the right answer, as mdadm should rather wait till the kernel has finished with the STOP_ARRAY ioctl.
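The advice above can be sketched as a retry loop. `wait_until` is a hypothetical helper (not an mdadm or LVM facility), and the device names below are the ones from the report; the destructive commands are shown only as comments:

```shell
# Hypothetical helper: retry a command until it succeeds or the attempt
# budget runs out; returns non-zero if it never succeeded.
wait_until() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0
    sleep 0.2
    i=$((i + 1))
  done
  return 1
}

# Sketch of the suggested sequence (destructive; device names from the report):
#   mdadm -S /dev/md128
#   udevadm settle
#   wait_until 25 sh -c '! grep -q "^md128 " /proc/mdstat'
#   pvcreate /dev/sda2 /dev/sdb6
```

As Comment 4 notes, this is a workaround rather than a fix: ideally mdadm itself would not return until the kernel has completed the STOP_ARRAY ioctl.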

Comment 5 Jonathan Earl Brassow 2019-08-19 22:15:01 UTC
This is not an LVM bug.  If it is still a problem, re-open against 'mdadm'.

