Description of problem:
The command "lvcreate -n testlv -m 1 -l 120 test" fails:

  device-mapper: reload ioctl on (254:5) failed: Device or resource busy
  Failed to activate test/testlv_rmeta_0 for clearing

After a reboot the same command works.

Version-Release number of selected component (if applicable):
linux 4.4.6

How reproducible:
It happens only if an md (raid1) device was stopped earlier in the same boot. If no md device was destroyed since boot, the command succeeds.

Steps to Reproduce:
---------------------
[root@atom:/mnt]# umount raidtest/
[root@atom:/mnt]# mdadm -S /dev/md128
mdadm: stopped /dev/md128
[root@atom:/mnt]# pvcreate /dev/sda2
^C
[root@atom:/mnt]# pvcreate /dev/sda2
allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
Falling back to native LVM signature detection.
WARNING: software RAID md superblock detected on /dev/sda2. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sda2.
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  Physical volume "/dev/sda2" successfully created
[root@atom:/mnt]# pvcreate /dev/sdb6
allocation/use_blkid_wiping=1 configuration setting is set while LVM is not compiled with blkid wiping support.
Falling back to native LVM signature detection.
WARNING: software RAID md superblock detected on /dev/sdb6. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdb6.
  Incorrect metadata area header checksum on /dev/sdb6 at offset 4096
  Physical volume "/dev/sdb6" successfully created
[root@atom:/mnt]# vgcreate test /dev/sda2 /dev/sdb6
  Volume group "test" successfully created
[root@atom:/mnt]# lvcreate -n testlv -m 1 -l 120 test
  device-mapper: reload ioctl on (254:5) failed: Device or resource busy
  Failed to activate test/testlv_rmeta_0 for clearing
--------------------
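The steps above can be condensed into a single reproducer. This is a sketch only: the device names (/dev/md128, /dev/sda2, /dev/sdb6) and mount point are specific to the reporter's machine, it must run as root, and it destroys data on the listed devices.

```shell
# Condensed reproducer for the transcript above (illustrative only; device
# names are assumptions taken from this report, run as root, destructive).
reproduce_lvcreate_failure() {
  umount /mnt/raidtest
  mdadm -S /dev/md128                   # stop the md raid1 array
  pvcreate /dev/sda2                    # prompts to wipe the md superblock
  pvcreate /dev/sdb6
  vgcreate test /dev/sda2 /dev/sdb6
  lvcreate -n testlv -m 1 -l 120 test   # fails: reload ioctl ... busy
}
```

Defining the function does nothing; calling it reproduces the failure only on a freshly stopped md array, matching the "How reproducible" note above.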
dmesg output for the commands above:

[17071.188917] md128: detected capacity change from 536281088 to 0
[17071.188937] md: md128 stopped.
[17071.188951] md: unbind<sda2>
[17071.198130] md: export_rdev(sda2)
[17071.198224] md: unbind<sdb6>
[17071.206077] md: export_rdev(sdb6)
[17083.242651] md: bind<sda2>
[17112.599162] device-mapper: table: 254:5: linear: Device lookup failed
[17112.599291] device-mapper: ioctl: error adding target to table

[root@atom:/mnt]# dmsetup info
Name:              teraraid-teraraid_main
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 4
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAlNtPhiAhrg0utXGkhfEcFyasgMEpLt0w

Name:              teraraid-teraraid_main_rimage_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 3
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAdXT1HBv1G65AxEX1MOjruDVJ2Hjg6TpX

Name:              teraraid-teraraid_main_rimage_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 1
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAM3cCcG0m7FaWS1NzmNS4V0qNtnYL9g4u

Name:              teraraid-teraraid_main_rmeta_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 2
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoA6r7udR5eoR2Du5IHQEfDxzP7ZPfF1F2L

Name:              teraraid-teraraid_main_rmeta_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      254, 0
Number of targets: 1
UUID: LVM-GsabIKvLx2SjOUMXqnQSsJd5XoYIDjoAKxtJ0iyEslX31TMzud42XIOzo4gXsZ9f
Created attachment 1153031
lvcreate log with -vvvvv

I attach the log of "lvcreate -vvvvvv -n testlv -m 1 -l 120 test".
kabi helped me find this out.

On pvcreate:

  Incorrect metadata area header checksum on /dev/sda2 at offset 4096

It seems that after the md128 array was stopped, something partially re-assembled it. I tried to wipe the PVs:

[root@atom:~]# wipefs -a /dev/sdb6
/dev/sdb6: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
[root@atom:~]# wipefs -a /dev/sda2
wipefs: error: /dev/sda2: probing initialization failed: Device or resource busy

Then, /proc/mdstat showed:

md128 : inactive sda2[0](S)
      523760 blocks super 1.2

Stopping it again (mdadm -S /dev/md128) made wipefs work. After wipefs, the pvcreate/vgcreate/lvcreate -m1 chain worked fine.
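The recovery observed above can be sketched as a small helper: if the array re-appeared (even as inactive) after the first stop, stop it once more, then wipe the member devices. This is a hypothetical sketch, not part of the report; the function name and arguments are illustrative, and it must run as root.

```shell
# Hypothetical recovery helper based on the observations above (run as root).
# Usage: force_wipe_md_members /dev/md128 /dev/sda2 /dev/sdb6
force_wipe_md_members() {
  md=$1; shift
  # The array can come back as inactive after 'mdadm -S'; stop it again.
  if grep -q "^${md##*/} :" /proc/mdstat 2>/dev/null; then
    mdadm -S "$md"
  fi
  for dev in "$@"; do
    wipefs -a "$dev"   # succeeds once the device is no longer held busy
  done
}
```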
After 'mdadm --stop' the user needs to wait somehow until the md array has really stopped. Running 'udevadm settle' usually does the trick, but it is not the right answer: mdadm itself should wait until the kernel has finished the STOP_ARRAY ioctl.
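The suggested interim workaround might look like the following sketch. The device name is taken from this report; the function wrapper is illustrative only.

```shell
# Sketch of the 'udevadm settle' workaround: after stopping the array, wait
# for udev to drain the events generated by STOP_ARRAY before reusing the
# member devices for pvcreate/lvcreate. Run as root; name is hypothetical.
stop_md_and_settle() {
  mdadm --stop "$1" || return 1   # e.g. /dev/md128
  udevadm settle                  # block until pending udev events are handled
}
```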
This is not an LVM bug. If it is still a problem, re-open against 'mdadm'.