Description: cannot create an LVM volume in a logical pool if there is an existing signature on that device

Version: libvirt-7.0.0-9.module+el8.4.0+10326+5e50a3b6.x86_64

How reproducible: 100%

Steps:
1. Prepare a local block device (/dev/sdb in my env).
[root@dell-per740-01 ~]# lsscsi
[15:0:0:0]   disk    LIO-ORG  device.logical-  4.0   /dev/sdb

2. Prepare a logical pool from /dev/sdb.
[root@dell-per740-01 ~]# virsh pool-define-as --name test-pool --type logical --source-dev /dev/sdb
Pool test-pool defined

[root@dell-per740-01 ~]# virsh pool-build test-pool --overwrite
Pool test-pool built

[root@dell-per740-01 ~]# virsh pool-start test-pool
Pool test-pool started

3. Create an LVM volume from this pool, named 'vol1'.
[root@dell-per740-01 ~]# virsh vol-create-as --pool test-pool vol1 --capacity 10M --allocation 10M --format raw
Vol vol1 created

[root@dell-per740-01 ~]# ll /dev/test-pool/vol1
lrwxrwxrwx. 1 root root 7 Mar 18 04:17 /dev/test-pool/vol1 -> ../dm-3

4. Run mkfs on the volume 'vol1'.
[root@dell-per740-01 ~]# mkfs.ext4 /dev/test-pool/vol1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 12288 1k blocks and 3072 inodes
Filesystem UUID: ae37237c-fbeb-450b-b2c4-4872434f602a
Superblock backups stored on blocks:
	8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

5. Remove the volume with 'virsh vol-delete' (or 'lvremove').
[root@dell-per740-01 ~]# virsh vol-delete /dev/test-pool/vol1
Vol /dev/test-pool/vol1 deleted

6. Try to create a new volume; the action fails because libvirt does not handle lvcreate's interactive prompt.
[root@dell-per740-01 ~]# virsh vol-create-as --pool test-pool vol_new --capacity 10M --allocation 10M --format raw
error: Failed to create vol vol_new
error: internal error: Child process (/usr/sbin/lvcreate --name vol_new -L 10240K test-pool) unexpected exit status 5: WARNING: ext4 signature detected on /dev/test-pool/vol_new at offset 1080. Wipe it? [y/n]: [n]
  Aborted wiping of ext4.
  1 existing signature left on the device.
  Failed to wipe signatures on logical volume test-pool/vol_new.
  Aborting. Failed to wipe start of new LV.

Expected result:
Step 6 should succeed.

Additional info:
'lvcreate' now prompts interactively to confirm wiping a device's signature if it finds one. Adding '--yes' avoids this problem; please refer to:
https://bugzilla.redhat.com/show_bug.cgi?id=1930996
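The leftover-signature state that trips up lvcreate can be reproduced without LVM or root, using a plain file and wipefs. This is only a sketch of the mechanism (the file name disk.img is arbitrary):

```shell
# Create a small image file and put an ext4 signature on it
# (-F lets mke2fs write to a regular file without prompting).
truncate -s 16M disk.img
mkfs.ext4 -F -q disk.img

# wipefs with no options only reports signatures; ext4's magic
# lives at offset 0x438 (decimal 1080, as in the lvcreate warning).
wipefs disk.img

# wipefs -a erases all detected signatures, which is effectively
# what answering 'y' to the lvcreate prompt (or passing --yes) does.
wipefs -a disk.img
wipefs disk.img        # no output: the signature is gone
rm -f disk.img
```

On the affected libvirt build, running 'wipefs -a' on the leftover LV extents (or creating the volume manually with 'lvcreate --yes') works around the failure.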
Forgot to add the lvm version: lvm2-2.03.11-5.el8.x86_64
Not reproduced with: lvm2-2.03.07-1.el8.x86_64 and libvirt-6.0.0-25.5.module+el8.2.1+8680+ea98947b.x86_64. The problem is triggered by a behavior change in lvm, but since libvirt now fails where it used to work, setting this as REGRESSION for now.
This happens on my Fedora system as well. libvirt-client-7.0.0-6.fc34 lvm2-2.03.11-1.fc34
I guess we need to add "--yes" to the command in virStorageBackendLogicalLVCreate() in https://gitlab.com/libvirt/libvirt/-/blob/master/src/storage/storage_backend_logical.c
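For reference, the change presumably boils down to one extra argument where the lvcreate command line is assembled in virStorageBackendLogicalLVCreate(). The fragment below is only an illustrative sketch under that assumption, not the literal upstream diff:

```c
/* Illustrative sketch, not the actual upstream patch: pass --yes so
 * lvcreate wipes any pre-existing signature instead of prompting and
 * then aborting when no answer arrives on stdin. */
cmd = virCommandNewArgList(LVCREATE,
                           "--yes",          /* never ask interactively */
                           "--name", vol->name,
                           NULL);
```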
Patch proposed on the list: https://listman.redhat.com/archives/libvir-list/2021-July/msg00685.html
Merged upstream as: d91a3e96c0 storage: create logical volume with --yes option v7.5.0-242-gd91a3e96c0
Verified version:
libvirt-7.6.0-1.scrmod+el8.5.0+12133+c45b5bc2.x86_64
qemu-kvm-6.0.0-27.module+el8.5.0+12121+c40c8708.x86_64

Verified steps:
1. Prepare a local block device.
# lsscsi
[2:2:0:0]    disk    Lenovo   RAID 530-8i      5.03  /dev/sda
[15:0:0:0]   disk    LIO-ORG  device.emulated  4.0   /dev/sdb

2. Define a logical pool from /dev/sdb.
# virsh pool-define-as --name test-pool --type logical --source-dev /dev/sdb
Pool test-pool defined

# virsh pool-build test-pool --overwrite
Pool test-pool built

# virsh pool-start test-pool
Pool test-pool started

3. Create an LVM volume from this pool.
# virsh vol-create-as --pool test-pool vol1 --capacity 10M --allocation 10M --format raw
Vol vol1 created

# ll /dev/test-pool/vol1
lrwxrwxrwx. 1 root root 7 Aug  6 02:44 /dev/test-pool/vol1 -> ../dm-3

4. Run mkfs on the volume 'vol1'.
# mkfs.ext4 /dev/test-pool/vol1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 12288 1k blocks and 3072 inodes
Filesystem UUID: 341b3052-167f-4f85-927b-038b8fddd9e9
Superblock backups stored on blocks:
	8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

5. Remove the volume 'vol1'.
# virsh vol-delete /dev/test-pool/vol1
Vol /dev/test-pool/vol1 deleted

6. Create a new volume again.
# virsh vol-create-as --pool test-pool vol_new --capacity 10M --allocation 10M --format raw
Vol vol_new created
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:4684