Bug 1940413 - cannot create lvm volume in logical pool if there is existing signature on that device
Summary: cannot create lvm volume in logical pool if there is existing signature on that device
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.4
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: 8.4
Assignee: khanicov
QA Contact: Meina Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-18 11:47 UTC by yisun
Modified: 2021-11-16 08:21 UTC (History)
9 users

Fixed In Version: libvirt-7.6.0-1.module+el8.5.0+12097+2c77910b
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-16 07:52:17 UTC
Type: Bug
Target Upstream Version: 7.6.0
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:4684 0 None None None 2021-11-16 07:52:47 UTC

Description yisun 2021-03-18 11:47:49 UTC
description:
cannot create lvm volume in logical pool if there is existing signature on that device

versions:
libvirt-7.0.0-9.module+el8.4.0+10326+5e50a3b6.x86_64

How reproducible:
100%

Steps:
1. prepare a local block device (/dev/sdb in my env)
[root@dell-per740-01 ~]# lsscsi
[15:0:0:0]   disk    LIO-ORG  device.logical-  4.0   /dev/sdb

2. prepare a logical pool from /dev/sdb
[root@dell-per740-01 ~]# virsh pool-define-as --name test-pool --type logical  --source-dev /dev/sdb
Pool test-pool defined

[root@dell-per740-01 ~]# virsh pool-build test-pool --overwrite
Pool test-pool built

[root@dell-per740-01 ~]# virsh pool-start test-pool
Pool test-pool started

3. create an LVM volume named 'vol1' from this pool
[root@dell-per740-01 ~]# virsh vol-create-as --pool test-pool vol1 --capacity 10M --allocation 10M --format raw
Vol vol1 created

[root@dell-per740-01 ~]# ll /dev/test-pool/vol1
lrwxrwxrwx. 1 root root 7 Mar 18 04:17 /dev/test-pool/vol1 -> ../dm-3

4. mkfs the LVM volume 'vol1'
[root@dell-per740-01 ~]# mkfs.ext4 /dev/test-pool/vol1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 12288 1k blocks and 3072 inodes
Filesystem UUID: ae37237c-fbeb-450b-b2c4-4872434f602a
Superblock backups stored on blocks:
        8193

Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

5. remove the LVM volume with 'virsh vol-delete' or 'lvremove'
[root@dell-per740-01 ~]# virsh vol-delete /dev/test-pool/vol1
Vol /dev/test-pool/vol1 deleted

6. try to create a new volume; the action fails because libvirt does not handle lvcreate's interactive question.
[root@dell-per740-01 ~]# virsh vol-create-as --pool test-pool vol_new --capacity 10M --allocation 10M --format raw
error: Failed to create vol vol_new
error: internal error: Child process (/usr/sbin/lvcreate --name vol_new -L 10240K test-pool) unexpected exit status 5: WARNING: ext4 signature detected on /dev/test-pool/vol_new at offset 1080. Wipe it? [y/n]: [n]
  Aborted wiping of ext4.
  1 existing signature left on the device.
  Failed to wipe signatures on logical volume test-pool/vol_new.
  Aborting. Failed to wipe start of new LV.


Expected result:
step 6 should be successful

Additional info:
The 'lvcreate' command now asks an interactive question to confirm wiping the device's signature if it has one. Adding '--yes' avoids this problem.
Please refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1930996
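The warning in step 6 reports an ext4 signature at byte offset 1080. That offset is not arbitrary: the ext4 superblock starts at byte 1024, and its magic number 0xEF53 is stored little-endian 56 bytes into the superblock, i.e. at 1024 + 56 = 1080. A minimal sketch of what a signature scanner sees, using a temporary plain file in place of a real LV (the file path is illustrative, not part of the reproduction):

```shell
# Sketch: where the leftover ext4 signature lives on the device.
# A temp file stands in for the LV; the offsets are the same.
img=$(mktemp)
# Plant the ext4 magic 0xEF53 (little-endian: bytes 53 ef) at offset
# 1024 + 56 = 1080, where mkfs.ext4 writes it in the superblock.
printf '\x53\xef' | dd of="$img" bs=1 seek=1080 conv=notrunc status=none
# Dump the two bytes at offset 1080: this is what lvcreate's signature
# check (and tools like wipefs) look for.
od -An -tx1 -j1080 -N2 "$img"
rm -f "$img"
```

Because 'lvremove' does not zero the freed extents, a new LV allocated over the same extents can still carry these bytes, which is why lvcreate prompts in step 6.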

Comment 1 yisun 2021-03-18 11:58:31 UTC
Forgot to include the lvm version:
lvm2-2.03.11-5.el8.x86_64

Comment 2 yisun 2021-03-18 12:03:55 UTC
not reproduced with:
lvm2-2.03.07-1.el8.x86_64
libvirt-6.0.0-25.5.module+el8.2.1+8680+ea98947b.x86_64

The problem is caused by a change in lvm, but since libvirt now shows a regression failure, setting this as REGRESSION for now.

Comment 3 Ken Dreyer (Red Hat) 2021-07-15 16:14:32 UTC
This happens on my Fedora system as well.

libvirt-client-7.0.0-6.fc34
lvm2-2.03.11-1.fc34

Comment 4 Ken Dreyer (Red Hat) 2021-07-15 16:33:01 UTC
I guess we need to add "--yes" to the command in virStorageBackendLogicalLVCreate() in https://gitlab.com/libvirt/libvirt/-/blob/master/src/storage/storage_backend_logical.c
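That is what the eventual fix does. As a sketch in shell (not the C source of virStorageBackendLogicalLVCreate()), the command line libvirt builds changes from the failing invocation quoted in the description to the same invocation with --yes, which makes lvcreate wipe leftover signatures instead of prompting:

```shell
# Sketch of the command-line change (not libvirt's C code): the fix
# prepends --yes so lvcreate answers its own wipe prompt.
vol=vol_new; size_k=10240; vg=test-pool
# Before the fix (prompts, then aborts under libvirt):
before="/usr/sbin/lvcreate --name $vol -L ${size_k}K $vg"
# After the fix (wipes any detected signature non-interactively):
after="/usr/sbin/lvcreate --yes --name $vol -L ${size_k}K $vg"
echo "$after"
```

Until the fix lands, running the "after" command by hand (or wiping the signature with wipefs before reuse) works around the failure.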

Comment 5 khanicov 2021-07-22 15:25:35 UTC
Patch proposed on the list:
https://listman.redhat.com/archives/libvir-list/2021-July/msg00685.html

Comment 6 khanicov 2021-07-23 10:12:09 UTC
Merged upstream as:

d91a3e96c0 storage: create logical volume with --yes option

v7.5.0-242-gd91a3e96c0

Comment 9 Meina Li 2021-08-06 06:48:59 UTC
Verified Version:
libvirt-7.6.0-1.scrmod+el8.5.0+12133+c45b5bc2.x86_64
qemu-kvm-6.0.0-27.module+el8.5.0+12121+c40c8708.x86_64

Verified Steps:
1. Prepare a local block device.
# lsscsi
[2:2:0:0]    disk    Lenovo   RAID 530-8i      5.03  /dev/sda 
[15:0:0:0]   disk    LIO-ORG  device.emulated  4.0   /dev/sdb 
2. Define a logical pool from /dev/sdb.
# virsh pool-define-as --name test-pool --type logical  --source-dev /dev/sdb
Pool test-pool defined
virsh pool-build test-pool --overwrite
Pool test-pool built
# virsh pool-start test-pool
Pool test-pool started
3. Create an LVM volume from this pool.
# virsh vol-create-as --pool test-pool vol1 --capacity 10M --allocation 10M --format raw
Vol vol1 created
# ll /dev/test-pool/vol1
lrwxrwxrwx. 1 root root 7 Aug  6 02:44 /dev/test-pool/vol1 -> ../dm-3
4. Mkfs the LVM volume 'vol1'.
# mkfs.ext4 /dev/test-pool/vol1 
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 12288 1k blocks and 3072 inodes
Filesystem UUID: 341b3052-167f-4f85-927b-038b8fddd9e9
Superblock backups stored on blocks: 
	8193
Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
5. Remove the LVM volume 'vol1'.
# virsh vol-delete /dev/test-pool/vol1
Vol /dev/test-pool/vol1 deleted
6. Create a new volume again.
# virsh vol-create-as --pool test-pool vol_new --capacity 10M --allocation 10M --format raw
Vol vol_new created

Comment 11 errata-xmlrpc 2021-11-16 07:52:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684

