Bug 1940413
| Summary: | cannot create lvm volume in logical pool if there is existing signature on that device | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | yisun |
| Component: | libvirt | Assignee: | khanicov |
| Status: | CLOSED ERRATA | QA Contact: | Meina Li <meili> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 8.4 | CC: | jdenemar, kdreyer, lmen, meili, mprivozn, pkrempa, virt-maint, xuzhang, yisun |
| Target Milestone: | rc | Keywords: | Regression, Triaged, Upstream |
| Target Release: | 8.4 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-7.6.0-1.module+el8.5.0+12097+2c77910b | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-11-16 07:52:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | 7.6.0 |
| Embargoed: | | | |
Description
yisun
2021-03-18 11:47:49 UTC
Forgot to add the lvm version: lvm2-2.03.11-5.el8.x86_64

Not reproduced with:
lvm2-2.03.07-1.el8.x86_64
libvirt-6.0.0-25.5.module+el8.2.1+8680+ea98947b.x86_64

The problem is caused by a change in lvm, but since libvirt now shows a regression failure, this bug is set as a Regression for now.

This happens on my Fedora system as well:
libvirt-client-7.0.0-6.fc34
lvm2-2.03.11-1.fc34

I guess we need to add "--yes" to the command in virStorageBackendLogicalLVCreate() in https://gitlab.com/libvirt/libvirt/-/blob/master/src/storage/storage_backend_logical.c

Patch proposed on the list:
https://listman.redhat.com/archives/libvir-list/2021-July/msg00685.html

Merged upstream as:
d91a3e96c0 storage: create logical volume with --yes option
v7.5.0-242-gd91a3e96c0

Verified Version:
libvirt-7.6.0-1.scrmod+el8.5.0+12133+c45b5bc2.x86_64
qemu-kvm-6.0.0-27.module+el8.5.0+12121+c40c8708.x86_64

Verified Steps:

1. Prepare a local block device.

# lsscsi
[2:2:0:0]    disk    Lenovo    RAID 530-8i       5.03    /dev/sda
[15:0:0:0]   disk    LIO-ORG   device.emulated   4.0     /dev/sdb

2. Define, build, and start a logical pool on /dev/sdb.

# virsh pool-define-as --name test-pool --type logical --source-dev /dev/sdb
Pool test-pool defined

# virsh pool-build test-pool --overwrite
Pool test-pool built

# virsh pool-start test-pool
Pool test-pool started

3. Create an LVM volume from this pool.

# virsh vol-create-as --pool test-pool vol1 --capacity 10M --allocation 10M --format raw
Vol vol1 created

# ll /dev/test-pool/vol1
lrwxrwxrwx. 1 root root 7 Aug  6 02:44 /dev/test-pool/vol1 -> ../dm-3

4. Run mkfs.ext4 on the volume 'vol1' so that a filesystem signature is left on the device.

# mkfs.ext4 /dev/test-pool/vol1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 12288 1k blocks and 3072 inodes
Filesystem UUID: 341b3052-167f-4f85-927b-038b8fddd9e9
Superblock backups stored on blocks:
	8193

Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

5. Remove the volume 'vol1'.

# virsh vol-delete /dev/test-pool/vol1
Vol /dev/test-pool/vol1 deleted

6. Create a new volume again; with the fix it succeeds despite the stale ext4 signature left behind by the previous volume.

# virsh vol-create-as --pool test-pool vol_new --capacity 10M --allocation 10M --format raw
Vol vol_new created

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684
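
A note on the underlying lvcreate behaviour that the --yes fix works around, as an illustrative sketch rather than output captured for this bug: the VG name test-pool, the volume name vol_new, the exact warning wording, and the signature offset are assumptions, and messages differ between lvm2 versions.

# lvcreate --size 10M --name vol_new test-pool
WARNING: ext4 signature detected on /dev/test-pool/vol_new at offset 1080. Wipe it? [y/n]:

When libvirt invokes lvcreate there is no terminal on which to answer the prompt, so the stale signature is not wiped and the volume creation fails. With --yes, the option added by commit d91a3e96c0, lvcreate wipes the signature automatically instead of prompting:

# lvcreate --yes --size 10M --name vol_new test-pool
  Wiping ext4 signature on /dev/test-pool/vol_new.
  Logical volume "vol_new" created.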