Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
thin pool stress test is producing dmeventd errors
============================================================
Iteration 1 of 1 started at Wed Aug 7 18:06:08 CEST 2019
============================================================
SCENARIO - [many_thin_snaps]
Create 1200 virt snapshots of an origin volume and create PVs on each
Enabling LV scanning
Recreating VG and PVs to increase metadata size
Making pool volume
Converting *Raid* volumes to thin pool and thin pool metadata devices
lvcreate --type raid10 -m 1 -i 2 --zero n -L 100M -n meta snapper_thinp
WARNING: Logical volume snapper_thinp/meta not zeroed.
lvcreate --type raid10 -m 1 -i 2 --zero n -L 2G -n POOL snapper_thinp
WARNING: Logical volume snapper_thinp/POOL not zeroed.
Waiting until all mirror|raid volumes become fully syncd...
2/2 mirror(s) are fully synced: ( 100.00% 100.00% )
Sleeping 15 sec
Sleeping 15 sec
lvconvert --zero n --thinpool snapper_thinp/POOL --poolmetadata meta --yes
WARNING: Converting snapper_thinp/POOL and snapper_thinp/meta to thin pool's data and metadata volumes with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
device-mapper: remove ioctl on (253:10) failed: Device or resource busy
Sanity checking pool device (POOL) metadata
thin_check /dev/mapper/snapper_thinp-meta_swap.632
examining superblock
examining devices tree
examining mapping tree
checking space map counts
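The pool-conversion steps above can be sketched as a small script. The VG name, sizes, and option flags are taken from the log itself; the `run` wrapper and the `DRY_RUN` default are additions so the sketch can be previewed without root access or real devices (set DRY_RUN=0 to actually execute):

```shell
#!/bin/bash
# Sketch of the raid10 -> thin-pool conversion shown in the log above.
# VG name, sizes, and flags come from the log; the DRY_RUN wrapper is
# an addition so the commands print instead of running by default.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

VG=snapper_thinp
# raid10 LVs for pool metadata and data, left unzeroed as in the test
run lvcreate --type raid10 -m 1 -i 2 --zero n -L 100M -n meta "$VG"
run lvcreate --type raid10 -m 1 -i 2 --zero n -L 2G -n POOL "$VG"
# the test waits for both raid LVs to reach 100% sync before converting
run lvs -o name,sync_percent "$VG"
# stack the two LVs into a thin pool; this wipes the metadata LV
run lvconvert --zero n --thinpool "$VG/POOL" --poolmetadata meta --yes
```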
Making origin volume
lvcreate --virtualsize 2G -T snapper_thinp/POOL -n origin
lvcreate -V 2G -T snapper_thinp/POOL -n other1
WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB).
lvcreate -V 2G -T snapper_thinp/POOL -n other2
WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB).
lvcreate -V 2G -T snapper_thinp/POOL -n other3
WARNING: Sum of all thin volume sizes (8.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB).
lvcreate --virtualsize 2G -T snapper_thinp/POOL -n other4
WARNING: Sum of all thin volume sizes (10.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB).
lvcreate --virtualsize 2G -T snapper_thinp/POOL -n other5
WARNING: Sum of all thin volume sizes (12.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (2.00 GiB).
Making 1200 snapshots of origin volume
1 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_1
2 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_2
3 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_3
.
.
.
.
.
.
1005 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_1005
1006 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_1006
bcache no new blocks for fd 183 index 16383
1007 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_1007
1008 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_1008
bcache no new blocks for fd 686 index 16383
1009 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_1009
1010 lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_1010
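The abbreviated numbered loop above corresponds roughly to the following; the snapshot count and naming come from the test description, and the `DRY_RUN` wrapper is an addition so the loop can be previewed without an actual snapper_thinp VG:

```shell
#!/bin/bash
# Sketch of the snapshot-creation loop from the log. DRY_RUN=1
# (the default here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

for i in $(seq 1 1200); do
    run lvcreate -y -k n -s /dev/snapper_thinp/origin -n "many_$i"
done
```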
Although the snap create passed, errors were found in its output
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
WARNING: Sum of all thin volume sizes (1.98 TiB) exceeds the size of thin pool snapper_thinp/POOL and the size of whole volume group (<206.53 GiB).
/run/dmeventd-client: open failed: Too many open files
WARNING: Failed to monitor snapper_thinp/POOL_tdata.
/run/dmeventd-client: open failed: Too many open files
WARNING: Failed to monitor snapper_thinp/POOL_tmeta.
/run/dmeventd-client: open failed: Too many open files
WARNING: Failed to monitor snapper_thinp/POOL.
Logical volume "many_1010" created.
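The autoextend warning above refers to the activation section of lvm.conf. A hedged example of the settings it points at follows; the threshold values are illustrative, not taken from this test run:

```
# /etc/lvm/lvm.conf (excerpt) -- values here are illustrative
activation {
    # Autoextend a thin pool once it is 70% full...
    thin_pool_autoextend_threshold = 70
    # ...growing it by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
}
```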
kernel-4.18.0-128.el8.x86_64
lvm2-2.03.05-2.el8.x86_64
device-mapper-1.02.163-2.el8.x86_64
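The "/run/dmeventd-client: open failed: Too many open files" messages suggest a process hit its RLIMIT_NOFILE while monitoring roughly a thousand thin devices. A diagnostic sketch for comparing a daemon's open-fd count against its soft limit on a live Linux system (it falls back to the current shell when dmeventd is not running, so the script is runnable anywhere):

```shell
#!/bin/bash
# Compare a process's open file descriptors against its soft
# "Max open files" limit. Uses dmeventd if running, else this shell.
pid=$(pidof dmeventd 2>/dev/null || echo $$)
limit=$(awk '/Max open files/ {print $4}' /proc/"$pid"/limits)
used=$(ls /proc/"$pid"/fd | wc -l)
echo "pid=$pid using $used of $limit file descriptors"
```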