Bug 964314 - Today's lvm build (.34) appears to be considerably slower than the build from 2 days ago (.32)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-05-17 20:43 UTC by Corey Marthaler
Modified: 2023-03-08 07:25 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-05-17 21:25:07 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2013-05-17 20:43:45 UTC
Description of problem:
I got annoyed today waiting for commands to finish, so I timed a few of them, then went back to the build from two days ago and ran the same commands there.
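
For anyone wanting to repeat this, here is a rough sketch of the kind of timing loop used below (hypothetical script, not part of the original report; it assumes the same /dev/vd[a-h]1 scratch devices and a throwaway VG named TEST, and tears both down afterwards):

#!/bin/bash
# Crude LVM command timing harness (illustrative only).
devs="/dev/vd[abcdefgh]1"
for cmd in "pvscan" \
           "pvcreate $devs" \
           "vgcreate TEST $devs" \
           "lvcreate --thinpool pool -L 100M TEST" \
           "lvs -a -o +devices"; do
    echo "== $cmd =="
    time $cmd        # real/user/sys from the bash builtin
done
# Tear down so the next run starts clean.
lvremove -f TEST/pool
vgremove -f TEST
pvremove $devs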


RPMS (the "slow" build):
3.8.0-0.40.el7.x86_64
lvm2-2.02.99-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
lvm2-libs-2.02.99-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
lvm2-cluster-2.02.99-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
device-mapper-1.02.78-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
device-mapper-libs-1.02.78-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
device-mapper-event-1.02.78-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
device-mapper-event-libs-1.02.78-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
cmirror-2.02.99-0.34.el7    BUILT: Thu May 16 19:28:08 CDT 2013
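
(The BUILT column in these listings looks like rpm query-format output; assuming stock rpm and the package names shown, something like the following would reproduce it:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}\tBUILT: %{BUILDTIME:date}\n' \
    lvm2 lvm2-libs lvm2-cluster device-mapper device-mapper-libs \
    device-mapper-event device-mapper-event-libs cmirror )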


[root@qalvm-01 ~]# time pvscan
  PV /dev/sda2   VG rhel_qalvm-01   lvm2 [24.51 GiB / 0    free]
  Total: 1 [24.51 GiB] / in use: 1 [24.51 GiB] / in no VG: 0 [0   ]
real    0m2.225s
user    0m0.005s
sys     0m0.011s

[root@qalvm-01 ~]# time pvcreate /dev/vd[abcdefgh]1
  Physical volume "/dev/vda1" successfully created
  Physical volume "/dev/vdb1" successfully created
  Physical volume "/dev/vdc1" successfully created
  Physical volume "/dev/vdd1" successfully created
  Physical volume "/dev/vde1" successfully created
  Physical volume "/dev/vdf1" successfully created
  Physical volume "/dev/vdg1" successfully created
  Physical volume "/dev/vdh1" successfully created
real    0m11.150s
user    0m0.003s
sys     0m0.015s

[root@qalvm-01 ~]# time vgcreate TEST /dev/vd[abcdefgh]1
  Volume group "TEST" successfully created
real    0m9.277s
user    0m0.054s
sys     0m0.034s

[root@qalvm-01 ~]# time lvcreate --thinpool pool -L 100M TEST
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume TEST-pool-tpool (253:4)
  Aborting. Failed to activate thin pool.
real    0m12.321s
user    0m0.079s
sys     0m0.112s

[root@qalvm-01 ~]# time lvs -a -o +devices
  LV   VG            Attr      LSize  Devices        
  root rhel_qalvm-01 -wi-ao--- 20.57g /dev/sda2(1008)
  swap rhel_qalvm-01 -wi-ao---  3.94g /dev/sda2(0)   
real    0m1.084s
user    0m0.012s
sys     0m0.011s

[root@qalvm-01 ~]# time lvcreate --thinpool pool -L 100M TEST
  device-mapper: create ioctl on TEST-pool_tmeta failed: Device or resource busy
  Aborting. Failed to activate thin pool.
real    0m9.178s
user    0m0.068s
sys     0m0.085s

[root@qalvm-01 ~]# dmsetup ls
rhel_qalvm--01-swap     (253:0)
rhel_qalvm--01-root     (253:1)
TEST-pool_tdata (253:3)
TEST-pool_tmeta (253:2)

[root@qalvm-01 ~]# dmsetup remove TEST-pool_tdata
[root@qalvm-01 ~]# dmsetup remove TEST-pool_tmeta

[root@qalvm-01 ~]# time lvcreate --thinpool pool -L 100M TEST
  Logical volume "pool" created
real    0m5.403s
user    0m0.076s
sys     0m0.080s



RPMS (the "faster" build):
3.8.0-0.40.el7.x86_64
lvm2-2.02.99-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013
lvm2-libs-2.02.99-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013
lvm2-cluster-2.02.99-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013
device-mapper-1.02.78-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013
device-mapper-libs-1.02.78-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013
device-mapper-event-1.02.78-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013
device-mapper-event-libs-1.02.78-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013
cmirror-2.02.99-0.32.el7    BUILT: Wed May 15 08:28:08 CDT 2013


[root@qalvm-01 ~]# time pvscan
  PV /dev/sda2   VG rhel_qalvm-01   lvm2 [24.51 GiB / 0    free]
  Total: 1 [24.51 GiB] / in use: 1 [24.51 GiB] / in no VG: 0 [0   ]
real    0m1.927s
user    0m0.008s
sys     0m0.006s

[root@qalvm-01 ~]# time pvcreate /dev/vd[abcdefgh]1
  Physical volume "/dev/vda1" successfully created
  Physical volume "/dev/vdb1" successfully created
  Physical volume "/dev/vdc1" successfully created
  Physical volume "/dev/vdd1" successfully created
  Physical volume "/dev/vde1" successfully created
  Physical volume "/dev/vdf1" successfully created
  Physical volume "/dev/vdg1" successfully created
  Physical volume "/dev/vdh1" successfully created
real    0m5.944s
user    0m0.006s
sys     0m0.014s

[root@qalvm-01 ~]# time vgcreate TEST /dev/vd[abcdefgh]1
  Volume group "TEST" successfully created
real    0m7.093s
user    0m0.058s
sys     0m0.033s

[root@qalvm-01 ~]# time lvcreate --thinpool pool -L 100M TEST
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume TEST-pool-tpool (253:4)
  Aborting. Failed to activate thin pool.
real    0m6.572s
user    0m0.071s
sys     0m0.115s

[root@qalvm-01 ~]# time lvs -a -o +devices
  LV   VG            Attr      LSize  Devices        
  root rhel_qalvm-01 -wi-ao--- 20.57g /dev/sda2(1008)
  swap rhel_qalvm-01 -wi-ao---  3.94g /dev/sda2(0)   
real    0m5.557s
user    0m0.010s
sys     0m0.016s

[root@qalvm-01 ~]# time lvcreate --thinpool pool -L 100M TEST
  device-mapper: create ioctl on TEST-pool_tmeta failed: Device or resource busy
  Aborting. Failed to activate thin pool.
real    0m6.156s
user    0m0.070s
sys     0m0.087s

[root@qalvm-01 ~]# dmsetup ls
rhel_qalvm--01-swap     (253:0)
rhel_qalvm--01-root     (253:1)
TEST-pool_tdata (253:3)
TEST-pool_tmeta (253:2)

[root@qalvm-01 ~]# dmsetup remove TEST-pool_tdata
[root@qalvm-01 ~]# dmsetup remove TEST-pool_tmeta

[root@qalvm-01 ~]# time lvcreate --thinpool pool -L 100M TEST
  device-mapper: resume ioctl on  failed: Invalid argument
  Unable to resume TEST-pool-tpool (253:4)
  Aborting. Failed to activate thin pool.
real    0m11.082s
user    0m0.072s
sys     0m0.091s

Comment 1 Corey Marthaler 2013-05-17 21:25:07 UTC
Looks like there were many leftover dm devices on other nodes that had been using the same storage.
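
For anyone hitting the same failure, a rough way to check every node sharing the storage for stale thin-pool dm entries and clear them (node names below are placeholders; dmsetup remove only succeeds once a device is no longer in use):

for node in qalvm-01 qalvm-02 qalvm-03; do    # placeholder node names
    echo "== $node =="
    # List any leftover TEST-* mappings and try to remove each one.
    ssh $node 'dmsetup ls | awk "/^TEST-/ {print \$1}" | xargs -r -n1 dmsetup remove'
done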

