Bug 1309728 - ssm list is slow
Summary: ssm list is slow
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: system-storage-manager
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Lukáš Czerner
QA Contact: Filesystem QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-18 14:56 UTC by Marko Myllynen
Modified: 2016-02-24 12:20 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-24 12:20:57 UTC
Target Upstream Version:
Embargoed:



Description Marko Myllynen 2016-02-18 14:56:05 UTC
Description of problem:
# time ssm list > /dev/null 2>&1

real    0m12.105s
user    0m0.198s
sys     0m0.431s
# ssm list | wc -l
66
# 

Version-Release number of selected component (if applicable):
system-storage-manager-0.4-5
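
(For reference, the installed package version can be confirmed with a query like the one below; the exact release suffix may differ.)

# rpm -q system-storage-manager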

Comment 2 Lukáš Czerner 2016-02-22 14:30:42 UTC
Hi,

Thanks for the report. Could you please provide the output and the duration of the following script?

#!/bin/bash
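# Drop the page cache, dentries, and inodes so the timings below reflect a
# cold cache; writing to /proc/sys/vm/drop_caches requires root.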
echo 3 > /proc/sys/vm/drop_caches
echo "=== partitions"
cat /proc/partitions
echo "=== pvs"
time pvs
echo "=== vgs"
time vgs
echo "=== lvs"
time lvs
echo "=== dmsetup table"
time dmsetup table
echo "=== lsblk"
time lsblk
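
(A usage sketch, assuming the script is saved as, say, timings.sh: run it as root and capture everything in one file with "bash timings.sh 2>&1 | tee timings.txt" — the file names here are only examples.)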

Also, could you please provide separate timings for each of the following (a sketch of one way to capture them follows the list):

ssm list dev
ssm list pool
ssm list vol
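
One way to collect all three in a single run (a minimal sketch; it assumes nothing beyond the ssm subcommands listed above):

for sub in dev pool vol; do
    echo "=== ssm list $sub"
    time ssm list "$sub"
done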

And lastly please provide the output of:

python -m cProfile -s cumtime /bin/ssm list
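
If the profile output is long, redirecting it to a file makes it easier to attach, for example (the file name is just a suggestion):

python -m cProfile -s cumtime /bin/ssm list > ssm-profile.txt 2>&1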

Thanks!
-Lukas

Comment 3 Marko Myllynen 2016-02-24 12:20:30 UTC
I retested in the same lab after some RHEL upgrades and rebooting the nodes; ssm list now feels instant and takes less than one second to complete.

I think we can close this; it was probably a non-ssm hiccup not worth investigating further here. If I see this again, I can of course provide the information requested above.

Thanks.

