Bug 1647567 - dmstats list --group "Command failed."
Summary: dmstats list --group "Command failed."
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Bryn M. Reeves
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-07 19:44 UTC by Corey Marthaler
Modified: 2021-09-03 12:56 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.186-3.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-31 20:04:48 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2020:1129 (last updated 2020-03-31 20:05:27 UTC)

Description Corey Marthaler 2018-11-07 19:44:17 UTC
Description of problem:
I'm not sure if this is a regression of bug 1591504/bug 1596129 or if it's a different issue altogether in the latest 7.6.z rpms.


[root@host-086 ~]# dmstats list --group
Command failed.
Name             GrpID RgID ObjType RgStart RgSize  #Areas ArSize  ProgID 
host-086_load        0    0 group     9.00m 977.00k      1 977.00k dmstats


[root@host-086 ~]# dmstats list --histogram
Name                                    GrpID RgID ObjType RgStart RgSize  #Areas ArSize  #Bins     Histogram Bounds
rhel_host--086-pool00_tmeta                 -    0 region        0   4.00m      1   4.00m 0                         
rhel_host--086-swap                         -    0 region        0 820.00m      1 820.00m 0                         
rhel_host--086-root                         -    0 region        0   4.79g      1   4.79g 0                         
rhel_host--086-pool00                       -    0 region        0   4.79g      1   4.79g 0                         
host-086_load                               0    0 region    9.00m 977.00k      1 977.00k 0                         
black_bird-synced_primary_raid1_2legs_1     -    1 region        0 500.00m      1 500.00m 0                         
host-086_load                               0    0 group     9.00m 977.00k      1 977.00k 0                         
rhel_host--086-pool00-tpool                 -    0 region        0   4.79g      1   4.79g 0                         
rhel_host--086-pool00_tdata                 -    0 region        0   4.79g      1   4.79g 0                         


[root@host-086 ~]# dmstats -vvv list --group
Timestamp:       0.000000000 seconds
dm version   [ opencount flush ]   [16384] (*1)
dm names   [ opencount flush ]   [16384] (*1)
Timestamp:       0.001539551 seconds
dm info rhel_host--086-pool00_tmeta  [ opencount flush ]   [16384] (*1)
Timestamp:       0.001758625 seconds
dm info   (253:0) [ opencount flush ]   [16384] (*1)
dm message   (253:0) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm_stats_walk_init: initialised flags to 4000000000000
starting stats walk with   GROUP 
dm_stats_walk_init: initialised flags to f000000000000
starting stats walk with AREA REGION GROUP SKIP
dm info black_bird-synced_primary_raid1_2legs_1_rmeta_1  [ opencount flush ]   [16384] (*1)
Timestamp:       0.002656434 seconds
dm info   (253:8) [ opencount flush ]   [16384] (*1)
dm message   (253:8) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm info rhel_host--086-swap  [ opencount flush ]   [16384] (*1)
Timestamp:       0.002792892 seconds
dm info   (253:4) [ opencount flush ]   [16384] (*1)
dm message   (253:4) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm_stats_walk_init: initialised flags to 4000000000000
starting stats walk with   GROUP 
dm_stats_walk_init: initialised flags to f000000000000
starting stats walk with AREA REGION GROUP SKIP
dm info rhel_host--086-root  [ opencount flush ]   [16384] (*1)
Timestamp:       0.003067131 seconds
dm info   (253:3) [ opencount flush ]   [16384] (*1)
dm message   (253:3) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm_stats_walk_init: initialised flags to 4000000000000
starting stats walk with   GROUP 
dm_stats_walk_init: initialised flags to f000000000000
starting stats walk with AREA REGION GROUP SKIP
dm info black_bird-synced_primary_raid1_2legs_1_rmeta_0  [ opencount flush ]   [16384] (*1)
Timestamp:       0.003409239 seconds
dm info   (253:6) [ opencount flush ]   [16384] (*1)
dm message   (253:6) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm info rhel_host--086-pool00  [ opencount flush ]   [16384] (*1)
Timestamp:       0.003614900 seconds
dm info   (253:5) [ opencount flush ]   [16384] (*1)
dm message   (253:5) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm_stats_walk_init: initialised flags to 4000000000000
starting stats walk with   GROUP 
dm_stats_walk_init: initialised flags to f000000000000
starting stats walk with AREA REGION GROUP SKIP
dm info black_bird-bb_snap1  [ opencount flush ]   [16384] (*1)
Timestamp:       0.003881869 seconds
dm info   (253:13) [ opencount flush ]   [16384] (*1)
dm message   (253:13) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm info black_bird-synced_primary_raid1_2legs_1  [ opencount flush ]   [16384] (*1)
Timestamp:       0.004009990 seconds
dm info   (253:10) [ opencount flush ]   [16384] (*1)
dm message   (253:10) [ opencount flush ]  @stats_list dmstats [16384] (*1)
Read alias 'host-086_load' from aux_data
Found group_id 0: alias="host-086_load"
dm_stats_walk_init: initialised flags to 4000000000000
starting stats walk with   GROUP 
dm_stats_walk_init: initialised flags to f000000000000
starting stats walk with AREA REGION GROUP SKIP
dm info black_bird-bb_snap1-cow  [ opencount flush ]   [16384] (*1)
Timestamp:       0.004388195 seconds
dm info   (253:12) [ opencount flush ]   [16384] (*1)
dm message   (253:12) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm info black_bird-synced_primary_raid1_2legs_1_rimage_1  [ opencount flush ]   [16384] (*1)
Timestamp:       0.004579367 seconds
dm info   (253:9) [ opencount flush ]   [16384] (*1)
dm message   (253:9) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm info black_bird-synced_primary_raid1_2legs_1-real  [ opencount flush ]   [16384] (*1)
Timestamp:       0.004708772 seconds
dm info   (253:11) [ opencount flush ]   [16384] (*1)
dm message   (253:11) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm info black_bird-synced_primary_raid1_2legs_1_rimage_0  [ opencount flush ]   [16384] (*1)
Timestamp:       0.004836639 seconds
dm info   (253:7) [ opencount flush ]   [16384] (*1)
dm message   (253:7) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm info rhel_host--086-pool00-tpool  [ opencount flush ]   [16384] (*1)
Timestamp:       0.004963073 seconds
dm info   (253:2) [ opencount flush ]   [16384] (*1)
dm message   (253:2) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm_stats_walk_init: initialised flags to 4000000000000
starting stats walk with   GROUP 
dm_stats_walk_init: initialised flags to f000000000000
starting stats walk with AREA REGION GROUP SKIP
dm info rhel_host--086-pool00_tdata  [ opencount flush ]   [16384] (*1)
Timestamp:       0.005251631 seconds
dm info   (253:1) [ opencount flush ]   [16384] (*1)
dm message   (253:1) [ opencount flush ]  @stats_list dmstats [16384] (*1)
dm_stats_walk_init: initialised flags to 4000000000000
starting stats walk with   GROUP 
dm_stats_walk_init: initialised flags to f000000000000
starting stats walk with AREA REGION GROUP SKIP
Name                                    GrpID RgID ObjType RgStart RgSize  #Areas ArID ArStart ArSize  ProgID 
rhel_host--086-pool00_tmeta                 -    0 area          0   4.00m      1    0       0   4.00m dmstats
rhel_host--086-swap                         -    0 area          0 820.00m      1    0       0 820.00m dmstats
rhel_host--086-root                         -    0 area          0   4.79g      1    0       0   4.79g dmstats
rhel_host--086-pool00                       -    0 area          0   4.79g      1    0       0   4.79g dmstats
host-086_load                               0    0 area      9.00m 977.00k      1    0   9.00m 977.00k dmstats
black_bird-synced_primary_raid1_2legs_1     -    1 area          0 500.00m      1    0       0 500.00m dmstats
host-086_load                               0    0 group     9.00m 977.00k      1    0   9.00m 977.00k dmstats
rhel_host--086-pool00-tpool                 -    0 area          0   4.79g      1    0       0   4.79g dmstats
rhel_host--086-pool00_tdata                 -    0 area          0   4.79g      1    0       0   4.79g dmstats




Version-Release number of selected component (if applicable):
3.10.0-957.el7.x86_64

lvm2-2.02.180-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
lvm2-libs-2.02.180-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
lvm2-cluster-2.02.180-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
lvm2-lockd-2.02.180-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
lvm2-python-boom-0.9-11.el7    BUILT: Mon Sep 10 04:49:22 CDT 2018
cmirror-2.02.180-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
device-mapper-1.02.149-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
device-mapper-libs-1.02.149-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
device-mapper-event-1.02.149-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
device-mapper-event-libs-1.02.149-10.el7_6.2    BUILT: Wed Oct 31 03:55:58 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017

Comment 2 Bryn M. Reeves 2019-09-30 16:13:12 UTC
This happens because _display_info_cols() returns zero for a device that has at least one region present but no groups when the walk flags are set to DM_STATS_WALK_GROUP (walk groups only):

        /* group report with no groups? */
        if ((walk_flags == DM_STATS_WALK_GROUP)
            && !dm_stats_get_nr_groups(obj.stats))
                goto out;

To reproduce, you just need at least one device with one or more regions and no groups:

# dmstats create rhel/root
rhel-root: Created new region with 1 area(s) as region ID 0

# dmstats list --group
Command failed.

The fix is to correctly set r=1 before jumping to 'out' in _display_info_cols() (as we already do in the case of there being no regions to report):

        /* group report with no groups? */
        if ((walk_flags == DM_STATS_WALK_GROUP)
            && !dm_stats_get_nr_groups(obj.stats)) {
                r = 1;
                goto out;
        }
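
For reference, the existing early return for the no-regions case that this mirrors looks roughly like the following (a sketch of the corresponding check in _display_info_cols(), not a verbatim quote of the upstream source):

        /* device has no regions: nothing to report, but not an error */
        if (!dm_stats_get_nr_regions(obj.stats)) {
                r = 1;
                goto out;
        }

With both checks setting r = 1, an empty region or group list is reported as success instead of "Command failed."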

Comment 3 Bryn M. Reeves 2019-09-30 16:13:37 UTC
Fixed in master & stable-2.02:

Author: Bryn M. Reeves <bmr>
Date:   Mon Sep 30 17:10:05 2019 +0100

    dmsetup: do not treat no groups as an error in dmstats list --group
    
    Analogous to the case of a device with no regions, it is not an
    error to attempt to list the stats groups on a device that has no
    configured groups: just return success and continue.

Comment 6 Corey Marthaler 2019-11-08 18:04:12 UTC
This appears to be fixed again in the latest rpms.

3.10.0-1109.el7.x86_64

lvm2-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-libs-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-cluster-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-lockd-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
lvm2-python-boom-0.9-20.el7    BUILT: Tue Sep 24 06:18:20 CDT 2019
cmirror-2.02.186-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-event-libs-1.02.164-3.el7    BUILT: Fri Nov  8 07:07:01 CST 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019



[root@hayes-01 ~]# dmstats report
Name                                    GrpID RgID ObjType ArID ArStart ArSize  RMrg/s WMrg/s R/s  W/s    RSz/s WSz/s  AvgRqSz QSize Util%  AWait RdAWait WrAWait
hayes-01_load                               0    0 area       0  48.00m 977.00k   0.00   0.00 0.00  40.00     0  9.54m 244.00k  0.68  22.10 17.02    0.00   17.02
black_bird-synced_primary_raid1_2legs_1     -    1 area       0       0 500.00m   0.00   0.00 0.00 170.00     0 13.62m  82.00k  3.10 100.00 18.26    0.00   18.26
hayes-01_load                               0    0 group      0  48.00m 977.00k   0.00   0.00 0.00  40.00     0  9.54m 244.00k  0.68  22.10 17.02    0.00   17.02

[root@hayes-01 ~]# dmstats list --group
Name             GrpID RgID ObjType RgStart RgSize  #Areas ArSize  ProgID 
hayes-01_load        0    0 group    48.00m 977.00k      1 977.00k dmstats

Comment 8 errata-xmlrpc 2020-03-31 20:04:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129

