Description (Corey Marthaler, 2017-10-26 15:19:51 UTC)
Description of problem:
[root@host-005 ~]# dmsetup info raid_luksvolume
Name:              raid_luksvolume
State:             ACTIVE
Read Ahead:        8192
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 7
Number of targets: 1
UUID: CRYPT-LUKS1-9d6097bf347241458a6208ee29eb5b6d-raid_luksvolume
[root@host-005 ~]# dmsetup info foobar
Device does not exist.
Command failed
[root@host-005 ~]# dmsetup -v status raid_sanity-open_LUKS_fsadm_resize
Name:              raid_sanity-open_LUKS_fsadm_resize
State:             ACTIVE
Read Ahead:        8192
Tables present:    LIVE
Open count:        1
Event number:      2
Major, minor:      253, 6
Number of targets: 1
UUID: LVM-xm7k9a6xRfg8Tndqt3rdAHgpHUFFsjmjb3rBeXnTFq868cP9YEu724T3VPwmpdqS
0 12582912 raid raid1 2 AA 12582912/12582912 idle 0 0 -
[root@host-005 ~]# dmsetup status raid_sanity-open_LUKS_fsadm_resize
0 12582912 raid raid1 2 AA 12582912/12582912 idle 0 0 -
[root@host-005 ~]# lvs foobar
Volume group "foobar" not found
# It would be more user friendly to provide a "does not exist" or "not found" message here as well
[root@host-005 ~]# dmsetup -v status foobar
Command failed
[root@host-005 ~]# dmsetup status foobar
Command failed
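
For reference, the missing-device condition is already visible to callers of libdevmapper: when the ioctl fails because the name does not map to a device, the kernel returns ENXIO ("No such device or address", the same error "dmsetup table" prints below), and dm_task_get_errno() exposes it on the task. A minimal standalone sketch of that check (not the dmsetup source; assumes the device-mapper-devel headers and linking with -ldevmapper):

#include <errno.h>
#include <stdio.h>
#include <libdevmapper.h>

int main(int argc, char **argv)
{
        struct dm_task *dmt;
        int ret = 1;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <dm-device-name>\n", argv[0]);
                return 1;
        }

        /* DM_DEVICE_STATUS issues the same status ioctl dmsetup uses. */
        if (!(dmt = dm_task_create(DM_DEVICE_STATUS)))
                return 1;

        if (!dm_task_set_name(dmt, argv[1]))
                goto out;

        if (!dm_task_run(dmt)) {
                /* The kernel reports a nonexistent device as ENXIO;
                 * dm_task_get_errno() makes that distinguishable
                 * from other failures. */
                if (dm_task_get_errno(dmt) == ENXIO)
                        fprintf(stderr, "Device does not exist.\n");
                fprintf(stderr, "Command failed\n");
                goto out;
        }

        printf("%s exists; status ioctl succeeded\n", argv[1]);
        ret = 0;
out:
        dm_task_destroy(dmt);
        return ret;
}

Run against a nonexistent name, this should print both "Device does not exist." and "Command failed", matching what "dmsetup info" already does.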
Version-Release number of selected component (if applicable):
lvm2-2.02.175-3.el7 BUILT: Wed Oct 25 02:03:21 CDT 2017
lvm2-libs-2.02.175-3.el7 BUILT: Wed Oct 25 02:03:21 CDT 2017
lvm2-cluster-2.02.175-3.el7 BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-1.02.144-3.el7 BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-libs-1.02.144-3.el7 BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-event-1.02.144-3.el7 BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-event-libs-1.02.144-3.el7 BUILT: Wed Oct 25 02:03:21 CDT 2017
device-mapper-persistent-data-0.7.3-2.el7 BUILT: Tue Oct 10 04:00:07 CDT 2017
Before the fix:
# dmsetup status fffff
Command failed
After the fix (running the locally built dmsetup):
# ./dmsetup status fffff
Device does not exist.
Command failed
Compare with "dmsetup table", which already reports the underlying error:
# dmsetup table fffff
device-mapper: table ioctl on fffff failed: No such device or address
Command failed
The library treats a missing device identically for INFO and STATUS, not as an error, so the calling code in dmsetup should handle the two cases consistently as well.
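
As a minimal sketch of that idea (a hypothetical helper, not the actual patch), both the info and status handlers could funnel a failed dm_task_run() through one shared error path before returning:

#include <errno.h>
#include <stdio.h>
#include <libdevmapper.h>

/* Hypothetical shared error path: called from both the info and
 * status handlers after dm_task_run() fails, so a missing device
 * is reported identically in both cases. */
static void _report_failed_task(struct dm_task *dmt)
{
        if (dm_task_get_errno(dmt) == ENXIO)
                fprintf(stderr, "Device does not exist.\n");
        fprintf(stderr, "Command failed\n");
}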
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2018:0853