Bug 2193222
| Summary: | uncaching or splitcaching write cache volumes with raid+integrity cause 'multipathd[]: libdevmapper: ioctl/libdm-iface.c' failure messages | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | LVM Team <lvm-team> |
| lvm2 sub component: | Cache Logical Volumes | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED MIGRATED | Docs Contact: | |
| Severity: | low | | |
| Priority: | low | CC: | agk, bmarzins, heinzm, jbrassow, msnitzer, prajnoha, teigland, zkabelac |
| Version: | 9.3 | Keywords: | MigratedToJIRA, Triaged |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-09-23 19:02:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Corey Marthaler 2023-05-04 18:22:13 UTC
multipathd should not be seeing, or should be ignoring, the internal lvm devices. I'm also seeing this with just writecache on raid:

    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr1_wcorig_rimage_0 failed: No such device or address
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr1_wcorig_rmeta_0 failed: No such device or address
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr1_wcorig_rmeta_1 failed: No such device or address

This also happens with dm-cache on raid:

    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_0 failed: No such device or ad>
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_0_imeta failed: No such device>
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_0_iorig failed: No such device>
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_1 failed: No such device or ad>
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_1_iorig failed: No such device>
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rmeta_0 failed: No such device or add>
    multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rmeta_1 failed: No such device or add>

I think this is an old issue, resulting from the fact that dm-raid devices do not include a dm uuid suffix. Adding a suffix to a dm uuid is a magical way of telling blkid to ignore the device: https://github.com/util-linux/util-linux/blob/master/lib/sysfs.c#L653 (I'm not sure how multipathd is applying this logic for other dm devices that do use suffixes.) These messages occur while multipathd is listening for dm events.
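The suffix convention referred to above can be sketched roughly as follows. This is an illustration, not the real util-linux code: the function name and the exact parsing rule are assumptions based on the LVM dm uuid layout ("LVM-" plus a 32-character VG uuid and a 32-character LV uuid, optionally followed by "-<suffix>" for internal sub-devices).

```python
def dm_uuid_is_private(dm_uuid: str) -> bool:
    """Hedged sketch of the suffix-based 'hidden device' test.

    LVM uuids themselves contain no '-', so any '-' appearing after the
    'LVM-' prefix introduces a suffix such as '-cow', '-pool' or '-real',
    which marks the device as internal and tells scanners to skip it.
    """
    if not dm_uuid.startswith("LVM-"):
        return False
    body = dm_uuid[len("LVM-"):]
    return "-" in body

# A raid sub-LV like rr1_wcorig_rimage_0 gets no suffix, so from the uuid
# alone a scanner cannot tell that it is an internal device:
print(dm_uuid_is_private("LVM-" + "a" * 64))           # no suffix: looks public
print(dm_uuid_is_private("LVM-" + "a" * 64 + "-cow"))  # suffix: hidden
```

Under this (assumed) rule, the rimage/rmeta sub-LVs in the logs above look like ordinary public devices, which is consistent with multipathd probing them.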
When a new dm event occurs, it triggers the multipathd event polling code. The first thing this code does is get a list of all dm devices with the DM_DEVICE_LIST ioctl. It then calls the DM_DEVICE_TABLE ioctl on each device to see whether it is a multipath device. If a device is removed after a dm event is triggered, between when the dm device list is populated and when the DM_DEVICE_TABLE ioctl is run on the device, the libdevmapper code will log an error. Multipathd can work around this, but not without adding additional pointless work that it would do on all non-multipath devices.

Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there. Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.
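The list-then-query race described in the comment above can be sketched with a toy model. None of this is the libdevmapper API; the class and method names are illustrative stand-ins for the DM_DEVICE_LIST and DM_DEVICE_TABLE ioctls.

```python
class FakeDeviceMapper:
    """Toy stand-in for the dm ioctl interface (illustrative only)."""

    def __init__(self, tables):
        self.tables = dict(tables)

    def device_list(self):
        # Stands in for DM_DEVICE_LIST: a snapshot of device names.
        return list(self.tables)

    def device_table(self, name):
        # Stands in for DM_DEVICE_TABLE: fails if the device is gone.
        if name not in self.tables:
            raise OSError(f"table ioctl on {name} failed: No such device or address")
        return self.tables[name]


dm = FakeDeviceMapper({
    "mpatha": "0 2097152 multipath ...",
    "ff-rr1_wcorig_rimage_0": "0 2097152 linear ...",
})

snapshot = dm.device_list()                # step 1: list all dm devices
del dm.tables["ff-rr1_wcorig_rimage_0"]    # device removed in between

errors = []
for name in snapshot:                      # step 2: query each device's table
    try:
        dm.device_table(name)
    except OSError as exc:
        errors.append(str(exc))            # what libdevmapper would log

print(errors)
```

The workaround the comment alludes to would amount to tolerating the error in step 2 for every non-multipath device, which is exactly the extra pointless work described above.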