Bug 2193222
| Summary: | uncaching or splitcaching write cache volumes with raid+integrity cause 'multipathd[]: libdevmapper: ioctl/libdm-iface.c' failure messages | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | LVM Team <lvm-team> |
| lvm2 sub component: | Cache Logical Volumes | QA Contact: | cluster-qe <cluster-qe> |
| Status: | NEW | Docs Contact: | |
| Severity: | low | | |
| Priority: | low | CC: | agk, bmarzins, heinzm, jbrassow, msnitzer, prajnoha, teigland, zkabelac |
| Version: | 9.3 | Keywords: | Triaged |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
Description
Corey Marthaler
2023-05-04 18:22:13 UTC
multipathd should not be seeing, or should be ignoring, the internal lvm devices. I'm also seeing this with just writecache on raid:

multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr1_wcorig_rimage_0 failed: No such device or address
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr1_wcorig_rmeta_0 failed: No such device or address
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr1_wcorig_rmeta_1 failed: No such device or address

This also happens with dm-cache on raid:

multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_0 failed: No such device or ad>
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_0_imeta failed: No such device>
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_0_iorig failed: No such device>
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_1 failed: No such device or ad>
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rimage_1_iorig failed: No such device>
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rmeta_0 failed: No such device or add>
multipathd[891]: libdevmapper: ioctl/libdm-iface.c(1998): device-mapper: table ioctl on ff-rr2_corig_rmeta_1 failed: No such device or add>

I think this is an old issue, resulting from the fact that dm-raid devices do not include a dm uuid suffix. Adding a suffix to dm uuids is a magical way of telling blkid to ignore the device (a rough sketch of that check follows at the end of this comment):
https://github.com/util-linux/util-linux/blob/master/lib/sysfs.c#L653
(I'm not sure how multipathd applies this logic to other dm devices that do use suffixes.)

These messages occur while multipathd is listening for dm events. When a new dm event occurs, it triggers the multipathd event polling code. The first thing this code does is get a list of all dm devices with the DM_DEVICE_LIST ioctl. It then calls the DM_DEVICE_TABLE ioctl on each device to see if it's a multipath device. If a device is removed after a dm event is triggered, between when the dm device list is populated and when the DM_DEVICE_TABLE ioctl is run on that device, the libdevmapper code will log an error (a minimal sketch of this sequence also follows below). Multipathd can work around this, but not without adding additional pointless work that it would do for all non-multipath devices.
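To illustrate the uuid-suffix point above, here is a rough C paraphrase of the util-linux check linked from lib/sysfs.c. This is a sketch from memory, not the actual util-linux code: the helper name and sample uuids are invented, and the real code reads the uuid from the device's dm/uuid sysfs attribute rather than taking a string argument.

```c
#include <stdio.h>
#include <string.h>

/*
 * Sketch (names invented): an LVM dm uuid is "LVM-" followed by the
 * concatenated VG and LV uuids, which contain no '-'.  Private
 * (internal) devices append a suffix such as "-cow" or "-cdata", so
 * any '-' after the prefix marks a device that blkid should skip.
 */
static int dm_uuid_is_private_lvm(const char *uuid)
{
	const char *p;

	if (strncmp(uuid, "LVM-", 4) != 0)
		return 0;
	p = strrchr(uuid + 4, '-');
	return p != NULL && *(p + 1) != '\0';
}

int main(void)
{
	/* made-up uuids, shortened for readability */
	const char *hidden  = "LVM-AAAA1111BBBB2222-cdata"; /* suffix: skipped */
	const char *visible = "LVM-AAAA1111BBBB2222";       /* no suffix: scanned */

	printf("%d %d\n", dm_uuid_is_private_lvm(hidden),
	       dm_uuid_is_private_lvm(visible));
	return 0;
}
```

Since the raid subLVs (_rimage/_rmeta) get uuids with no such suffix, a check like this treats them as public devices, which matches the report that it is the raid-layer subdevices generating the multipathd noise.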
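The polling sequence described in the last paragraph can be sketched with the public libdevmapper API. This is a minimal standalone approximation, not multipathd's actual code: it lists all dm devices with DM_DEVICE_LIST and then runs DM_DEVICE_TABLE on each name, which is exactly the window in which a concurrently removed device makes the table ioctl fail and libdevmapper itself log the "table ioctl ... failed: No such device or address" message.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <libdevmapper.h>

/* Approximation of multipathd's event-poll scan: list every dm
 * device, then fetch each device's table to see whether its first
 * target type is "multipath".  Build with: cc ... -ldevmapper */
int main(void)
{
	struct dm_task *list_task;
	struct dm_names *names;
	unsigned next = 0;

	if (!(list_task = dm_task_create(DM_DEVICE_LIST)))
		return 1;
	if (!dm_task_run(list_task)) {
		dm_task_destroy(list_task);
		return 1;
	}

	names = dm_task_get_names(list_task);
	if (names->dev) do {
		struct dm_task *table_task;
		uint64_t start, length;
		char *type = NULL, *params = NULL;

		/* dm_names entries are chained by byte offset */
		names = (struct dm_names *)((char *)names + next);

		if ((table_task = dm_task_create(DM_DEVICE_TABLE))) {
			dm_task_set_name(table_task, names->name);
			/*
			 * If this device was removed after DM_DEVICE_LIST
			 * populated the list, this ioctl fails with ENXIO
			 * and libdevmapper logs "table ioctl on <name>
			 * failed: No such device or address" -- the
			 * message seen in this report.
			 */
			if (dm_task_run(table_task)) {
				dm_get_next_target(table_task, NULL, &start,
						   &length, &type, &params);
				if (type && !strcmp(type, "multipath"))
					printf("%s is a multipath device\n",
					       names->name);
			}
			dm_task_destroy(table_task);
		}
		next = names->next;
	} while (next);

	dm_task_destroy(list_task);
	return 0;
}
```

The race is inherent to the two-step list/table scan: nothing pins a device between the two ioctls, so any per-device workaround (e.g. extra per-device queries before the table call) just adds work for every non-multipath device without closing the window.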