Cause: sysfs prints the Fibre Channel remote port dev_loss_tmo parameter as a signed integer, although the value is actually unsigned, and multipath expected an unsigned value.
Consequence: When multipath sets dev_loss_tmo to the maximum value because a device is configured to queue forever, it prints error messages because dev_loss_tmo appears in sysfs as a negative number.
Fix: Multipath now converts any negative value read from sysfs to the corresponding unsigned value.
Result: Multipath can now handle signed or unsigned values for dev_loss_tmo in sysfs and no longer prints an error message for large dev_loss_tmo values.
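As a rough illustration only (a hypothetical sketch, not the actual multipath-tools change), a parser that tolerates both the signed and unsigned sysfs renderings of dev_loss_tmo could look like this:

#include <errno.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper, not the multipath-tools implementation: accept a
 * dev_loss_tmo string from sysfs that may be rendered either unsigned
 * ("4294967295") or signed ("-1"), and return the unsigned 32-bit value. */
static int parse_dev_loss_tmo(const char *attr, uint32_t *tmo)
{
	char *end;
	long long val;

	errno = 0;
	val = strtoll(attr, &end, 10);
	if (errno || end == attr)
		return -1;                   /* not a number */
	if (val < 0)
		val += 4294967296LL;         /* e.g. -1 -> 4294967295 */
	if (val < 0 || val > UINT32_MAX)
		return -1;                   /* out of range even after conversion */
	*tmo = (uint32_t)val;
	return 0;
}

int main(void)
{
	uint32_t tmo;

	/* "-1" in sysfs corresponds to the maximum dev_loss_tmo */
	if (parse_dev_loss_tmo("-1", &tmo) == 0)
		printf("dev_loss_tmo = %" PRIu32 "\n", tmo);  /* 4294967295 */
	return 0;
}

Adding 2^32 to a negative reading reconstructs the unsigned bit pattern the kernel actually stores, which is one way "any negative number" can be mapped back to "the appropriate number" described in the Fix above.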
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (device-mapper-multipath bug fix and enhancement update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2020:4540
Description of problem:
When running service multipathd status, we are seeing the following:

[root@storageqe-05 ~]# service multipathd status
Redirecting to /bin/systemctl status multipathd.service
● multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-06-25 13:59:02 EDT; 13s ago
  Process: 46885 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 46883 ExecStartPre=/sbin/modprobe -a scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 46887 (multipathd)
   Status: "up"
    Tasks: 7
   Memory: 12.5M
   CGroup: /system.slice/multipathd.service
           └─46887 /sbin/multipathd -d -s

Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: rport-1:0-4: Cannot parse dev_loss_tmo attribute '-1'
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: rport-4:0-3: Cannot parse dev_loss_tmo attribute '-1'
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: rport-4:0-4: Cannot parse dev_loss_tmo attribute '-1'
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: 360a98000324669436c2b45666c567869: load table [0 4194304 multipath 3 pg_init_retries 50 queue_if_no_path 1 alua 2 1 service-time 0 2 1 8:48 1 8:240 1 service-time>
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: rport-1:0-3: Cannot parse dev_loss_tmo attribute '-1'
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: rport-1:0-4: Cannot parse dev_loss_tmo attribute '-1'
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: rport-4:0-3: Cannot parse dev_loss_tmo attribute '-1'
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: rport-4:0-4: Cannot parse dev_loss_tmo attribute '-1'
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com multipathd[46887]: 360a98000324669436c2b45666c56786b: load table [0 4194304 multipath 3 pg_init_retries 50 queue_if_no_path 1 alua 2 1 service-time 0 2 1 8:64 1 65:0 1 service-time >
Jun 25 13:59:02 storageqe-05.sqe.lab.eng.bos.redhat.com systemd[1]: Started Device-Mapper Multipath Device Controller.

# cat /sys/devices/pci0000:00/0000:00:08.0/0000:0a:00.0/host1/rport-1:0-4/fc_remote_ports/rport-1:0-4/dev_loss_tmo
-1

# cat /sys/class/fc_host/host1/dev_loss_tmo
30

Version-Release number of selected component (if applicable):
device-mapper-multipath-0.8.4-2.el8

How reproducible:
Often

Steps to Reproduce:
1. Enable DM-Multipath
2. service multipathd status

Actual results:
Cannot parse dev_loss_tmo attribute '-1' messages in logs

Expected results:
no cannot parse messages are expected

Additional info:
We did not see this with RHEL-8.2 and device-mapper-multipath-0.8.3-3.el8
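For illustration (an assumption on my part, not taken from the report itself): the '-1' shown by the fc_remote_ports dev_loss_tmo attribute above is consistent with the maximum unsigned 32-bit value being formatted through a signed conversion, e.g.:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t dev_loss_tmo = UINT32_MAX;  /* maximum value used for "queue forever" */

	/* The same 32-bit pattern prints as -1 through a signed conversion
	 * (two's-complement assumption) and as 4294967295 when treated as
	 * unsigned, matching the sysfs output quoted above. */
	printf("signed:   %d\n", (int)dev_loss_tmo);
	printf("unsigned: %u\n", (unsigned int)dev_loss_tmo);
	return 0;
}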