Description of problem:
(Found using vdsm.) After many successful logins to the same target, one login command just hangs forever:

Thread-1873::DEBUG::2011-09-04 14:02:33,847::iscsi::373::Storage.Misc.excCmd::(addiSCSIPortal) '/usr/bin/sudo -n /sbin/iscsiadm -m discoverydb -t sendtargets -p 10.35.64.25:3260 --discover' (cwd None)
Thread-1873::DEBUG::2011-09-04 14:02:33,980::iscsi::373::Storage.Misc.excCmd::(addiSCSIPortal) SUCCESS: <err> = ''; <rc> = 0
Thread-1873::DEBUG::2011-09-04 14:02:33,981::iscsi::471::Storage.Misc.excCmd::(addiSCSINode) '/usr/bin/sudo -n /sbin/iscsiadm -m node -T RUTH1 -l -p 10.35.64.25:3260' (cwd None)

[root@pink-vds2 ~]# iscsiadm -m session
iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session16
iscsiadm: No active sessions.

From /var/log/messages:

Sep  4 14:02:33 pink-vds2 kernel: NFS: Cache request denied due to non-unique superblock keys
Sep  4 14:02:34 pink-vds2 kernel: scsi18 : iSCSI Initiator over TCP/IP
Sep  4 14:02:35 pink-vds2 iscsid: Could not set session16 priority. READ/WRITE throughout and latency could be affected.
Sep  4 14:02:35 pink-vds2 iscsid: Received iferror -22: Invalid argument.
Sep  4 14:02:35 pink-vds2 iscsid: Can't create connection.
Sep  4 14:02:35 pink-vds2 iscsid: Received iferror -22: Invalid argument.
Sep  4 14:02:35 pink-vds2 iscsid: can not safely destroy connection 0
Sep  4 14:02:50 pink-vds2 iscsid: Received iferror -22: Invalid argument.
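A quick way to check whether a host is hitting the same failure signature is to grep the messages log for the iferror -22 / conn-bind lines quoted above. A minimal sketch; the helper name is mine for illustration, not an iscsid or iscsiadm tool:

```shell
# has_bind_failure FILE: exit 0 if FILE contains the iscsid failure
# signature seen in this report (iferror -22 or a conn-bind failure).
# The function name is illustrative, not part of iscsi-initiator-utils.
has_bind_failure() {
    grep -Eq "iscsid: (Received iferror -22|can't bind conn)" "$1"
}
```

Usage: `has_bind_failure /var/log/messages && echo "hit the bug signature"`.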
Sep  4 14:02:50 pink-vds2 iscsid: can't bind conn 16:0 to session 16, retcode 1 (2)

[root@pink-vds2 ~]# cat /var/lib/iscsi/nodes/RUTH1/10.35.64.25,3260,1/default
# BEGIN RECORD 2.0-872
node.name = RUTH1
node.tpgt = 1
node.startup = manual
node.leading_login = No
iface.iscsi_ifacename = default
iface.transport_name = tcp
node.discovery_address = 10.35.64.25
node.discovery_port = 3260
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 4
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.nr_sessions = 1
node.session.auth.authmethod = None
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 10.35.64.25
node.conn[0].port = 3260
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD

The problem is reproducible with a repeated discover -> login -> logout loop:

while true; do
    /sbin/iscsiadm -m discoverydb -t sendtargets -p 10.35.64.25:3260 --discover
    iscsiadm -m node -T TARGET1 -l
    iscsiadm -m node -T TARGET1 -u
done

After some successful logins, one login just hangs forever.

Version-Release number of selected component (if applicable):
iscsi-initiator-utils-6.2.0.872-24.el6.x86_64

Additional info:
Attached strace of the hung login command.
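For unattended reproduction, the loop above can be bounded and each login wrapped in coreutils `timeout`, so a hung iscsiadm invocation is killed and reported (exit code 124) instead of blocking the shell forever. A sketch under that assumption, reusing the portal and placeholder target name from this report; the helper names are mine:

```shell
#!/bin/sh
# Bounded reproducer sketch. A login that exceeds LOGIN_TIMEOUT
# seconds is killed by coreutils `timeout` and reported as a hang.
PORTAL=${PORTAL:-10.35.64.25:3260}
TARGET=${TARGET:-TARGET1}
LOGIN_TIMEOUT=${LOGIN_TIMEOUT:-60}

# Run a command under the timeout cap; exit code 124 means it hung.
run_capped() {
    timeout "$LOGIN_TIMEOUT" "$@"
}

# Repeat discover -> login -> logout, stopping at the first hang
# or failure rather than spinning forever like the `while true` loop.
reproduce() {
    i=0
    while [ "$i" -lt 200 ]; do
        i=$((i + 1))
        /sbin/iscsiadm -m discoverydb -t sendtargets -p "$PORTAL" --discover || return 1
        run_capped iscsiadm -m node -T "$TARGET" -p "$PORTAL" -l
        rc=$?
        if [ "$rc" -eq 124 ]; then
            echo "login hung on iteration $i" >&2
            return 1
        elif [ "$rc" -ne 0 ]; then
            return 1
        fi
        iscsiadm -m node -T "$TARGET" -p "$PORTAL" -u
    done
}
```

Calling `reproduce` (as root, with a reachable target) prints the iteration at which the login hung, which makes it easier to attach strace to exactly the stuck invocation.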
This is a regression introduced in the iscsi-initiator-utils -22 build we made for RHEL 6.2. I am working on it here: https://bugzilla.redhat.com/show_bug.cgi?id=736116. It is the result of patches from QLogic adding support for their cards, but the bug affects all iSCSI modules. I hope to have a fix in a couple of days.
*** This bug has been marked as a duplicate of bug 736116 ***