Bug 984781 - connection4:0: detected conn error connection4:0: ping timeout of 5 secs expired

Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: rgmanager
Version: 6.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Assigned To: Ryan McCabe
QA Contact: Cluster QE
Reported: 2013-07-15 22:37 EDT by kevin_cai
Modified: 2013-09-12 08:19 EDT
CC: 3 users

Doc Type: Bug Fix
Last Closed: 2013-09-12 08:19:49 EDT
Type: Bug

Attachments
cluster.conf (2.59 KB, text/plain), 2013-07-16 21:46 EDT, kevin_cai
node 1:/var/log/messages (12.20 KB, text/plain), 2013-07-16 21:51 EDT, kevin_cai
node 2:/var/log/messages (14.17 KB, text/plain), 2013-07-16 22:04 EDT, kevin_cai

Description kevin_cai 2013-07-15 22:37:36 EDT
Description of problem:

RHCS (Red Hat Cluster Suite) failover failed.

# tail -1000 /var/log/messages
Jul 11 14:41:31 test kernel: connection3:0: ping timeout of 5 secs expired, recv timeout 5, last rx 6004337700, last ping 6004342700, now 6004347700
Jul 11 14:41:31 test kernel: connection3:0: detected conn error (1011)
Jul 11 14:41:32 test iscsid: Kernel reported iSCSI connection 3:0 error (1011) state (3)
Jul 11 14:41:35 test kernel: connection4:0: ping timeout of 5 secs expired, recv timeout 5, last rx 6004341701, last ping 6004346701, now 6004351701
Jul 11 14:41:35 test kernel: connection4:0: detected conn error (1011)
Jul 11 14:41:36 test iscsid: Kernel reported iSCSI connection 4:0 error (1011) state (3)
Jul 11 14:43:31 test kernel: session3: session recovery timed out after 120 secs
Jul 11 14:43:32 test multipathd: nebula: sdd - directio checker reports path is down
Jul 11 14:43:32 test multipathd: checker failed path 8:48 in map nebula
Jul 11 14:43:32 test multipathd: nebula: remaining active paths: 3
Jul 11 14:43:32 test kernel: sd 9:0:0:0: [sdd] Unhandled error code
Jul 11 14:43:32 test kernel: sd 9:0:0:0: [sdd] Result: hostbyte=DID_TRANSPORT_FAILFAST driverbyte=DRIVER_OK
Jul 11 14:43:32 test kernel: sd 9:0:0:0: [sdd] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
Jul 11 14:43:32 test kernel: device-mapper: multipath: Failing path 8:48.
Jul 11 14:43:35 test kernel: session4: session recovery timed out after 120 secs
Jul 11 14:43:36 test multipathd: nebula: sde - directio checker reports path is down
Jul 11 14:43:36 test kernel: sd 10:0:0:0: [sde] Unhandled error code.
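
For context: "conn error (1011)" is ISCSI_ERR_CONN_FAILED in the kernel's iscsi_if.h, raised here because the initiator's NOP-out pings went unanswered. The 5-second ping timeout and the 120-second session recovery window in the log correspond to the standard open-iscsi timeouts in /etc/iscsi/iscsid.conf. A sketch of the relevant settings with their usual RHEL 6 defaults (shown for orientation only, not as a recommended fix):

# /etc/iscsi/iscsid.conf
node.conn[0].timeo.noop_out_interval = 5      # seconds between NOP-out pings
node.conn[0].timeo.noop_out_timeout = 5      # -> "ping timeout of 5 secs expired"
node.session.timeo.replacement_timeout = 120  # -> "session recovery timed out after 120 secs"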

However, the TCP connections to the iSCSI targets on port 3260 still show as ESTABLISHED:
# netstat -an|grep 3260
tcp        0      0 10.11.200.7:36824           10.11.100.18:3260           ESTABLISHED 
tcp        0      0 10.11.200.7:42613           10.11.100.19:3260           ESTABLISHED 
tcp        0      0 10.11.200.7:54683           10.11.100.17:3260           ESTABLISHED 
tcp        0      0 10.11.200.7:57277           10.11.100.16:3260           ESTABLISHED 
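
Note that an ESTABLISHED TCP socket only shows the transport is up; it does not prove the iSCSI session is logged in and passing I/O. As a diagnostic sketch (session numbers and output will differ), the actual session state can be checked with:

# iscsiadm -m session -P 1

which reports the iSCSI session state (e.g. LOGGED_IN) and the internal iscsid state for each connection.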


Version-Release number of selected component (if applicable):

# rpm -qa|grep multipath
device-mapper-multipath-0.4.9-46.el6.x86_64
device-mapper-multipath-libs-0.4.9-46.el6.x86_64
# 
# rpm -qa|grep iscsi
iscsi-initiator-utils-6.2.0.872-34.el6.x86_64
# rpm -qa|grep cman
cman-3.0.12.1-23.el6.x86_64
# rpm -qa|grep rgmanager
rgmanager-3.0.12.1-5.el6.x86_64

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 2 Ryan McCabe 2013-07-16 11:35:53 EDT
We need at least your cluster.conf and relevant cluster logs at the time of the failure to troubleshoot this.
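For anyone assembling the same data, a hedged sketch of the usual collection commands on a RHEL 6 cluster (run on each node):

# clustat                 # service and member state as rgmanager sees it
# cman_tool status        # cluster membership and quorum state
# sosreport               # bundles cluster.conf, logs, and multipath state for support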
Comment 3 kevin_cai 2013-07-16 21:46:51 EDT
Created attachment 774587 [details]
cluster.conf
Comment 4 kevin_cai 2013-07-16 21:51:49 EDT
Created attachment 774588 [details]
node 1:/var/log/messages
Comment 5 kevin_cai 2013-07-16 22:04:24 EDT
Created attachment 774589 [details]
node 2:/var/log/messages
Comment 6 Fabio Massimo Di Nitto 2013-09-12 08:19:49 EDT
The logs clearly show that both nodes are experiencing storage issues.

Node NFJD-PSC-SGM-SV22 stops the service as soon as it detects the failure; the other node attempts to start the service a few seconds later, but its storage is failing as well. The logs are then flooded with multipath entries, and we can't see what's happening after that.
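
For anyone triaging the same symptom, a hedged sketch of confirming that all paths failed (the map name "nebula" is from this report; yours will differ):

# multipath -ll nebula        # per-path state: active/ready vs. failed/faulty
# multipathd -k"show paths"   # path checker state for every monitored path
# iscsiadm -m session -P 3    # full session/connection state and the timeouts in effect

If every path to the map fails on both nodes at once, the root cause is on the storage or network side rather than in rgmanager's failover logic, which matches the conclusion above.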
