Bug 1460274 - glusterd log shows glusterd graph loading with transport type as rdma on restart of glusterd
Status: NEW
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rdma
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Mohammed Rafi KC
QA Contact: Rahul Hinduja
Keywords: ZStream
Reported: 2017-06-09 10:33 EDT by nchilaka
Modified: 2018-03-07 13:24 EST

Type: Bug

Attachments: None
Description nchilaka 2017-06-09 10:33:23 EDT
Description of problem:
===================
When glusterd is restarted, the glusterd log shows the graph being loaded with transport type rdma, which is wrong.

Version-Release number of selected component (if applicable):
====
3.8.4-27

How reproducible:
===
Always

Steps to Reproduce:
1. Have a cluster.
2. Keep viewing the glusterd log using tail -f on the glusterd log file.
3. Restart glusterd.
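As a quick pre-check before the restart, the presence of the rdma transport module can be tested. This is a minimal sketch; the path is taken from the rpc_transport_load error in the log below, and the "3.8.4" version directory is specific to this build:

```shell
# Sketch: check whether the rdma transport shared object is installed.
# Path copied from the error message in the glusterd log; adjust the
# version directory (3.8.4 here) to match the installed build.
rdma_so=/usr/lib64/glusterfs/3.8.4/rpc-transport/rdma.so
if [ -e "$rdma_so" ]; then
    echo "rdma transport present"
else
    echo "rdma transport missing"
fi
```

In the first log capture below the module is absent ("cannot open shared object file"), yet the dumped graph still shows `option transport-type rdma`.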
[2017-06-07 12:56:23.753819] E [rpc-transport.c:283:rpc_transport_load] 0-rpc-transport: /usr/lib64/glusterfs/3.8.4/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
[2017-06-07 12:56:23.753847] W [rpc-transport.c:287:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
[2017-06-07 12:56:23.753862] W [rpcsvc.c:1646:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2017-06-07 12:56:23.753871] E [MSGID: 106243] [glusterd.c:1722:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2017-06-07 12:56:23.755148] E [MSGID: 101032] [store.c:433:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2017-06-07 12:56:23.755176] E [MSGID: 101032] [store.c:433:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2017-06-07 12:56:23.755182] I [MSGID: 106514] [glusterd-store.c:2123:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31100
[2017-06-07 12:56:23.755218] E [MSGID: 101032] [store.c:433:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/options. [No such file or directory]
[2017-06-07 12:56:23.757711] I [MSGID: 106194] [glusterd-store.c:3636:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option rpc-auth-allow-insecure on
  7:     option transport.socket.listen-backlog 128
  8:     option upgrade on
  9:     option event-threads 1
 10:     option ping-timeout 0
 11:     option transport.socket.read-fail-log off
 12:     option transport.socket.keepalive-interval 2
 13:     option transport.socket.keepalive-time 10
 14:     option transport-type rdma
 15:     option working-directory /var/lib/glusterd
 16: end-volume
 17:  
+------------------------------------------------------------------------------+
[2017-06-07 12:56:23.758306] I [MSGID: 101190] [event-epoll.c:602:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-07 12:56:23.758364] W [glusterfsd.c:1290:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7ffb29b88e25] -->glusterd(glusterfs_sigwaiter+0xe5) [0x7ffb2b222005] -->glusterd(cleanup_and_exit+0x6b) [0x7ffb2b221e2b] ) 0-: received signum (15), shutting down
[2017-06-07 12:57:11.413818] I [MSGID: 100030] [glusterfsd.c:2431:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.8.4 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2017-06-07 12:57:11.439378] I [MSGID: 106478] [glusterd.c:1449:init] 0-management: Maximum allowed open file descriptors set to 65536
[2017-06-07 12:57:11.439471] I [MSGID: 106479] [glusterd.c:1498:init] 0-management: Using /var/lib/glusterd as working directory
[2017-06-07 12:57:11.454640] W [MSGID: 103071] [rdma.c:4590:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2017-06-07 12:57:11.454680] W [MSGID: 103055] [rdma.c:4897:init] 0-rdma.management: Failed to initialize IB Device
[2017-06-07 12:57:11.454689] W [rpc-transport.c:350:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2017-06-07 12:57:11.454843] W [rpcsvc.c:1646:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed
[2017-06-07 12:57:11.454859] E [MSGID: 106243] [glusterd.c:1722:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport
[2017-06-07 12:57:14.921596] E [MSGID: 101032] [store.c:433:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2017-06-07 12:57:14.921723] E [MSGID: 101032] [store.c:433:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory]
[2017-06-07 12:57:14.921738] I [MSGID: 106514] [glusterd-store.c:2123:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 31100
[2017-06-07 12:57:14.922428] I [MSGID: 106194] [glusterd-store.c:3636:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list.
Final graph:
+------------------------------------------------------------------------------+
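The misleading line in the dumped graph can be isolated by filtering for the transport-type option. To keep this sketch self-contained, a sample line copied from the "Final graph" dump above is piped through the filter instead of reading the live log:

```shell
# Filter a graph dump for the transport-type option. The sample input is
# line 14 of the "Final graph" dump above; against a live log you would
# pipe the glusterd log through the same grep.
printf ' 14:     option transport-type rdma\n' | grep -o 'transport-type [a-z]*'
# → transport-type rdma
```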
Comment 2 Atin Mukherjee 2017-06-09 11:38:00 EDT
No functional impact; this is a bug in how the graph is dumped to the log. As agreed, moving this out beyond 3.3.0.
