Red Hat Bugzilla – Attachment 868463 Details for Bug 1070685 – glusterfs ipv6 functionality not working
Trace logs when peer probe failed & patch

Description: Trace logs when peer probe failed & patch
Filename:    buglog1
MIME Type:   text/plain
Creator:     nithin.kumar.d
Created:     2014-02-27 11:06:33 UTC
Size:        26.67 KB
Flags:       patch, obsolete
>root@appvm2-VirtualBox:~# gluster volume info all
>No volumes present
>root@appvm2-VirtualBox:~# gluster peer probe fec0::d6be:d9ff:fe00:6535
>Probe unsuccessful
>Probe returned with unknown errno 107
>root@appvm2-VirtualBox:~# cat logfile
>[2014-02-27 10:20:00.603076] I [glusterfsd.c:1670:main] 0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd version 3.3.2
>[2014-02-27 10:20:00.603248] T [glusterfsd.c:212:create_fuse_mount] 0-: mount point not found, not a client process
>[2014-02-27 10:20:00.605260] D [glusterfsd.c:457:get_volfp] 0-glusterfsd: loading volume file /usr/local/etc/glusterfs/glusterd.vol
>[2014-02-27 10:20:00.605477] T [graph.y:195:new_volume] 0-parser: New node for 'management'
>[2014-02-27 10:20:00.605494] T [xlator.c:198:xlator_dynload] 0-xlator: attempt to load file /usr/local/lib/glusterfs/3.3.2/xlator/mgmt/glusterd.so
>[2014-02-27 10:20:00.605919] T [xlator.c:250:xlator_dynload] 0-xlator: dlsym(reconfigure) on /usr/local/lib/glusterfs/3.3.2/xlator/mgmt/glusterd.so: undefined symbol: reconfigure -- neglecting
>[2014-02-27 10:20:00.605935] T [graph.y:226:volume_type] 0-parser: Type:management:mgmt/glusterd
>[2014-02-27 10:20:00.605951] T [graph.y:261:volume_option] 0-parser: Option:management:working-directory:/var/lib/glusterd
>[2014-02-27 10:20:00.605962] T [graph.y:261:volume_option] 0-parser: Option:management:transport-type:socket,rdma
>[2014-02-27 10:20:00.605971] T [graph.y:261:volume_option] 0-parser: Option:management:transport.socket.keepalive-time:10
>[2014-02-27 10:20:00.605979] T [graph.y:261:volume_option] 0-parser: Option:management:transport.socket.keepalive-interval:2
>[2014-02-27 10:20:00.605986] T [graph.y:261:volume_option] 0-parser: Option:management:transport.socket.read-fail-log:off
>[2014-02-27 10:20:00.605993] T [graph.y:261:volume_option] 0-parser: Option:management:transport.address-family:inet6
>[2014-02-27 10:20:00.605998] T [graph.y:333:volume_end] 0-parser: end:management
>[2014-02-27 10:20:00.606076] I [glusterd.c:807:init] 0-management: Using /var/lib/glusterd as working directory
>[2014-02-27 10:20:00.606117] D [glusterd.c:243:glusterd_rpcsvc_options_build] 0-: listen-backlog value: 128
>[2014-02-27 10:20:00.606134] T [rpcsvc.c:1853:rpcsvc_init] 0-rpc-service: rx pool: 64
>[2014-02-27 10:20:00.606232] T [rpcsvc-auth.c:119:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS
>[2014-02-27 10:20:00.606244] T [rpcsvc-auth.c:119:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_GLUSTERFS-v2
>[2014-02-27 10:20:00.606250] T [rpcsvc-auth.c:119:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_UNIX
>[2014-02-27 10:20:00.606257] T [rpcsvc-auth.c:119:rpcsvc_auth_init_auth] 0-rpc-service: Authentication enabled: AUTH_NULL
>[2014-02-27 10:20:00.606263] D [rpcsvc.c:1872:rpcsvc_init] 0-rpc-service: RPC service inited.
>[2014-02-27 10:20:00.606270] D [rpcsvc.c:1636:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0
>[2014-02-27 10:20:00.606291] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/local/lib/glusterfs/3.3.2/rpc-transport/socket.so
>[2014-02-27 10:20:00.606395] T [options.c:77:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.listen-backlog 128'
>[2014-02-27 10:20:00.606427] T [options.c:77:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-interval 2'
>[2014-02-27 10:20:00.606439] T [options.c:77:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-time 10'
>[2014-02-27 10:20:00.606476] T [socket.c:370:__socket_nodelay] 0-management: NODELAY enabled for socket 7
>[2014-02-27 10:20:00.606514] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/local/lib/glusterfs/3.3.2/rpc-transport/rdma.so
>[2014-02-27 10:20:00.606536] E [rpc-transport.c:252:rpc_transport_load] 0-rpc-transport: /usr/local/lib/glusterfs/3.3.2/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
>[2014-02-27 10:20:00.606544] E [rpc-transport.c:256:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
>[2014-02-27 10:20:00.606550] W [rpcsvc.c:1356:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
>[2014-02-27 10:20:00.606562] D [rpcsvc.c:1636:rpcsvc_program_register] 0-rpc-service: New program registered: GlusterD svc peer, Num: 1238437, Ver: 2, Port: 0
>[2014-02-27 10:20:00.606569] D [rpcsvc.c:1636:rpcsvc_program_register] 0-rpc-service: New program registered: GlusterD svc cli, Num: 1238463, Ver: 2, Port: 0
>[2014-02-27 10:20:00.606575] D [rpcsvc.c:1636:rpcsvc_program_register] 0-rpc-service: New program registered: GlusterD svc mgmt, Num: 1238433, Ver: 2, Port: 0
>[2014-02-27 10:20:00.606581] D [rpcsvc.c:1636:rpcsvc_program_register] 0-rpc-service: New program registered: Gluster Portmap, Num: 34123456, Ver: 1, Port: 0
>[2014-02-27 10:20:00.606586] D [rpcsvc.c:1636:rpcsvc_program_register] 0-rpc-service: New program registered: GlusterFS Handshake, Num: 14398633, Ver: 2, Port: 0
>[2014-02-27 10:20:00.606601] D [glusterd-utils.c:4671:glusterd_sm_tr_log_init] 0-: returning 0
>[2014-02-27 10:20:00.606629] D [glusterd-store.c:1308:glusterd_store_handle_new] 0-: Returning 0
>[2014-02-27 10:20:00.606638] D [glusterd-store.c:1326:glusterd_store_handle_retrieve] 0-: Returning 0
>[2014-02-27 10:20:00.606663] D [glusterd-store.c:1203:glusterd_store_retrieve_value] 0-: key UUID read
>[2014-02-27 10:20:00.606671] D [glusterd-store.c:1206:glusterd_store_retrieve_value] 0-: key UUID found
>[2014-02-27 10:20:00.606682] D [glusterd-store.c:1453:glusterd_retrieve_uuid] 0-: Returning 0
>[2014-02-27 10:20:00.606696] I [glusterd.c:95:glusterd_uuid_init] 0-glusterd: retrieved UUID: bcaa7115-8fea-423e-a6ff-c0e3de0aaf3a
>[2014-02-27 10:20:00.657717] D [glusterd.c:298:glusterd_check_gsync_present] 0-glusterd: Returning 0
>[2014-02-27 10:20:00.657781] D [glusterd.c:404:glusterd_crt_georep_folders] 0-: Returning 0
>[2014-02-27 10:20:01.399248] D [glusterd-store.c:2216:glusterd_store_retrieve_volumes] 0-: Returning with 0
>[2014-02-27 10:20:01.399305] D [glusterd-store.c:2564:glusterd_store_retrieve_peers] 0-: Returning with 0
>[2014-02-27 10:20:01.399319] D [glusterd-store.c:2594:glusterd_resolve_all_bricks] 0-: Returning with 0
>[2014-02-27 10:20:01.399328] D [glusterd-store.c:2621:glusterd_restore] 0-: Returning 0
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume management
> 2: type mgmt/glusterd
> 3: option working-directory /var/lib/glusterd
> 4: option transport-type socket,rdma
> 5: option transport.socket.keepalive-time 10
> 6: option transport.socket.keepalive-interval 2
> 7: option transport.socket.read-fail-log off
> 8: option transport.address-family inet6
> 9: end-volume
>
>+------------------------------------------------------------------------------+
>[2014-02-27 10:20:03.870332] T [socket.c:370:__socket_nodelay] 0-management: NODELAY enabled for socket 5
>[2014-02-27 10:20:03.870371] T [socket.c:424:__socket_keepalive] 0-management: Keep-alive enabled for socket 5, interval 2, idle: 10
>[2014-02-27 10:20:03.928368] T [rpcsvc.c:470:rpcsvc_handle_rpc_call] 0-rpcsvc: Client port: 1021
>[2014-02-27 10:20:03.928417] T [rpcsvc-auth.c:305:rpcsvc_auth_request_init] 0-rpc-service: Auth handler: AUTH_GLUSTERFS-v2
>[2014-02-27 10:20:03.928429] T [rpcsvc.c:382:rpcsvc_request_create] 0-rpc-service: received rpc-message (XID: 0x1, Ver: 2, Program: 1238463, ProgVers: 2, Proc: 5) from rpc-transport (socket.management)
>[2014-02-27 10:20:03.928449] T [auth-glusterfs.c:212:auth_glusterfs_v2_authenticate] 0-rpc-service: Auth Info: pid: 0, uid: 0, gid: 0, owner: 00000000
>[2014-02-27 10:20:03.928460] T [rpcsvc.c:211:rpcsvc_program_actor] 0-rpc-service: Actor found: GlusterD svc cli - GET_VOLUME
>[2014-02-27 10:20:03.928471] I [glusterd-handler.c:866:glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
>[2014-02-27 10:20:03.928500] T [rpcsvc.c:1050:rpcsvc_submit_generic] 0-rpc-service: Tx message: 36
>[2014-02-27 10:20:03.928514] T [rpcsvc.c:676:rpcsvc_record_build_header] 0-rpc-service: Reply fraglen 60, payload: 36, rpc hdr: 24
>[2014-02-27 10:20:03.928533] T [rpcsvc.c:1087:rpcsvc_submit_generic] 0-rpc-service: submitted reply for rpc-message (XID: 0x1x, Program: GlusterD svc cli, ProgVers: 2, Proc: 5) to rpc-transport (socket.management)
>[2014-02-27 10:20:03.929410] D [socket.c:184:__socket_rwv] 0-socket.management: EOF from peer 127.0.0.1:1021
>[2014-02-27 10:20:03.929433] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
>[2014-02-27 10:20:03.929474] T [socket.c:2727:fini] 0-socket.management: transport 0x16a0ca0 destroyed
>[2014-02-27 10:20:06.054335] T [socket.c:370:__socket_nodelay] 0-management: NODELAY enabled for socket 5
>[2014-02-27 10:20:06.054377] T [socket.c:424:__socket_keepalive] 0-management: Keep-alive enabled for socket 5, interval 2, idle: 10
>[2014-02-27 10:20:06.107241] T [rpcsvc.c:470:rpcsvc_handle_rpc_call] 0-rpcsvc: Client port: 1020
>[2014-02-27 10:20:06.107278] T [rpcsvc-auth.c:305:rpcsvc_auth_request_init] 0-rpc-service: Auth handler: AUTH_GLUSTERFS-v2
>[2014-02-27 10:20:06.107286] T [rpcsvc.c:382:rpcsvc_request_create] 0-rpc-service: received rpc-message (XID: 0x1, Ver: 2, Program: 1238463, ProgVers: 2, Proc: 1) from rpc-transport (socket.management)
>[2014-02-27 10:20:06.107299] T [auth-glusterfs.c:212:auth_glusterfs_v2_authenticate] 0-rpc-service: Auth Info: pid: 0, uid: 0, gid: 0, owner: 00000000
>[2014-02-27 10:20:06.107307] T [rpcsvc.c:211:rpcsvc_program_actor] 0-rpc-service: Actor found: GlusterD svc cli - CLI_PROBE
>[2014-02-27 10:20:06.107334] I [glusterd-handler.c:685:glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req fec0::d6be:d9ff:fe00:6535 24007
>[2014-02-27 10:20:06.107412] D [glusterd-utils.c:234:glusterd_is_local_addr] 0-management: fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.107434] D [glusterd-utils.c:234:glusterd_is_local_addr] 0-management: fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.107447] D [glusterd-utils.c:234:glusterd_is_local_addr] 0-management: fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.107458] D [glusterd-utils.c:255:glusterd_is_local_addr] 0-management: fec0::d6be:d9ff:fe00:6535 is not local
>[2014-02-27 10:20:06.108373] D [glusterd-utils.c:4164:glusterd_friend_find_by_hostname] 0-management: Unable to find friend: fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.108756] D [glusterd-utils.c:4164:glusterd_friend_find_by_hostname] 0-management: Unable to find friend: fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.108768] I [glusterd-handler.c:428:glusterd_friend_find] 0-glusterd: Unable to find hostname: fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.108775] I [glusterd-handler.c:2245:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: fec0::d6be:d9ff:fe00:6535 (24007)
>[2014-02-27 10:20:06.108788] D [glusterd-utils.c:4671:glusterd_sm_tr_log_init] 0-: returning 0
>[2014-02-27 10:20:06.108795] D [glusterd-utils.c:4760:glusterd_peerinfo_new] 0-: returning 0
>[2014-02-27 10:20:06.108816] D [glusterd-handler.c:2158:glusterd_transport_inet_options_build] 0-glusterd: Returning 0
>[2014-02-27 10:20:06.108830] D [glusterd-store.c:2297:glusterd_store_create_peer_dir] 0-: Returning with 0
>[2014-02-27 10:20:06.108863] D [glusterd-store.c:1308:glusterd_store_handle_new] 0-: Returning 0
>[2014-02-27 10:20:06.108909] D [glusterd-store.c:1263:glusterd_store_save_value] 0-: returning: 0
>[2014-02-27 10:20:06.108924] D [glusterd-store.c:1263:glusterd_store_save_value] 0-: returning: 0
>[2014-02-27 10:20:06.108936] D [glusterd-store.c:1263:glusterd_store_save_value] 0-: returning: 0
>[2014-02-27 10:20:06.108941] D [glusterd-store.c:2411:glusterd_store_peer_write] 0-: Returning with 0
>[2014-02-27 10:20:06.109127] D [glusterd-store.c:2439:glusterd_store_perform_peer_store] 0-: Returning 0
>[2014-02-27 10:20:06.109143] D [glusterd-store.c:2460:glusterd_store_peerinfo] 0-: Returning with 0
>[2014-02-27 10:20:06.109173] I [rpc-clnt.c:965:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
>[2014-02-27 10:20:06.109183] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/local/lib/glusterfs/3.3.2/rpc-transport/socket.so
>[2014-02-27 10:20:06.109206] T [options.c:77:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-time 10'
>[2014-02-27 10:20:06.109221] T [options.c:77:xlator_option_validate_int] 0-management: no range check required for 'option transport.socket.keepalive-interval 2'
>[2014-02-27 10:20:06.109231] T [options.c:77:xlator_option_validate_int] 0-management: no range check required for 'option remote-port 24007'
>[2014-02-27 10:20:06.109249] T [rpc-clnt.c:426:rpc_clnt_reconnect] 0-management: attempting reconnect
>[2014-02-27 10:20:06.109260] T [common-utils.c:111:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.109296] E [common-utils.c:125:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Address family for hostname not supported)
>[2014-02-27 10:20:06.109305] E [name.c:243:af_inet_client_get_remote_sockaddr] 0-management: DNS resolution failed on host fec0::d6be:d9ff:fe00:6535
>[2014-02-27 10:20:06.109344] D [glusterd-handler.c:2910:glusterd_peer_rpc_notify] 0-management: got RPC_CLNT_DISCONNECT 0
>[2014-02-27 10:20:06.109363] D [glusterd-op-sm.c:4539:glusterd_op_sm_inject_event] 0-glusterd: Enqueue event: 'GD_OP_EVENT_LOCAL_UNLOCK_NO_RESP'
>[2014-02-27 10:20:06.109378] T [rpcsvc.c:1050:rpcsvc_submit_generic] 0-rpc-service: Tx message: 48
>[2014-02-27 10:20:06.109387] T [rpcsvc.c:676:rpcsvc_record_build_header] 0-rpc-service: Reply fraglen 72, payload: 48, rpc hdr: 24
>[2014-02-27 10:20:06.109415] T [rpcsvc.c:1087:rpcsvc_submit_generic] 0-rpc-service: submitted reply for rpc-message (XID: 0x1x, Program: GlusterD svc cli, ProgVers: 2, Proc: 1) to rpc-transport (socket.management)
>[2014-02-27 10:20:06.109427] I [glusterd-handler.c:2423:glusterd_xfer_cli_probe_resp] 0-glusterd: Responded to CLI, ret: 0
>[2014-02-27 10:20:06.109433] D [glusterd-sm.c:949:glusterd_friend_sm_inject_event] 0-glusterd: Enqueue event: 'GD_FRIEND_EVENT_REMOVE_FRIEND'
>[2014-02-27 10:20:06.109439] D [glusterd-sm.c:1004:glusterd_friend_sm] 0-: Dequeued event of type: 'GD_FRIEND_EVENT_REMOVE_FRIEND'
>[2014-02-27 10:20:06.109452] D [glusterd-utils.c:5331:glusterd_friend_remove_cleanup_vols] 0-management: Returning 0
>[2014-02-27 10:20:06.109463] T [rpc-clnt.c:532:rpc_clnt_connection_cleanup] 0-management: cleaning up state in transport object 0x16ae4f0
>[2014-02-27 10:20:06.109475] T [socket.c:2727:fini] 0-management: transport 0x16ae4f0 destroyed
>[2014-02-27 10:20:06.109484] I [mem-pool.c:576:mem_pool_destroy] 0-management: size=2236 max=0 total=0
>[2014-02-27 10:20:06.109490] I [mem-pool.c:576:mem_pool_destroy] 0-management: size=124 max=0 total=0
>[2014-02-27 10:20:06.109528] D [glusterd-store.c:1347:glusterd_store_handle_destroy] 0-: Returning 0
>[2014-02-27 10:20:06.109536] D [glusterd-store.c:2272:glusterd_store_delete_peerinfo] 0-: Returning with 0
>[2014-02-27 10:20:06.109543] D [glusterd-op-sm.c:4611:glusterd_op_sm] 0-: Dequeued event of type: 'GD_OP_EVENT_LOCAL_UNLOCK_NO_RESP'
>[2014-02-27 10:20:06.109549] D [glusterd-op-sm.c:1600:glusterd_op_ac_none] 0-: Returning with 0
>[2014-02-27 10:20:06.109555] D [glusterd-utils.c:4717:glusterd_sm_tr_log_transition_add] 0-glusterd: Transitioning from 'Default' to 'Default' due to event 'GD_OP_EVENT_LOCAL_UNLOCK_NO_RESP'
>[2014-02-27 10:20:06.109561] D [glusterd-utils.c:4719:glusterd_sm_tr_log_transition_add] 0-: returning 0
>[2014-02-27 10:20:06.109566] D [glusterd-handler.c:2097:glusterd_rpc_create] 0-: returning 0
>[2014-02-27 10:20:06.109572] I [glusterd-handler.c:2227:glusterd_friend_add] 0-management: connect returned 0
>[2014-02-27 10:20:06.109576] D [glusterd-handler.c:2276:glusterd_probe_begin] 0-: returning 100
>[2014-02-27 10:20:06.110673] D [socket.c:184:__socket_rwv] 0-socket.management: EOF from peer 127.0.0.1:1020
>[2014-02-27 10:20:06.110690] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
>[2014-02-27 10:20:06.110725] T [socket.c:2727:fini] 0-socket.management: transport 0x16a0ca0 destroyed
>root@appvm2-VirtualBox:~# cat /usr/local/etc/glusterfs/glusterd.vol
>volume management
> type mgmt/glusterd
> option working-directory /var/lib/glusterd
> option transport-type socket,rdma
> option transport.socket.keepalive-time 10
> option transport.socket.keepalive-interval 2
> option transport.socket.read-fail-log off
> option transport.address-family inet6
>end-volume
>
>root@appvm2-VirtualBox:~# cat /etc/glusterfs/glusterd.info
>UUID=7a24a146-6c1b-431b-a6d6-8ceabf6dcaf2
>
>root@appvm2-VirtualBox:~# netstat -anp | grep gluster
>tcp6 0 0 :::24007 :::* LISTEN 23135/glusterd
>
>root@appvm2-VirtualBox:~# ifconfig eth0
>eth0 Link encap:Ethernet HWaddr 08:00:27:60:ae:51
> inet addr:100.96.3.173 Bcast:100.96.3.255 Mask:255.255.255.0
> inet6 addr: fec0::a00:27ff:fe60:ae51/64 Scope:Site
> inet6 addr: fe80::a00:27ff:fe60:ae51/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:40102 errors:0 dropped:0 overruns:0 frame:0
> TX packets:9396 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:5582051 (5.5 MB) TX bytes:4763289 (4.7 MB)
>
>root@appvm2-VirtualBox:~# ping6 fec0::d6be:d9ff:fe00:6535
>PING fec0::d6be:d9ff:fe00:6535(fec0::d6be:d9ff:fe00:6535) 56 data bytes
>64 bytes from fec0::d6be:d9ff:fe00:6535: icmp_seq=1 ttl=64 time=0.385 ms
>^C
>--- fec0::d6be:d9ff:fe00:6535 ping statistics ---
>1 packets transmitted, 1 received, 0% packet loss, time 0ms
>rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms
>root@appvm2-VirtualBox:~#
>
>#####################################################################################################################
>Patch generated using git diff -p on 3.3.2
>#####################################################################################################################
>
>diff --git a/cli/src/cli-cmd-volume.c b/cli/src/cli-cmd-volume.c
>index bad9351..7292940 100644
>--- a/cli/src/cli-cmd-volume.c
>+++ b/cli/src/cli-cmd-volume.c
>@@ -256,6 +256,7 @@ cli_cmd_check_brick_order (struct cli_state *state, const char *bricks,
> brick_list = tmpptr;
> if (brick == NULL)
> goto check_failed;
>+ //TODO
> brick = strtok_r (brick, ":", &tmpptr);
> if (brick == NULL)
> goto check_failed;
>diff --git a/doc/glusterd.vol b/doc/glusterd.vol
>index de17d8f..1ff5bbd 100644
>--- a/doc/glusterd.vol
>+++ b/doc/glusterd.vol
>@@ -5,4 +5,5 @@ volume management
> option transport.socket.keepalive-time 10
> option transport.socket.keepalive-interval 2
> option transport.socket.read-fail-log off
>+ option transport.address-family inet6
> end-volume
>diff --git a/libglusterfs/src/common-utils.c b/libglusterfs/src/common-utils.c
>index dbcee77..255b394 100644
>--- a/libglusterfs/src/common-utils.c
>+++ b/libglusterfs/src/common-utils.c
>@@ -113,7 +113,7 @@ gf_resolve_ip6 (const char *hostname,
> memset(&hints, 0, sizeof(hints));
> hints.ai_family = family;
> hints.ai_socktype = SOCK_STREAM;
>- hints.ai_flags = AI_ADDRCONFIG;
>+ hints.ai_flags = AI_ALL;
>
> ret = gf_asprintf (&port_str, "%d", port);
> if (-1 == ret) {
>@@ -1694,6 +1694,8 @@ valid_ipv6_address (char *address, int length, gf_boolean_t wildcard_acc)
>
> tmp = gf_strdup (address);
>
>+ goto out;
>+
> /* Check for compressed form */
> if (tmp[length - 1] == ':') {
> ret = 0;
>diff --git a/rpc/rpc-lib/src/rpc-transport.c b/rpc/rpc-lib/src/rpc-transport.c
>index 8da898b..9f97815 100644
>--- a/rpc/rpc-lib/src/rpc-transport.c
>+++ b/rpc/rpc-lib/src/rpc-transport.c
>@@ -585,7 +585,7 @@ rpc_transport_inet_options_build (dict_t **options, const char *hostname,
> "failed to set remote-port with %d", port);
> goto out;
> }
>- ret = dict_set_str (dict, "transport.address-family", "inet");
>+ ret = dict_set_str (dict, "transport.address-family", "inet6");
> if (ret) {
> gf_log (THIS->name, GF_LOG_WARNING,
> "failed to set addr-family with inet");
>diff --git a/rpc/rpc-transport/socket/src/name.c b/rpc/rpc-transport/socket/src/name.c
>index d37c83e..8f139de 100644
>--- a/rpc/rpc-transport/socket/src/name.c
>+++ b/rpc/rpc-transport/socket/src/name.c
>@@ -147,7 +147,7 @@ client_fill_address_family (rpc_transport_t *this, sa_family_t *sa_family)
> gf_log (this->name, GF_LOG_DEBUG,
> "address-family not specified, guessing it "
> "to be inet from (remote-host: %s)", data_to_str (remote_host_data));
>- *sa_family = AF_INET;
>+ *sa_family = AF_INET6;
> } else {
> gf_log (this->name, GF_LOG_DEBUG,
> "address-family not specified, guessing it "
>@@ -160,7 +160,7 @@ client_fill_address_family (rpc_transport_t *this, sa_family_t *sa_family)
> if (!strcasecmp (address_family, "unix")) {
> *sa_family = AF_UNIX;
> } else if (!strcasecmp (address_family, "inet")) {
>- *sa_family = AF_INET;
>+ *sa_family = AF_UNSPEC;
> } else if (!strcasecmp (address_family, "inet6")) {
> *sa_family = AF_INET6;
> } else if (!strcasecmp (address_family, "inet-sdp")) {
>diff --git a/xlators/mgmt/glusterd/src/glusterd-replace-brick.c b/xlators/mgmt/glusterd/src/glusterd-replace-brick.c
>index 0671969..9362d7e 100644
>--- a/xlators/mgmt/glusterd/src/glusterd-replace-brick.c
>+++ b/xlators/mgmt/glusterd/src/glusterd-replace-brick.c
>@@ -201,6 +201,7 @@ glusterd_op_stage_replace_brick (dict_t *dict, char **op_errstr,
> char *path = NULL;
> char msg[2048] = {0};
> char *dup_dstbrick = NULL;
>+ char *dup_dstbrick2 = NULL;
> glusterd_peerinfo_t *peerinfo = NULL;
> glusterd_brickinfo_t *dst_brickinfo = NULL;
> gf_boolean_t is_run = _gf_false;
>@@ -418,13 +419,26 @@ glusterd_op_stage_replace_brick (dict_t *dict, char **op_errstr,
> }
>
> dup_dstbrick = gf_strdup (dst_brick);
>+ dup_dstbrick2 = gf_strdup (dst_brick);
> if (!dup_dstbrick) {
> ret = -1;
> gf_log ("", GF_LOG_ERROR, "Memory allocation failed");
> goto out;
> }
>- host = strtok_r (dup_dstbrick, ":", &savetok);
>+
>+ //TODO
>+ host = strtok_r (dup_dstbrick, "/", &savetok);
>+ path = strtok_r (NULL, ":", &savetok);
>+ path--;
>+ path--;
>+ path[0] = '\0';
>+ path = strtok_r (dup_dstbrick2, "/", &savetok);
> path = strtok_r (NULL, ":", &savetok);
>+ path--;
>+ path[0] = '/';
>+ gf_log ("", GF_LOG_ERROR,
>+ "dst %s %s TODO",
>+ host?host:"none", path?path:"none");
>
> if (!host || !path) {
> gf_log ("", GF_LOG_ERROR,
>@@ -502,6 +516,8 @@ glusterd_op_stage_replace_brick (dict_t *dict, char **op_errstr,
> out:
> if (dup_dstbrick)
> GF_FREE (dup_dstbrick);
>+ if (dup_dstbrick2)
>+ GF_FREE (dup_dstbrick2);
> gf_log ("", GF_LOG_DEBUG, "Returning %d", ret);
>
> return ret;
>diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
>index eada07c..6ad036a 100644
>--- a/xlators/mgmt/glusterd/src/glusterd-utils.c
>+++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
>@@ -4139,6 +4139,7 @@ glusterd_friend_find_by_hostname (const char *hoststr,
> default: ret = -1;
> goto out;
> }
>+ //TODO
> host = inet_ntoa(*in_addr);
>
> ret = getnameinfo (p->ai_addr, p->ai_addrlen, hname,
>diff --git a/xlators/mgmt/glusterd/src/glusterd-volgen.c b/xlators/mgmt/glusterd/src/glusterd-volgen.c
>index d6bb77e..2972e53 100644
>--- a/xlators/mgmt/glusterd/src/glusterd-volgen.c
>+++ b/xlators/mgmt/glusterd/src/glusterd-volgen.c
>@@ -1691,6 +1691,12 @@ server_graph_builder (volgen_graph_t *graph, glusterd_volinfo_t *volinfo,
> if (ret)
> return -1;
>
>+ ret = xlator_set_option (rbxl, "transport.address-family",
>+ "inet6");
>+
>+ if (ret)
>+ return -1;
>+
> xl = volgen_graph_add_nolink (graph, "cluster/pump", "%s-pump",
> volname);
> if (!xl)
>@@ -1752,9 +1758,18 @@ server_graph_builder (volgen_graph_t *graph, glusterd_volinfo_t *volinfo,
> if (!xl)
> return -1;
> ret = xlator_set_option (xl, "transport-type", transt);
>+
> if (ret)
> return -1;
>
>+
>+ ret = xlator_set_option (xl, "transport.address-family",
>+ "inet6");
>+
>+ if (ret)
>+ return -1;
>+
>+
> if (username) {
> memset (key, 0, sizeof (key));
> snprintf (key, sizeof (key), "auth.login.%s.allow", path);
>@@ -2181,6 +2196,11 @@ volgen_graph_build_clients (volgen_graph_t *graph, glusterd_volinfo_t *volinfo,
> ret = xlator_set_option (xl, "transport-type", transt);
> if (ret)
> goto out;
>+
>+ ret = xlator_set_option (xl, "transport.address-family",
>+ "inet6");
>+ if (ret)
>+ return -1;
>
> ret = dict_get_uint32 (set_dict, "trusted-client",
> &client_type);
>diff --git a/xlators/protocol/client/src/client.c b/xlators/protocol/client/src/client.c
>index 65df70f..1fcf84d 100644
>--- a/xlators/protocol/client/src/client.c
>+++ b/xlators/protocol/client/src/client.c
>@@ -1221,6 +1221,7 @@ client_set_remote_options (char *value, xlator_t *this)
> int remote_port = 0;
> gf_boolean_t ret = _gf_false;
>
>+ //TODO
> dup_value = gf_strdup (value);
> host = strtok_r (dup_value, ":", &tmp);
> subvol = strtok_r (NULL, ":", &tmp);
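The log's "getaddrinfo failed (Address family for hostname not supported)" matches the two resolver problems the patch targets in gf_resolve_ip6 and client_fill_address_family: the client guessed AF_INET when no address family was configured, and passed AI_ADDRCONFIG to getaddrinfo(). The sketch below (not GlusterFS code; the helper name `resolve` and the use of `::1` in place of the site-local fec0:: address are illustrative assumptions) shows how pinning the family to AF_INET makes an IPv6 literal unresolvable, while AF_INET6 (or AF_UNSPEC) succeeds:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Resolve host:24007 with a fixed address family; returns getaddrinfo()'s
 * status: 0 on success, a nonzero EAI_* code on failure. */
static int
resolve (const char *host, int family)
{
        struct addrinfo hints, *res = NULL;
        int             ret;

        memset (&hints, 0, sizeof (hints));
        hints.ai_family   = family;        /* AF_INET in the old code path */
        hints.ai_socktype = SOCK_STREAM;

        ret = getaddrinfo (host, "24007", &hints, &res);
        if (res)
                freeaddrinfo (res);
        return ret;
}

int
main (void)
{
        /* ::1 stands in for the fec0:: peer address from the trace. */
        printf ("AF_INET : %s\n", resolve ("::1", AF_INET)  ? "fails" : "ok");
        printf ("AF_INET6: %s\n", resolve ("::1", AF_INET6) ? "fails" : "ok");
        return 0;
}
```

The patch's other resolver change, replacing AI_ADDRCONFIG with AI_ALL, addresses a related case: with AI_ADDRCONFIG set, getaddrinfo() suppresses IPv6 results unless the local host already has a configured IPv6 address, which can hide the peer even when the family is right.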