Red Hat Bugzilla – Attachment 618437 Details for Bug 861314 – [RHEV-RHS]: split-brains while self-healing the Virtual Machines
Description: glustershd log.
Filename: glustershd.log
MIME Type: text/x-log
Creator: spandura
Created: 2012-09-28 06:55:20 UTC
Size: 315.00 KB
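What follows is the raw glustershd (GlusterFS self-heal daemon) log. When scanning such a log for the failures behind a split-brain report, it helps to filter on the severity letter that follows the timestamp (I = info, W = warning, E = error). A minimal sketch; the file path is hypothetical and the two sample entries are copied verbatim from the log below:

```shell
# Hypothetical local copy of the attachment; the two entries are taken
# verbatim from the glustershd log below.
cat > /tmp/glustershd-sample.log <<'EOF'
[2012-09-25 05:12:07.055923] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-replicate-0: Stopping crawl as < 2 children are up
[2012-09-25 05:22:07.131839] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-replicate-0: no active sinks for performing self-heal on file <gfid:9d7d0c4f-8b95-430f-accb-ca4983ec5722>
EOF
# Count only error-level (" E ") entries; the pattern anchors on the
# bracketed timestamp at the start of each log line.
grep -c '^\[[^]]*\] E ' /tmp/glustershd-sample.log
```

Against the full attachment, the same pattern isolates the "All subvolumes are down" and self-heal crawl failures quoted below.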
[2012-09-25 05:12:00.014882] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
[2012-09-25 05:12:00.024625] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-replicate-1: adding option 'node-uuid' for volume 'dist-rep-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 05:12:00.024656] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-replicate-0: adding option 'node-uuid' for volume 'dist-rep-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 05:12:00.032333] I [client.c:2142:notify] 0-dist-rep-client-0: parent translators are ready, attempting connect on transport
[2012-09-25 05:12:00.038528] I [client.c:2142:notify] 0-dist-rep-client-1: parent translators are ready, attempting connect on transport
[2012-09-25 05:12:00.044439] I [client.c:2142:notify] 0-dist-rep-client-2: parent translators are ready, attempting connect on transport
[2012-09-25 05:12:00.048484] I [client.c:2142:notify] 0-dist-rep-client-3: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
 1: volume dist-rep-client-0
 2: type protocol/client
 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
 4: option remote-subvolume /disk1/brick1
 5: option transport-type tcp
 6: option username 0f612b43-1857-4e77-85af-8c7ecbe41f09
 7: option password b4264da3-b07d-4cad-a6f7-c819a3790ef7
 8: end-volume
 9:
 10: volume dist-rep-client-1
 11: type protocol/client
 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
 13: option remote-subvolume /disk1/brick1
 14: option transport-type tcp
 15: option username 0f612b43-1857-4e77-85af-8c7ecbe41f09
 16: option password b4264da3-b07d-4cad-a6f7-c819a3790ef7
 17: end-volume
 18:
 19: volume dist-rep-client-2
 20: type protocol/client
 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
 22: option remote-subvolume /disk1/brick1
 23: option transport-type tcp
 24: option username 0f612b43-1857-4e77-85af-8c7ecbe41f09
 25: option password b4264da3-b07d-4cad-a6f7-c819a3790ef7
 26: end-volume
 27:
 28: volume dist-rep-client-3
 29: type protocol/client
 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
 31: option remote-subvolume /disk1/brick1
 32: option transport-type tcp
 33: option username 0f612b43-1857-4e77-85af-8c7ecbe41f09
 34: option password b4264da3-b07d-4cad-a6f7-c819a3790ef7
 35: end-volume
 36:
 37: volume dist-rep-replicate-0
 38: type cluster/replicate
 39: option background-self-heal-count 0
 40: option metadata-self-heal on
 41: option data-self-heal on
 42: option entry-self-heal on
 43: option self-heal-daemon on
 44: option iam-self-heal-daemon yes
 45: subvolumes dist-rep-client-0 dist-rep-client-1
 46: end-volume
 47:
 48: volume dist-rep-replicate-1
 49: type cluster/replicate
 50: option background-self-heal-count 0
 51: option metadata-self-heal on
 52: option data-self-heal on
 53: option entry-self-heal on
 54: option self-heal-daemon on
 55: option iam-self-heal-daemon yes
 56: subvolumes dist-rep-client-2 dist-rep-client-3
 57: end-volume
 58:
 59: volume glustershd
 60: type debug/io-stats
 61: subvolumes dist-rep-replicate-0 dist-rep-replicate-1
 62: end-volume

+------------------------------------------------------------------------------+
[2012-09-25 05:12:00.055527] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-client-0: failed to get the port number for remote subvolume
[2012-09-25 05:12:00.055642] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-client-1: failed to get the port number for remote subvolume
[2012-09-25 05:12:00.055772] I [client.c:2090:client_rpc_notify] 0-dist-rep-client-0: disconnected
[2012-09-25 05:12:00.055837] I [client.c:2090:client_rpc_notify] 0-dist-rep-client-1: disconnected
[2012-09-25 05:12:00.055863] E [afr-common.c:3668:afr_notify] 0-dist-rep-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2012-09-25 05:12:00.055918] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-client-2: failed to get the port number for remote subvolume
[2012-09-25 05:12:00.055994] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-client-3: changing port to 24009 (from 0)
[2012-09-25 05:12:00.056092] I [client.c:2090:client_rpc_notify] 0-dist-rep-client-2: disconnected
[2012-09-25 05:12:04.004006] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
[2012-09-25 05:12:04.004070] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2012-09-25 05:12:04.028535] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-client-0: changing port to 24009 (from 0)
[2012-09-25 05:12:04.032267] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-client-1: changing port to 24009 (from 0)
[2012-09-25 05:12:04.036572] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-client-2: changing port to 24009 (from 0)
[2012-09-25 05:12:04.041824] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 05:12:04.042316] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-client-3: Connected to 10.70.36.33:24009, attached to remote volume '/disk1/brick1'.
[2012-09-25 05:12:04.042344] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-client-3: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 05:12:04.042428] I [afr-common.c:3631:afr_notify] 0-dist-rep-replicate-1: Subvolume 'dist-rep-client-3' came back up; going online.
[2012-09-25 05:12:04.042622] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-client-3: Server lk version = 1
[2012-09-25 05:12:07.054871] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 05:12:07.055176] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-client-1: Connected to 10.70.36.31:24009, attached to remote volume '/disk1/brick1'.
[2012-09-25 05:12:07.055202] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 05:12:07.055266] I [afr-common.c:3631:afr_notify] 0-dist-rep-replicate-0: Subvolume 'dist-rep-client-1' came back up; going online.
[2012-09-25 05:12:07.055400] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-client-1: Server lk version = 1
[2012-09-25 05:12:07.055923] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-replicate-0: Stopping crawl as < 2 children are up
[2012-09-25 05:12:07.056092] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 05:12:07.056477] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-client-0: Connected to 10.70.36.30:24009, attached to remote volume '/disk1/brick1'.
[2012-09-25 05:12:07.056509] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 05:12:07.058380] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-client-0: Server lk version = 1
[2012-09-25 05:12:07.059927] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 05:12:07.060322] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-client-2: Connected to 10.70.36.32:24009, attached to remote volume '/disk1/brick1'.
[2012-09-25 05:12:07.060348] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 05:12:07.060934] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-client-2: Server lk version = 1
[2012-09-25 05:14:24.502065] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 05:14:25.536626] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 05:14:25.542675] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 05:22:07.131839] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-replicate-0: no active sinks for performing self-heal on file <gfid:9d7d0c4f-8b95-430f-accb-ca4983ec5722>
[2012-09-25 05:32:07.205724] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-replicate-0: no active sinks for performing self-heal on file <gfid:9d7d0c4f-8b95-430f-accb-ca4983ec5722>
[2012-09-25 05:52:07.351914] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-replicate-0: no active sinks for performing self-heal on file <gfid:340dea95-316d-45d2-b28c-79a70c0e9eec>
[2012-09-25 07:00:28.459654] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24009)
[2012-09-25 07:00:28.459769] I [client.c:2090:client_rpc_notify] 0-dist-rep-client-3: disconnected
[2012-09-25 07:00:30.515012] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24009)
[2012-09-25 07:00:30.515072] I [client.c:2090:client_rpc_notify] 0-dist-rep-client-0: disconnected
[2012-09-25 07:00:30.521987] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24009)
[2012-09-25 07:00:30.522086] I [client.c:2090:client_rpc_notify] 0-dist-rep-client-2: disconnected
[2012-09-25 07:00:30.522120] E [afr-common.c:3668:afr_notify] 0-dist-rep-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
[2012-09-25 07:00:30.522272] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24009)
[2012-09-25 07:00:30.522372] I [client.c:2090:client_rpc_notify] 0-dist-rep-client-1: disconnected
[2012-09-25 07:00:30.522444] E [afr-common.c:3668:afr_notify] 0-dist-rep-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2012-09-25 07:00:31.570324] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
[2012-09-25 07:17:11.883450] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
[2012-09-25 07:17:11.894521] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 07:17:11.894552] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 07:17:11.902259] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
[2012-09-25 07:17:11.906772] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
[2012-09-25 07:17:11.910776] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
[2012-09-25 07:17:11.914840] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
 1: volume dist-rep-rhevh-client-0
 2: type protocol/client
 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
 4: option remote-subvolume /disk1
 5: option transport-type tcp
 6: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 7: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 8: end-volume
 9:
 10: volume dist-rep-rhevh-client-1
 11: type protocol/client
 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
 13: option remote-subvolume /disk1
 14: option transport-type tcp
 15: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 16: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 17: end-volume
 18:
 19: volume dist-rep-rhevh-client-2
 20: type protocol/client
 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
 22: option remote-subvolume /disk1
 23: option transport-type tcp
 24: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 25: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 26: end-volume
 27:
 28: volume dist-rep-rhevh-client-3
 29: type protocol/client
 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
 31: option remote-subvolume /disk1
 32: option transport-type tcp
 33: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 34: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 35: end-volume
 36:
 37: volume dist-rep-rhevh-replicate-0
 38: type cluster/replicate
 39: option background-self-heal-count 0
 40: option metadata-self-heal on
 41: option data-self-heal on
 42: option entry-self-heal on
 43: option self-heal-daemon on
 44: option iam-self-heal-daemon yes
 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
 46: end-volume
 47:
 48: volume dist-rep-rhevh-replicate-1
 49: type cluster/replicate
 50: option background-self-heal-count 0
 51: option metadata-self-heal on
 52: option data-self-heal on
 53: option entry-self-heal on
 54: option self-heal-daemon on
 55: option iam-self-heal-daemon yes
 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
 57: end-volume
 58:
 59: volume glustershd
 60: type debug/io-stats
 61: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
 62: end-volume

+------------------------------------------------------------------------------+
[2012-09-25 07:17:11.919225] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-rhevh-client-1: failed to get the port number for remote subvolume
[2012-09-25 07:17:11.919307] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected
[2012-09-25 07:17:11.919395] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24010 (from 0)
[2012-09-25 07:17:11.919465] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-rhevh-client-2: failed to get the port number for remote subvolume
[2012-09-25 07:17:11.919508] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected
[2012-09-25 07:17:11.919579] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24010 (from 0)
[2012-09-25 07:17:15.339190] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
[2012-09-25 07:17:15.339248] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2012-09-25 07:17:15.897288] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24010 (from 0)
[2012-09-25 07:17:15.903371] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:17:15.903750] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24010, attached to remote volume '/disk1'.
[2012-09-25 07:17:15.903777] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:17:15.903860] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-0' came back up; going online.
[2012-09-25 07:17:15.903993] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
[2012-09-25 07:17:15.908553] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24010 (from 0)
[2012-09-25 07:17:15.912292] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:17:15.912673] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24010, attached to remote volume '/disk1'.
[2012-09-25 07:17:15.912697] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:17:15.912747] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-3' came back up; going online.
[2012-09-25 07:17:15.913320] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
[2012-09-25 07:17:18.920591] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:17:18.920912] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24010, attached to remote volume '/disk1'.
[2012-09-25 07:17:18.920948] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:17:18.921579] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
[2012-09-25 07:17:18.925054] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:17:18.925420] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24010, attached to remote volume '/disk1'.
[2012-09-25 07:17:18.925457] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:17:18.926054] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
[2012-09-25 07:17:45.400474] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:17:46.425079] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:17:46.428155] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 07:27:19.001182] I [afr-self-heal-entry.c:2333:afr_sh_entry_fix] 0-dist-rep-rhevh-replicate-0: <gfid:baf114e7-1fc3-45ac-add4-1ba3e03fc76f>: Performing conservative merge
[2012-09-25 07:27:19.002683] E [afr-self-heal-common.c:1399:afr_sh_common_lookup_cbk] 0-dist-rep-rhevh-replicate-0: Failed to lookup <gfid:baf114e7-1fc3-45ac-add4-1ba3e03fc76f>/dom_md, reason No such file or directory
[2012-09-25 07:27:19.003927] E [afr-self-heal-common.c:1399:afr_sh_common_lookup_cbk] 0-dist-rep-rhevh-replicate-0: Failed to lookup <gfid:baf114e7-1fc3-45ac-add4-1ba3e03fc76f>/dom_md, reason No such file or directory
[2012-09-25 07:28:58.839803] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24010)
[2012-09-25 07:28:58.839886] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected
[2012-09-25 07:29:00.907857] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24010)
[2012-09-25 07:29:00.908106] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected
[2012-09-25 07:29:00.909215] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24010)
[2012-09-25 07:29:00.909305] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected
[2012-09-25 07:29:00.909334] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2012-09-25 07:29:00.916637] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24010)
[2012-09-25 07:29:00.916719] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
[2012-09-25 07:29:00.916744] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
[2012-09-25 07:29:01.969675] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
[2012-09-25 07:29:06.667032] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
[2012-09-25 07:29:06.676766] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 07:29:06.676797] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 07:29:06.684177] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
[2012-09-25 07:29:06.688474] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
[2012-09-25 07:29:06.692528] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
[2012-09-25 07:29:06.696461] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
 1: volume dist-rep-rhevh-client-0
 2: type protocol/client
 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
 4: option remote-subvolume /disk1
 5: option transport-type tcp
 6: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 7: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 8: end-volume
 9:
 10: volume dist-rep-rhevh-client-1
 11: type protocol/client
 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
 13: option remote-subvolume /disk1
 14: option transport-type tcp
 15: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 16: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 17: end-volume
 18:
 19: volume dist-rep-rhevh-client-2
 20: type protocol/client
 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
 22: option remote-subvolume /disk1
 23: option transport-type tcp
 24: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 25: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 26: end-volume
 27:
 28: volume dist-rep-rhevh-client-3
 29: type protocol/client
 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
 31: option remote-subvolume /disk1
 32: option transport-type tcp
 33: option username 5dc2dc74-0763-4de3-a286-1d2597c03d09
 34: option password d56b13da-b4a3-4637-8a46-21f2241659b0
 35: end-volume
 36:
 37: volume dist-rep-rhevh-replicate-0
 38: type cluster/replicate
 39: option background-self-heal-count 0
 40: option metadata-self-heal on
 41: option data-self-heal on
 42: option entry-self-heal on
 43: option self-heal-daemon on
 44: option eager-lock enable
 45: option iam-self-heal-daemon yes
 46: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
 47: end-volume
 48:
 49: volume dist-rep-rhevh-replicate-1
 50: type cluster/replicate
 51: option background-self-heal-count 0
 52: option metadata-self-heal on
 53: option data-self-heal on
 54: option entry-self-heal on
 55: option self-heal-daemon on
 56: option eager-lock enable
 57: option iam-self-heal-daemon yes
 58: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
 59: end-volume
 60:
 61: volume glustershd
 62: type debug/io-stats
 63: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
 64: end-volume

+------------------------------------------------------------------------------+
[2012-09-25 07:29:06.700957] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24010 (from 0)
[2012-09-25 07:29:06.701046] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24010 (from 0)
[2012-09-25 07:29:06.701127] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24010 (from 0)
[2012-09-25 07:29:06.701170] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24010 (from 0)
[2012-09-25 07:29:10.448785] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
[2012-09-25 07:29:10.448837] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2012-09-25 07:29:10.680353] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:29:10.680645] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24010, attached to remote volume '/disk1'.
[2012-09-25 07:29:10.680675] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:29:10.680753] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
[2012-09-25 07:29:10.680863] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
[2012-09-25 07:29:10.684437] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:29:10.684787] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24010, attached to remote volume '/disk1'.
[2012-09-25 07:29:10.684820] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:29:10.685435] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
[2012-09-25 07:29:10.689352] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:29:10.689734] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24010, attached to remote volume '/disk1'.
[2012-09-25 07:29:10.689760] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:29:10.689812] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
[2012-09-25 07:29:10.690292] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
[2012-09-25 07:29:10.693580] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
[2012-09-25 07:29:10.693913] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24010, attached to remote volume '/disk1'.
[2012-09-25 07:29:10.693938] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
[2012-09-25 07:29:10.694528] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
[2012-09-25 07:37:59.341801] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:37:59.349177] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:37:59.349469] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 07:37:59.349804] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 07:38:01.568460] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:38:01.575752] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:38:01.576044] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 07:38:01.576372] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 07:41:00.115478] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:41:00.122059] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
[2012-09-25 07:41:00.122341] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 07:41:00.122695] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2012-09-25 07:41:59.939486] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24010)
[2012-09-25 07:41:59.939540] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
[2012-09-25 07:42:02.002403] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24010)
[2012-09-25 07:42:02.002489] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected
[2012-09-25 07:42:02.003081] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24010)
[2012-09-25 07:42:02.003149] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected
[2012-09-25 07:42:02.003185] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
[2012-09-25 07:42:02.003252] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24010)
[2012-09-25 07:42:02.003369] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected
[2012-09-25 07:42:02.003398] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2012-09-25 07:42:03.055752] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
[2012-09-25 07:48:22.854022] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
[2012-09-25 07:48:22.863683] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 07:48:22.863714] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
[2012-09-25 07:48:22.871385] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
[2012-09-25 07:48:22.875798] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
[2012-09-25 07:48:22.880109] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
[2012-09-25 07:48:22.884238] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
 1: volume dist-rep-rhevh-client-0
 2: type protocol/client
 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
 4: option remote-subvolume /disk1
 5: option transport-type tcp
 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
 7: option password 1233f788-e862-447a-9353-3a50d84656ca
 8: end-volume
 9:
 10: volume dist-rep-rhevh-client-1
 11: type protocol/client
 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
 13: option remote-subvolume /disk1
 14: option transport-type tcp
 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
 16: option password 1233f788-e862-447a-9353-3a50d84656ca
 17: end-volume
 18:
 19: volume dist-rep-rhevh-client-2
 20: type protocol/client
 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
 22: option remote-subvolume /disk1
 23: option transport-type tcp
 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
 25: option password 1233f788-e862-447a-9353-3a50d84656ca
 26: end-volume
 27:
 28: volume dist-rep-rhevh-client-3
 29: type protocol/client
 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
 31: option remote-subvolume /disk1
 32: option transport-type tcp
 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
 34: option password 1233f788-e862-447a-9353-3a50d84656ca
 35: end-volume
 36:
 37: volume dist-rep-rhevh-replicate-0
 38: type cluster/replicate
 39: option background-self-heal-count 0
 40: option metadata-self-heal on
 41: option data-self-heal on
 42: option entry-self-heal on
 43: option self-heal-daemon on
 44: option iam-self-heal-daemon yes
 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
 46: end-volume
 47:
 48: volume dist-rep-rhevh-replicate-1
 49: type cluster/replicate
 50: option background-self-heal-count 0
 51: option metadata-self-heal on
 52: option data-self-heal on
 53: option entry-self-heal on
 54: option self-heal-daemon on
 55: option iam-self-heal-daemon yes
 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
 57: end-volume
 58:
 59: volume glustershd
 60: type debug/io-stats
 61: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
 62: end-volume

+------------------------------------------------------------------------------+
[2012-09-25 07:48:22.893536] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
[2012-09-25 07:48:22.893651] E 
[client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-rhevh-client-1: failed to get the port number for remote subvolume >[2012-09-25 07:48:22.893763] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected >[2012-09-25 07:48:22.893795] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-rhevh-client-2: failed to get the port number for remote subvolume >[2012-09-25 07:48:22.893845] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected >[2012-09-25 07:48:22.893880] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0) >[2012-09-25 07:48:26.601921] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8 >[2012-09-25 07:48:26.601964] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported >[2012-09-25 07:48:26.867370] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 07:48:26.867717] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'. >[2012-09-25 07:48:26.867744] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 07:48:26.867848] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-0' came back up; going online. 
>[2012-09-25 07:48:26.867971] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1 >[2012-09-25 07:48:26.873786] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0) >[2012-09-25 07:48:26.878611] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0) >[2012-09-25 07:48:26.882384] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 07:48:26.882783] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'. >[2012-09-25 07:48:26.882806] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 07:48:26.882866] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-3' came back up; going online. >[2012-09-25 07:48:26.883109] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1 >[2012-09-25 07:48:29.886606] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 07:48:29.886928] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'. 
>[2012-09-25 07:48:29.886963] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 07:48:29.888886] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1 >[2012-09-25 07:48:29.891473] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 07:48:29.891828] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'. >[2012-09-25 07:48:29.891853] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 07:48:29.892127] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1 >[2012-09-25 07:48:39.748335] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 07:48:40.774326] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 07:48:40.775994] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing >[2012-09-25 07:58:29.963623] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-rhevh-replicate-0: no active sinks for performing self-heal on file <gfid:df883eec-46b8-4f46-8eaa-a9c8ceed0543> >[2012-09-25 08:03:22.615308] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 08:03:22.623120] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 08:03:22.623493] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing >[2012-09-25 08:03:22.623829] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing >[2012-09-25 08:03:28.075747] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: 
Volume file changed >[2012-09-25 08:03:29.110293] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 08:03:29.111845] D [io-stats.c:2484:reconfigure] 0-glustershd: reconfigure returning 0 >[2012-09-25 08:03:29.111874] D [glusterfsd-mgmt.c:1592:mgmt_getspec_cbk] 0-glusterfsd-mgmt: No need to re-load volfile, reconfigure done >[2012-09-25 08:03:29.111907] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing >[2012-09-25 08:08:30.041618] D [afr-self-heald.c:1135:afr_start_crawl] 0-dist-rep-rhevh-replicate-0: starting crawl 1 for dist-rep-rhevh-client-1 >[2012-09-25 08:08:30.043976] D [afr-self-heald.c:902:_crawl_directory] 0-dist-rep-rhevh-replicate-0: crawling INDEX 0d81c359-bc1d-46ca-96bc-3f7166775d94 >[2012-09-25 08:08:30.044339] D [afr-self-heald.c:365:_self_heal_entry] 0-dist-rep-rhevh-replicate-0: lookup <gfid:df883eec-46b8-4f46-8eaa-a9c8ceed0543> >[2012-09-25 08:08:30.044410] D [afr-common.c:132:afr_lookup_xattr_req_prepare] 0-dist-rep-rhevh-replicate-0: <gfid:df883eec-46b8-4f46-8eaa-a9c8ceed0543>: failed to get the gfid from dict >[2012-09-25 08:08:30.044988] D [afr-self-heald.c:282:_remove_stale_index] 0-dist-rep-rhevh-replicate-0: Removing stale index for df883eec-46b8-4f46-8eaa-a9c8ceed0543 on dist-rep-rhevh-client-1 >[2012-09-25 08:08:30.045745] D [afr-self-heald.c:1040:afr_dir_crawl] 0-dist-rep-rhevh-replicate-0: Crawl completed on dist-rep-rhevh-client-1 >[2012-09-25 08:09:12.050042] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (0 || 0) >[2012-09-25 08:09:12.050113] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (0 || 0) >[2012-09-25 08:18:30.131091] D [afr-self-heald.c:1135:afr_start_crawl] 0-dist-rep-rhevh-replicate-0: starting crawl 1 for dist-rep-rhevh-client-1 >[2012-09-25 08:18:30.131933] D 
[afr-self-heald.c:902:_crawl_directory] 0-dist-rep-rhevh-replicate-0: crawling INDEX 0d81c359-bc1d-46ca-96bc-3f7166775d94 >[2012-09-25 08:18:30.132299] D [afr-self-heald.c:365:_self_heal_entry] 0-dist-rep-rhevh-replicate-0: lookup <gfid:1a99a836-7df9-4599-bc86-8ad3c1867c4e> >[2012-09-25 08:18:30.132362] D [afr-common.c:132:afr_lookup_xattr_req_prepare] 0-dist-rep-rhevh-replicate-0: <gfid:1a99a836-7df9-4599-bc86-8ad3c1867c4e>: failed to get the gfid from dict >[2012-09-25 08:18:30.133020] D [afr-self-heald.c:282:_remove_stale_index] 0-dist-rep-rhevh-replicate-0: Removing stale index for 1a99a836-7df9-4599-bc86-8ad3c1867c4e on dist-rep-rhevh-client-1 >[2012-09-25 08:18:30.133401] D [afr-self-heald.c:365:_self_heal_entry] 0-dist-rep-rhevh-replicate-0: lookup <gfid:205109d1-11f1-4dce-8858-88f860b4c882> >[2012-09-25 08:18:30.133453] D [afr-common.c:132:afr_lookup_xattr_req_prepare] 0-dist-rep-rhevh-replicate-0: <gfid:205109d1-11f1-4dce-8858-88f860b4c882>: failed to get the gfid from dict >[2012-09-25 08:18:30.133989] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 2 2 ] >[2012-09-25 08:18:30.134028] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 2 2 ] >[2012-09-25 08:18:30.134044] D [afr-self-heal-common.c:829:afr_mark_sources] 0-dist-rep-rhevh-replicate-0: Number of sources: 2 >[2012-09-25 08:18:30.134060] D [afr-self-heal-data.c:861:afr_lookup_select_read_child_by_txn_type] 0-dist-rep-rhevh-replicate-0: returning read_child: 1 >[2012-09-25 08:18:30.134073] D [afr-common.c:1294:afr_lookup_select_read_child] 0-dist-rep-rhevh-replicate-0: Source selected as 1 for <gfid:205109d1-11f1-4dce-8858-88f860b4c882> >[2012-09-25 08:18:30.134092] D [afr-common.c:1097:afr_lookup_build_response_params] 0-dist-rep-rhevh-replicate-0: Building lookup response from 1 >[2012-09-25 08:18:30.134109] D [afr-common.c:1164:afr_lookup_set_self_heal_params_by_xattr] 
0-dist-rep-rhevh-replicate-0: data self-heal is pending for <gfid:205109d1-11f1-4dce-8858-88f860b4c882>. >[2012-09-25 08:18:30.134124] D [afr-common.c:1164:afr_lookup_set_self_heal_params_by_xattr] 0-dist-rep-rhevh-replicate-0: data self-heal is pending for <gfid:205109d1-11f1-4dce-8858-88f860b4c882>. >[2012-09-25 08:18:30.134151] D [afr-common.c:1340:afr_launch_self_heal] 0-dist-rep-rhevh-replicate-0: background data self-heal triggered. path: <gfid:205109d1-11f1-4dce-8858-88f860b4c882>, reason: lookup detected pending operations >[2012-09-25 08:18:30.134178] D [afr-self-heal-metadata.c:69:afr_sh_metadata_done] 0-dist-rep-rhevh-replicate-0: proceeding to data check on <gfid:205109d1-11f1-4dce-8858-88f860b4c882> >[2012-09-25 08:18:30.135065] D [afr-self-heal-data.c:1201:afr_sh_data_post_nonblocking_inodelk_cbk] 0-dist-rep-rhevh-replicate-0: Non Blocking data inodelks done for <gfid:205109d1-11f1-4dce-8858-88f860b4c882> by bc7082cf367f0000. Proceeding to self-heal >[2012-09-25 08:18:30.136010] D [afr-self-heal-data.c:738:afr_sh_data_fxattrop_fstat_done] 0-dist-rep-rhevh-replicate-0: Pending matrix for: bc7082cf367f0000 >[2012-09-25 08:18:30.136047] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 2 2 ] >[2012-09-25 08:18:30.136062] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 2 2 ] >[2012-09-25 08:18:30.136075] D [afr-self-heal-common.c:829:afr_mark_sources] 0-dist-rep-rhevh-replicate-0: Number of sources: 2 >[2012-09-25 08:18:30.136090] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-rhevh-replicate-0: no active sinks for performing self-heal on file <gfid:205109d1-11f1-4dce-8858-88f860b4c882> >[2012-09-25 08:18:30.136103] D [afr-self-heal-data.c:320:afr_sh_data_finish] 0-dist-rep-rhevh-replicate-0: finishing data selfheal of <gfid:205109d1-11f1-4dce-8858-88f860b4c882> >[2012-09-25 08:18:30.136116] D [afr-lk-common.c:408:transaction_lk_op] 
0-dist-rep-rhevh-replicate-0: lk op is for a self heal >[2012-09-25 08:18:30.136555] D [afr-self-heal-data.c:138:afr_sh_data_close] 0-dist-rep-rhevh-replicate-0: closing fd of <gfid:205109d1-11f1-4dce-8858-88f860b4c882> on dist-rep-rhevh-client-0 >[2012-09-25 08:18:30.136599] D [afr-self-heal-data.c:138:afr_sh_data_close] 0-dist-rep-rhevh-replicate-0: closing fd of <gfid:205109d1-11f1-4dce-8858-88f860b4c882> on dist-rep-rhevh-client-1 >[2012-09-25 08:18:30.136974] D [afr-self-heal-common.c:2164:afr_self_heal_completion_cbk] 0-dist-rep-rhevh-replicate-0: background data self-heal completed on <gfid:205109d1-11f1-4dce-8858-88f860b4c882> >[2012-09-25 08:18:30.137003] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 2 2 ] >[2012-09-25 08:18:30.137017] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 2 2 ] >[2012-09-25 08:18:30.137030] D [afr-self-heal-common.c:829:afr_mark_sources] 0-dist-rep-rhevh-replicate-0: Number of sources: 2 >[2012-09-25 08:18:30.137042] D [afr-self-heal-data.c:861:afr_lookup_select_read_child_by_txn_type] 0-dist-rep-rhevh-replicate-0: returning read_child: 1 >[2012-09-25 08:18:30.137054] D [afr-common.c:1294:afr_lookup_select_read_child] 0-dist-rep-rhevh-replicate-0: Source selected as 1 for <gfid:205109d1-11f1-4dce-8858-88f860b4c882> >[2012-09-25 08:18:30.137068] D [afr-common.c:1097:afr_lookup_build_response_params] 0-dist-rep-rhevh-replicate-0: Building lookup response from 1 >[2012-09-25 08:18:30.137106] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-dist-rep-rhevh-client-1: sending release on fd >[2012-09-25 08:18:30.137152] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-dist-rep-rhevh-client-0: sending release on fd >[2012-09-25 08:18:30.137700] D [afr-self-heald.c:1040:afr_dir_crawl] 0-dist-rep-rhevh-replicate-0: Crawl completed on dist-rep-rhevh-client-1 >[2012-09-25 08:19:12.136117] D 
[client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (0 || 0) >[2012-09-25 08:19:12.136185] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (0 || 0) >[2012-09-25 08:22:02.756411] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-0: EOF from peer 10.70.36.30:24011 >[2012-09-25 08:22:02.756478] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24011) >[2012-09-25 08:22:02.756500] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now >[2012-09-25 08:22:02.756553] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected >[2012-09-25 08:22:04.824022] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-2: EOF from peer 10.70.36.32:24011 >[2012-09-25 08:22:04.824103] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24011) >[2012-09-25 08:22:04.824134] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now >[2012-09-25 08:22:04.824190] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected >[2012-09-25 08:22:04.824227] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-1: EOF from peer 10.70.36.31:24011 >[2012-09-25 08:22:04.824274] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. 
Error (Transport endpoint is not connected), peer (10.70.36.31:24011) >[2012-09-25 08:22:04.824327] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now >[2012-09-25 08:22:04.824427] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected >[2012-09-25 08:22:04.824449] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up. >[2012-09-25 08:22:04.824511] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-3: EOF from peer 10.70.36.33:24011 >[2012-09-25 08:22:04.824535] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24011) >[2012-09-25 08:22:04.824558] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now >[2012-09-25 08:22:04.824607] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected >[2012-09-25 08:22:04.824631] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up. 
>[2012-09-25 08:22:05.885900] D [socket.c:184:__socket_rwv] 0-socket.glusterfsd: EOF from peer >[2012-09-25 08:22:05.885946] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now >[2012-09-25 08:22:05.886286] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down >[2012-09-25 08:22:05.886325] D [glusterfsd-mgmt.c:2157:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt: portmapper signout arguments not given >[2012-09-25 08:22:05.886678] D [rpcsvc.c:1303:rpcsvc_program_unregister] 0-rpc-service: Program unregistered: Gluster Brick operations, Num: 4867634, Ver: 2, Port: 0 >[2012-09-25 08:22:12.199936] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1 >[2012-09-25 08:22:12.210624] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-25 08:22:12.210656] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-25 08:22:12.210735] D [options.c:1037:xlator_option_init_uint32] 0-dist-rep-rhevh-replicate-1: option background-self-heal-count using set value 0 >[2012-09-25 08:22:12.210759] D [options.c:1034:xlator_option_init_str] 0-dist-rep-rhevh-replicate-1: option data-self-heal using set value on >[2012-09-25 08:22:12.210789] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-1: option metadata-self-heal using set value on >[2012-09-25 08:22:12.210805] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-1: option entry-self-heal using set value on >[2012-09-25 08:22:12.210819] D [options.c:1042:xlator_option_init_bool] 
0-dist-rep-rhevh-replicate-1: option self-heal-daemon using set value on >[2012-09-25 08:22:12.210831] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-1: option iam-self-heal-daemon using set value yes >[2012-09-25 08:22:12.210851] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-1: option eager-lock using set value enable >[2012-09-25 08:22:12.213265] D [options.c:1034:xlator_option_init_str] 0-dist-rep-rhevh-replicate-1: option node-uuid using set value b9d6cb21-051f-4791-9476-734856e77fbf >[2012-09-25 08:22:12.213295] D [options.c:1037:xlator_option_init_uint32] 0-dist-rep-rhevh-replicate-0: option background-self-heal-count using set value 0 >[2012-09-25 08:22:12.213310] D [options.c:1034:xlator_option_init_str] 0-dist-rep-rhevh-replicate-0: option data-self-heal using set value on >[2012-09-25 08:22:12.213326] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-0: option metadata-self-heal using set value on >[2012-09-25 08:22:12.213339] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-0: option entry-self-heal using set value on >[2012-09-25 08:22:12.213352] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-0: option self-heal-daemon using set value on >[2012-09-25 08:22:12.213365] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-0: option iam-self-heal-daemon using set value yes >[2012-09-25 08:22:12.213384] D [options.c:1042:xlator_option_init_bool] 0-dist-rep-rhevh-replicate-0: option eager-lock using set value enable >[2012-09-25 08:22:12.215777] D [options.c:1034:xlator_option_init_str] 0-dist-rep-rhevh-replicate-0: option node-uuid using set value b9d6cb21-051f-4791-9476-734856e77fbf >[2012-09-25 08:22:12.215804] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-3: lk-heal = off >[2012-09-25 08:22:12.215819] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-3: Client grace timeout value = 10 
>[2012-09-25 08:22:12.215840] D [options.c:1044:xlator_option_init_path] 0-dist-rep-rhevh-client-3: option remote-subvolume using set value /disk1 >[2012-09-25 08:22:12.216537] D [rpc-clnt.c:973:rpc_clnt_connection_init] 0-dist-rep-rhevh-client-3: defaulting frame-timeout to 30mins >[2012-09-25 08:22:12.216564] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.3.0rhsvirt1/rpc-transport/socket.so >[2012-09-25 08:22:12.216604] D [rpc-clnt.c:1379:rpcclnt_cbk_program_register] 0-dist-rep-rhevh-client-3: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 >[2012-09-25 08:22:12.216620] D [client.c:2290:client_init_rpc] 0-dist-rep-rhevh-client-3: client init successful >[2012-09-25 08:22:12.216632] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-2: lk-heal = off >[2012-09-25 08:22:12.216644] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-2: Client grace timeout value = 10 >[2012-09-25 08:22:12.216659] D [options.c:1044:xlator_option_init_path] 0-dist-rep-rhevh-client-2: option remote-subvolume using set value /disk1 >[2012-09-25 08:22:12.217354] D [rpc-clnt.c:973:rpc_clnt_connection_init] 0-dist-rep-rhevh-client-2: defaulting frame-timeout to 30mins >[2012-09-25 08:22:12.217377] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.3.0rhsvirt1/rpc-transport/socket.so >[2012-09-25 08:22:12.217403] D [rpc-clnt.c:1379:rpcclnt_cbk_program_register] 0-dist-rep-rhevh-client-2: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 >[2012-09-25 08:22:12.217417] D [client.c:2290:client_init_rpc] 0-dist-rep-rhevh-client-2: client init successful >[2012-09-25 08:22:12.217430] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-1: lk-heal = off >[2012-09-25 08:22:12.217450] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-1: Client grace timeout value = 10 >[2012-09-25 08:22:12.217466] D 
[options.c:1044:xlator_option_init_path] 0-dist-rep-rhevh-client-1: option remote-subvolume using set value /disk1 >[2012-09-25 08:22:12.218152] D [rpc-clnt.c:973:rpc_clnt_connection_init] 0-dist-rep-rhevh-client-1: defaulting frame-timeout to 30mins >[2012-09-25 08:22:12.218173] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.3.0rhsvirt1/rpc-transport/socket.so >[2012-09-25 08:22:12.218198] D [rpc-clnt.c:1379:rpcclnt_cbk_program_register] 0-dist-rep-rhevh-client-1: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 >[2012-09-25 08:22:12.218212] D [client.c:2290:client_init_rpc] 0-dist-rep-rhevh-client-1: client init successful >[2012-09-25 08:22:12.218225] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-0: lk-heal = off >[2012-09-25 08:22:12.218244] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-0: Client grace timeout value = 10 >[2012-09-25 08:22:12.218259] D [options.c:1044:xlator_option_init_path] 0-dist-rep-rhevh-client-0: option remote-subvolume using set value /disk1 >[2012-09-25 08:22:12.218938] D [rpc-clnt.c:973:rpc_clnt_connection_init] 0-dist-rep-rhevh-client-0: defaulting frame-timeout to 30mins >[2012-09-25 08:22:12.218959] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.3.0rhsvirt1/rpc-transport/socket.so >[2012-09-25 08:22:12.218983] D [rpc-clnt.c:1379:rpcclnt_cbk_program_register] 0-dist-rep-rhevh-client-0: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 >[2012-09-25 08:22:12.219002] D [client.c:2290:client_init_rpc] 0-dist-rep-rhevh-client-0: client init successful >[2012-09-25 08:22:12.219039] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport >[2012-09-25 08:22:12.219055] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-0: address-family not specified, guessing it to be inet/inet6 
>[2012-09-25 08:22:12.223499] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.30 (port-24007) for hostname: rhs-client6.lab.eng.blr.redhat.com and port: 24007 >[2012-09-25 08:22:12.223631] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport >[2012-09-25 08:22:12.223662] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-1: address-family not specified, guessing it to be inet/inet6 >[2012-09-25 08:22:12.227390] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.31 (port-24007) for hostname: rhs-client7.lab.eng.blr.redhat.com and port: 24007 >[2012-09-25 08:22:12.227500] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport >[2012-09-25 08:22:12.227527] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-2: address-family not specified, guessing it to be inet/inet6 >[2012-09-25 08:22:12.231401] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.32 (port-24007) for hostname: rhs-client8.lab.eng.blr.redhat.com and port: 24007 >[2012-09-25 08:22:12.231490] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport >[2012-09-25 08:22:12.231516] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-3: address-family not specified, guessing it to be inet/inet6 >[2012-09-25 08:22:12.235341] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.33 (port-24007) for hostname: rhs-client9.lab.eng.blr.redhat.com and port: 24007 >Given volfile: >+------------------------------------------------------------------------------+ > 1: volume dist-rep-rhevh-client-0 > 2: type protocol/client > 3: option remote-host rhs-client6.lab.eng.blr.redhat.com > 4: option remote-subvolume /disk1 > 5: option transport-type tcp > 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 7: option 
password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option eager-lock enable
> 45: option iam-self-heal-daemon yes
> 46: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 47: end-volume
> 48:
> 49: volume dist-rep-rhevh-replicate-1
> 50: type cluster/replicate
> 51: option background-self-heal-count 0
> 52: option metadata-self-heal on
> 53: option data-self-heal on
> 54: option entry-self-heal on
> 55: option self-heal-daemon on
> 56: option eager-lock enable
> 57: option iam-self-heal-daemon yes
> 58: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 59: end-volume
> 60:
> 61: volume glustershd
> 62: type debug/io-stats
> 63: option log-level DEBUG
> 64: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 65: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-25 08:22:12.235564] D [glusterfsd-mgmt.c:2095:glusterfs_mgmt_pmap_signin] 0-fsd-mgmt: portmapper signin arguments not given
>[2012-09-25 08:22:12.235625] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-0: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:12.235691] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.235726] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-1: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:12.235776] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.235819] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-2: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:12.235888] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-2: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.235916] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-3: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:12.235950] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-3: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.236034] D [client-handshake.c:1670:server_has_portmap] 0-dist-rep-rhevh-client-0: detected portmapper on server
>[2012-09-25 08:22:12.236114] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.236153] D [client-handshake.c:1670:server_has_portmap] 0-dist-rep-rhevh-client-1: detected portmapper on server
>[2012-09-25 08:22:12.236201] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.236233] D [client-handshake.c:1670:server_has_portmap] 0-dist-rep-rhevh-client-2: detected portmapper on server
>[2012-09-25 08:22:12.236280] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-2: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.236314] D [client-handshake.c:1670:server_has_portmap] 0-dist-rep-rhevh-client-3: detected portmapper on server
>[2012-09-25 08:22:12.236351] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-3: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:12.236386] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-25 08:22:12.236434] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-25 08:22:12.236482] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-1: EOF from peer 10.70.36.31:24007
>[2012-09-25 08:22:12.236518] D [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24007)
>[2012-09-25 08:22:12.236537] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
>[2012-09-25 08:22:12.236562] D [client.c:2108:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected (skipped notify)
>[2012-09-25 08:22:12.236578] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-0: EOF from peer 10.70.36.30:24007
>[2012-09-25 08:22:12.236590] D [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24007)
>[2012-09-25 08:22:12.236602] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
>[2012-09-25 08:22:12.236641] D [client.c:2108:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected (skipped notify)
>[2012-09-25 08:22:12.236672] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-25 08:22:12.236704] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-2: EOF from peer 10.70.36.32:24007
>[2012-09-25 08:22:12.236720] D [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24007)
>[2012-09-25 08:22:12.236732] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
>[2012-09-25 08:22:12.236749] D [client.c:2108:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected (skipped notify)
>[2012-09-25 08:22:12.236769] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-25 08:22:12.236794] D [socket.c:184:__socket_rwv] 0-dist-rep-rhevh-client-3: EOF from peer 10.70.36.33:24007
>[2012-09-25 08:22:12.236814] D [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24007)
>[2012-09-25 08:22:12.236833] D [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
>[2012-09-25 08:22:12.236877] D [client.c:2108:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected (skipped notify)
>[2012-09-25 08:22:15.944498] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-25 08:22:15.944557] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-25 08:22:16.209000] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-1: address-family not specified, guessing it to be inet/inet6
>[2012-09-25 08:22:16.213242] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.31 (port-24007) for hostname: rhs-client7.lab.eng.blr.redhat.com and port: 24007
>[2012-09-25 08:22:16.213385] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-0: address-family not specified, guessing it to be inet/inet6
>[2012-09-25 08:22:16.213421] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-1: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:16.213531] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.213688] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-25 08:22:16.213794] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.214104] D [client-handshake.c:1407:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: clnt-lk-version = 1, server-lk-version = 0
>[2012-09-25 08:22:16.214140] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-25 08:22:16.214154] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-25 08:22:16.214166] D [client-handshake.c:1295:client_post_handshake] 0-dist-rep-rhevh-client-1: No fds to open - notifying all parents child up
>[2012-09-25 08:22:16.214180] D [client-handshake.c:489:client_set_lk_version] 0-dist-rep-rhevh-client-1: Sending SET_LK_VERSION
>[2012-09-25 08:22:16.214262] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-25 08:22:16.214402] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-25 08:22:16.214816] D [afr-self-heald.c:986:afr_find_child_position] 0-dist-rep-rhevh-replicate-0: child dist-rep-rhevh-client-1 is local
>[2012-09-25 08:22:16.214868] D [afr-self-heald.c:1135:afr_start_crawl] 0-dist-rep-rhevh-replicate-0: starting crawl 1 for dist-rep-rhevh-client-1
>[2012-09-25 08:22:16.215656] D [afr-self-heald.c:902:_crawl_directory] 0-dist-rep-rhevh-replicate-0: crawling INDEX 688ed194-f416-4be0-88fc-3a378b36f3c5
>[2012-09-25 08:22:16.216206] D [afr-self-heald.c:365:_self_heal_entry] 0-dist-rep-rhevh-replicate-0: lookup <gfid:205109d1-11f1-4dce-8858-88f860b4c882>
>[2012-09-25 08:22:16.216275] D [afr-common.c:132:afr_lookup_xattr_req_prepare] 0-dist-rep-rhevh-replicate-0: <gfid:205109d1-11f1-4dce-8858-88f860b4c882>: failed to get the gfid from dict
>[2012-09-25 08:22:16.216728] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 0 0 ]
>[2012-09-25 08:22:16.216761] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 3 2 ]
>[2012-09-25 08:22:16.216775] D [afr-self-heal-common.c:829:afr_mark_sources] 0-dist-rep-rhevh-replicate-0: Number of sources: 1
>[2012-09-25 08:22:16.216789] D [afr-self-heal-data.c:861:afr_lookup_select_read_child_by_txn_type] 0-dist-rep-rhevh-replicate-0: returning read_child: 1
>[2012-09-25 08:22:16.216801] D [afr-common.c:1294:afr_lookup_select_read_child] 0-dist-rep-rhevh-replicate-0: Source selected as 1 for <gfid:205109d1-11f1-4dce-8858-88f860b4c882>
>[2012-09-25 08:22:16.216823] D [afr-common.c:1097:afr_lookup_build_response_params] 0-dist-rep-rhevh-replicate-0: Building lookup response from 1
>[2012-09-25 08:22:16.216839] D [afr-common.c:1164:afr_lookup_set_self_heal_params_by_xattr] 0-dist-rep-rhevh-replicate-0: data self-heal is pending for <gfid:205109d1-11f1-4dce-8858-88f860b4c882>.
>[2012-09-25 08:22:16.216883] D [afr-common.c:1340:afr_launch_self_heal] 0-dist-rep-rhevh-replicate-0: background data self-heal triggered. path: <gfid:205109d1-11f1-4dce-8858-88f860b4c882>, reason: lookup detected pending operations
>[2012-09-25 08:22:16.216917] D [afr-self-heal-metadata.c:69:afr_sh_metadata_done] 0-dist-rep-rhevh-replicate-0: proceeding to data check on <gfid:205109d1-11f1-4dce-8858-88f860b4c882>
>[2012-09-25 08:22:16.216961] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-dist-rep-rhevh-replicate-0: open of <gfid:205109d1-11f1-4dce-8858-88f860b4c882> failed on child dist-rep-rhevh-client-0 (Transport endpoint is not connected)
>[2012-09-25 08:22:16.217293] D [afr-self-heal-data.c:340:afr_sh_data_fail] 0-dist-rep-rhevh-replicate-0: finishing failed data selfheal of <gfid:205109d1-11f1-4dce-8858-88f860b4c882>
>[2012-09-25 08:22:16.217335] D [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-dist-rep-rhevh-replicate-0: background data self-heal failed on <gfid:205109d1-11f1-4dce-8858-88f860b4c882>
>[2012-09-25 08:22:16.217356] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 0 0 ]
>[2012-09-25 08:22:16.217369] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-dist-rep-rhevh-replicate-0: pending_matrix: [ 3 2 ]
>[2012-09-25 08:22:16.217381] D [afr-self-heal-common.c:829:afr_mark_sources] 0-dist-rep-rhevh-replicate-0: Number of sources: 1
>[2012-09-25 08:22:16.217393] D [afr-self-heal-data.c:861:afr_lookup_select_read_child_by_txn_type] 0-dist-rep-rhevh-replicate-0: returning read_child: 1
>[2012-09-25 08:22:16.217404] D [afr-common.c:1294:afr_lookup_select_read_child] 0-dist-rep-rhevh-replicate-0: Source selected as 1 for <gfid:205109d1-11f1-4dce-8858-88f860b4c882>
>[2012-09-25 08:22:16.217418] D [afr-common.c:1097:afr_lookup_build_response_params] 0-dist-rep-rhevh-replicate-0: Building lookup response from 1
>[2012-09-25 08:22:16.217485] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.30 (port-24007) for hostname: rhs-client6.lab.eng.blr.redhat.com and port: 24007
>[2012-09-25 08:22:16.217536] D [client3_1-fops.c:2790:client_fdctx_destroy] 0-dist-rep-rhevh-client-1: sending release on fd
>[2012-09-25 08:22:16.217576] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-2: address-family not specified, guessing it to be inet/inet6
>[2012-09-25 08:22:16.217706] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-0: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:16.217781] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.217946] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-25 08:22:16.218025] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.218104] D [afr-self-heald.c:1040:afr_dir_crawl] 0-dist-rep-rhevh-replicate-0: Crawl completed on dist-rep-rhevh-client-1
>[2012-09-25 08:22:16.218320] D [client-handshake.c:1407:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: clnt-lk-version = 1, server-lk-version = 0
>[2012-09-25 08:22:16.218362] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-25 08:22:16.218376] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-25 08:22:16.218387] D [client-handshake.c:1295:client_post_handshake] 0-dist-rep-rhevh-client-0: No fds to open - notifying all parents child up
>[2012-09-25 08:22:16.218402] D [client-handshake.c:489:client_set_lk_version] 0-dist-rep-rhevh-client-0: Sending SET_LK_VERSION
>[2012-09-25 08:22:16.218586] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-25 08:22:16.218843] D [afr-self-heald.c:986:afr_find_child_position] 0-dist-rep-rhevh-replicate-0: child dist-rep-rhevh-client-0 is remote
>[2012-09-25 08:22:16.223368] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.32 (port-24007) for hostname: rhs-client8.lab.eng.blr.redhat.com and port: 24007
>[2012-09-25 08:22:16.223479] D [name.c:149:client_fill_address_family] 0-dist-rep-rhevh-client-3: address-family not specified, guessing it to be inet/inet6
>[2012-09-25 08:22:16.223607] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-2: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:16.223690] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-2: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.223838] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-25 08:22:16.223902] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-2: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.224204] D [client-handshake.c:1407:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: clnt-lk-version = 1, server-lk-version = 0
>[2012-09-25 08:22:16.224236] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-25 08:22:16.224250] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-25 08:22:16.224262] D [client-handshake.c:1295:client_post_handshake] 0-dist-rep-rhevh-client-2: No fds to open - notifying all parents child up
>[2012-09-25 08:22:16.224276] D [client-handshake.c:489:client_set_lk_version] 0-dist-rep-rhevh-client-2: Sending SET_LK_VERSION
>[2012-09-25 08:22:16.224332] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-25 08:22:16.225965] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-25 08:22:16.226329] D [afr-self-heald.c:986:afr_find_child_position] 0-dist-rep-rhevh-replicate-1: child dist-rep-rhevh-client-2 is remote
>[2012-09-25 08:22:16.227636] D [common-utils.c:151:gf_resolve_ip6] 0-resolver: returning ip-10.70.36.33 (port-24007) for hostname: rhs-client9.lab.eng.blr.redhat.com and port: 24007
>[2012-09-25 08:22:16.227863] D [client.c:2043:client_rpc_notify] 0-dist-rep-rhevh-client-3: got RPC_CLNT_CONNECT
>[2012-09-25 08:22:16.227917] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-3: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.228134] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-25 08:22:16.228189] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-3: returning as transport is already disconnected OR there are no frames (1 || 1)
>[2012-09-25 08:22:16.228512] D [client-handshake.c:1407:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: clnt-lk-version = 1, server-lk-version = 0
>[2012-09-25 08:22:16.228533] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-25 08:22:16.228545] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-25 08:22:16.228556] D [client-handshake.c:1295:client_post_handshake] 0-dist-rep-rhevh-client-3: No fds to open - notifying all parents child up
>[2012-09-25 08:22:16.228570] D [client-handshake.c:489:client_set_lk_version] 0-dist-rep-rhevh-client-3: Sending SET_LK_VERSION
>[2012-09-25 08:22:16.229148] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-25 08:22:16.229486] D [afr-self-heald.c:986:afr_find_child_position] 0-dist-rep-rhevh-replicate-1: child dist-rep-rhevh-client-3 is remote
>[2012-09-25 08:22:58.232144] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (0 || 0)
>[2012-09-25 08:22:58.232194] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (0 || 0)
>[2012-09-25 08:22:58.232222] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-2: returning as transport is already disconnected OR there are no frames (0 || 0)
>[2012-09-25 08:22:58.232236] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-3: returning as transport is already disconnected OR there are no frames (0 || 0)
>[2012-09-25 08:32:16.290681] D [afr-self-heald.c:1135:afr_start_crawl] 0-dist-rep-rhevh-replicate-0: starting crawl 1 for dist-rep-rhevh-client-1
>[2012-09-25 08:32:16.297811] D [afr-self-heald.c:902:_crawl_directory] 0-dist-rep-rhevh-replicate-0: crawling INDEX 688ed194-f416-4be0-88fc-3a378b36f3c5
>[2012-09-25 08:32:16.298382] D [afr-self-heald.c:365:_self_heal_entry] 0-dist-rep-rhevh-replicate-0: lookup <gfid:205109d1-11f1-4dce-8858-88f860b4c882>
>[2012-09-25 08:32:16.298470] D [afr-common.c:132:afr_lookup_xattr_req_prepare] 0-dist-rep-rhevh-replicate-0: <gfid:205109d1-11f1-4dce-8858-88f860b4c882>: failed to get the gfid from dict
>[2012-09-25 08:32:16.305251] D [afr-self-heald.c:282:_remove_stale_index] 0-dist-rep-rhevh-replicate-0: Removing stale index for 205109d1-11f1-4dce-8858-88f860b4c882 on dist-rep-rhevh-client-1
>[2012-09-25 08:32:16.306290] D [afr-self-heald.c:1040:afr_dir_crawl] 0-dist-rep-rhevh-replicate-0: Crawl completed on dist-rep-rhevh-client-1
>[2012-09-25 08:32:59.295322] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-1: returning as transport is already disconnected OR there are no frames (0 || 0)
>[2012-09-25 08:32:59.295386] D [client-handshake.c:184:client_start_ping] 0-dist-rep-rhevh-client-0: returning as transport is already disconnected OR there are no frames (0 || 0)
>[2012-09-25 08:33:05.478045] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-25 08:33:05.485078] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-25 08:33:05.486577] D [glusterfsd-mgmt.c:1444:is_graph_topology_equal] 0-glusterfsd-mgmt: graphs are equal
>[2012-09-25 08:33:05.486603] D [glusterfsd-mgmt.c:1498:glusterfs_volfile_reconfigure] 0-glusterfsd-mgmt: Only options have changed in the new graph
>[2012-09-25 08:33:05.486636] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-0: lk-heal = off
>[2012-09-25 08:33:05.486651] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-0: Client grace timeout value = 10
>[2012-09-25 08:33:05.486663] D [options.c:925:xlator_reconfigure_rec] 0-dist-rep-rhevh-client-0: reconfigured
>[2012-09-25 08:33:05.486680] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-1: lk-heal = off
>[2012-09-25 08:33:05.486692] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-1: Client grace timeout value = 10
>[2012-09-25 08:33:05.486704] D [options.c:925:xlator_reconfigure_rec] 0-dist-rep-rhevh-client-1: reconfigured
>[2012-09-25 08:33:05.486718] D [options.c:1051:xlator_option_reconf_uint32] 0-dist-rep-rhevh-replicate-0: option background-self-heal-count using set value 0
>[2012-09-25 08:33:05.486734] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-0: option metadata-self-heal using set value on
>[2012-09-25 08:33:05.486749] D [options.c:1048:xlator_option_reconf_str] 0-dist-rep-rhevh-replicate-0: option data-self-heal using set value on
>[2012-09-25 08:33:05.486763] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-0: option entry-self-heal using set value on
>[2012-09-25 08:33:05.486786] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-0: option self-heal-daemon using set value on
>[2012-09-25 08:33:05.486801] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-0: option eager-lock using set value enable
>[2012-09-25 08:33:05.486827] D [options.c:925:xlator_reconfigure_rec] 0-dist-rep-rhevh-replicate-0: reconfigured
>[2012-09-25 08:33:05.486847] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-2: lk-heal = off
>[2012-09-25 08:33:05.486867] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-2: Client grace timeout value = 10
>[2012-09-25 08:33:05.486880] D [options.c:925:xlator_reconfigure_rec] 0-dist-rep-rhevh-client-2: reconfigured
>[2012-09-25 08:33:05.486895] D [client.c:2315:client_init_grace_timer] 0-dist-rep-rhevh-client-3: lk-heal = off
>[2012-09-25 08:33:05.486906] D [client.c:2326:client_init_grace_timer] 0-dist-rep-rhevh-client-3: Client grace timeout value = 10
>[2012-09-25 08:33:05.486917] D [options.c:925:xlator_reconfigure_rec] 0-dist-rep-rhevh-client-3: reconfigured
>[2012-09-25 08:33:05.486928] D [options.c:1051:xlator_option_reconf_uint32] 0-dist-rep-rhevh-replicate-1: option background-self-heal-count using set value 0
>[2012-09-25 08:33:05.486942] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-1: option metadata-self-heal using set value on
>[2012-09-25 08:33:05.486954] D [options.c:1048:xlator_option_reconf_str] 0-dist-rep-rhevh-replicate-1: option data-self-heal using set value on
>[2012-09-25 08:33:05.486967] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-1: option entry-self-heal using set value on
>[2012-09-25 08:33:05.486986] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-1: option self-heal-daemon using set value on
>[2012-09-25 08:33:05.486999] D [options.c:1056:xlator_option_reconf_bool] 0-dist-rep-rhevh-replicate-1: option eager-lock using set value enable
>[2012-09-25 08:33:05.487015] D [options.c:925:xlator_reconfigure_rec] 0-dist-rep-rhevh-replicate-1: reconfigured
>[2012-09-25 08:33:05.487032] D [options.c:1048:xlator_option_reconf_str] 0-glustershd: option log-level using set value ERROR
>[2012-09-25 09:31:35.003106] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24011)
>[2012-09-25 09:31:42.997090] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24011)
>[2012-09-25 09:31:42.997175] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
>[2012-09-25 09:31:47.519894] W [socket.c:1512:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (::1:24007)
>[2012-09-25 09:31:47.743736] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24011)
>[2012-09-25 09:31:50.615497] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-25 09:31:51.621459] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-25 09:31:51.636237] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-25 09:31:51.636276] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option eager-lock enable
> 45: option iam-self-heal-daemon yes
> 46: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 47: end-volume
> 48:
> 49: volume dist-rep-rhevh-replicate-1
> 50: type cluster/replicate
> 51: option background-self-heal-count 0
> 52: option metadata-self-heal on
> 53: option data-self-heal on
> 54: option entry-self-heal on
> 55: option self-heal-daemon on
> 56: option eager-lock enable
> 57: option iam-self-heal-daemon yes
> 58: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 59: end-volume
> 60:
> 61: volume glustershd
> 62: type debug/io-stats
> 63: option log-level WARNING
> 64: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 65: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-25 09:31:55.471074] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-rhevh-client-0: failed to get the port number for remote subvolume
>[2012-09-25 09:31:55.581878] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-25 09:31:55.581923] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-25 09:31:55.639746] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-25 09:44:36.430828] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-25 09:45:59.171834] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-25 09:45:59.257671] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-25 09:45:59.257707] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option eager-lock enable
> 45: option iam-self-heal-daemon yes
> 46: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 47: end-volume
> 48:
> 49: volume dist-rep-rhevh-replicate-1
> 50: type cluster/replicate
> 51: option background-self-heal-count 0
> 52: option metadata-self-heal on
> 53: option data-self-heal on
> 54: option entry-self-heal on
> 55: option self-heal-daemon on
> 56: option eager-lock enable
> 57: option iam-self-heal-daemon yes
> 58: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 59: end-volume
> 60:
> 61: volume glustershd
> 62: type debug/io-stats
> 63: option log-level WARNING
> 64: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 65: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-25 09:45:59.291824] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-rhevh-client-1: failed to get the port number for remote subvolume
>[2012-09-25 09:46:03.081310] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-25 09:46:03.081362] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-25 09:46:03.082756] E [socket.c:1715:socket_connect_finish] 0-dist-rep-rhevh-client-0: connection to failed (Connection refused)
>[2012-09-25 09:46:03.082805] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
>[2012-09-25 09:46:06.233867] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-25 09:46:07.423738] E [client-handshake.c:1717:client_query_portmap_cbk] 0-dist-rep-rhevh-client-0: failed to get the port number for remote subvolume
>[2012-09-25 16:55:07.529148] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-3: readv failed (Connection timed out)
>[2012-09-25 16:55:07.529199] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Connection timed out), peer (10.70.36.33:24011)
>[2012-09-25 16:55:38.620093] E [socket.c:1715:socket_connect_finish] 0-dist-rep-rhevh-client-3: connection to 10.70.36.33:24011 failed (Connection timed out)
>[2012-09-25 17:01:19.264249] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24011)
>[2012-09-25 17:01:21.692435] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24011)
>[2012-09-25 17:01:21.692632] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24011)
>[2012-09-25 17:01:21.692692] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
>[2012-09-25 17:01:21.692768] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24011)
>[2012-09-25 17:01:21.692813] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
>[2012-09-25 17:01:22.869856] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-25 17:01:30.028966] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-25 17:01:30.039026] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-25 17:01:30.039056] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23:
option transport-type tcp > 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 25: option password 1233f788-e862-447a-9353-3a50d84656ca > 26: end-volume > 27: > 28: volume dist-rep-rhevh-client-3 > 29: type protocol/client > 30: option remote-host rhs-client9.lab.eng.blr.redhat.com > 31: option remote-subvolume /disk1 > 32: option transport-type tcp > 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 34: option password 1233f788-e862-447a-9353-3a50d84656ca > 35: end-volume > 36: > 37: volume dist-rep-rhevh-replicate-0 > 38: type cluster/replicate > 39: option background-self-heal-count 0 > 40: option metadata-self-heal on > 41: option data-self-heal on > 42: option entry-self-heal on > 43: option self-heal-daemon on > 44: option eager-lock enable > 45: option iam-self-heal-daemon yes > 46: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1 > 47: end-volume > 48: > 49: volume dist-rep-rhevh-replicate-1 > 50: type cluster/replicate > 51: option background-self-heal-count 0 > 52: option metadata-self-heal on > 53: option data-self-heal on > 54: option entry-self-heal on > 55: option self-heal-daemon on > 56: option eager-lock enable > 57: option iam-self-heal-daemon yes > 58: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3 > 59: end-volume > 60: > 61: volume glustershd > 62: type debug/io-stats > 63: option log-level WARNING > 64: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1 > 65: end-volume > >+------------------------------------------------------------------------------+ >[2012-09-25 17:01:33.588596] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8 >[2012-09-25 17:01:33.588647] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported >[2012-09-25 19:13:45.865644] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] 
(-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down >[2012-09-25 19:13:46.871790] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1 >[2012-09-25 19:13:46.907843] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-25 19:13:46.907879] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-25 19:13:46.907897] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-25 19:13:46.907911] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-25 19:13:46.907925] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-25 19:13:46.920106] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized >[2012-09-25 19:13:46.920181] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized >[2012-09-25 19:13:46.920210] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport >[2012-09-25 19:13:46.934070] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport >[2012-09-25 19:13:46.937862] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect 
on transport >[2012-09-25 19:13:46.941967] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport >[2012-09-25 19:13:46.945966] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport >[2012-09-25 19:13:46.949825] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport >Given volfile: >+------------------------------------------------------------------------------+ > 1: volume replicate-rhevh-client-0 > 2: type protocol/client > 3: option remote-host rhs-client6.lab.eng.blr.redhat.com > 4: option remote-subvolume /disk2 > 5: option transport-type tcp > 6: option username 49b024b6-86a6-428c-b173-c88ac0d75afd > 7: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137 > 8: end-volume > 9: > 10: volume replicate-rhevh-client-1 > 11: type protocol/client > 12: option remote-host rhs-client7.lab.eng.blr.redhat.com > 13: option remote-subvolume /disk2 > 14: option transport-type tcp > 15: option username 49b024b6-86a6-428c-b173-c88ac0d75afd > 16: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137 > 17: end-volume > 18: > 19: volume replicate-rhevh-replicate-0 > 20: type cluster/replicate > 21: option background-self-heal-count 0 > 22: option metadata-self-heal on > 23: option data-self-heal on > 24: option entry-self-heal on > 25: option self-heal-daemon on > 26: option iam-self-heal-daemon yes > 27: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1 > 28: end-volume > 29: > 30: volume dist-rep-rhevh-client-0 > 31: type protocol/client > 32: option remote-host rhs-client6.lab.eng.blr.redhat.com > 33: option remote-subvolume /disk1 > 34: option transport-type tcp > 35: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 36: option password 1233f788-e862-447a-9353-3a50d84656ca > 37: end-volume > 38: > 39: volume dist-rep-rhevh-client-1 > 40: type protocol/client > 41: option remote-host 
rhs-client7.lab.eng.blr.redhat.com > 42: option remote-subvolume /disk1 > 43: option transport-type tcp > 44: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 45: option password 1233f788-e862-447a-9353-3a50d84656ca > 46: end-volume > 47: > 48: volume dist-rep-rhevh-client-2 > 49: type protocol/client > 50: option remote-host rhs-client8.lab.eng.blr.redhat.com > 51: option remote-subvolume /disk1 > 52: option transport-type tcp > 53: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 54: option password 1233f788-e862-447a-9353-3a50d84656ca > 55: end-volume > 56: > 57: volume dist-rep-rhevh-client-3 > 58: type protocol/client > 59: option remote-host rhs-client9.lab.eng.blr.redhat.com > 60: option remote-subvolume /disk1 > 61: option transport-type tcp > 62: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 63: option password 1233f788-e862-447a-9353-3a50d84656ca > 64: end-volume > 65: > 66: volume dist-rep-rhevh-replicate-0 > 67: type cluster/replicate > 68: option background-self-heal-count 0 > 69: option metadata-self-heal on > 70: option data-self-heal on > 71: option entry-self-heal on > 72: option self-heal-daemon on > 73: option iam-self-heal-daemon yes > 74: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1 > 75: end-volume > 76: > 77: volume dist-rep-rhevh-replicate-1 > 78: type cluster/replicate > 79: option background-self-heal-count 0 > 80: option metadata-self-heal on > 81: option data-self-heal on > 82: option entry-self-heal on > 83: option self-heal-daemon on > 84: option iam-self-heal-daemon yes > 85: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3 > 86: end-volume > 87: > 88: volume glustershd > 89: type debug/io-stats > 90: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1 replicate-rhevh-replicate-0 > 91: end-volume > >+------------------------------------------------------------------------------+ >[2012-09-25 19:13:46.954426] E [client-handshake.c:1717:client_query_portmap_cbk] 
0-replicate-rhevh-client-1: failed to get the port number for remote subvolume >[2012-09-25 19:13:46.954500] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0) >[2012-09-25 19:13:46.954598] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-1: disconnected >[2012-09-25 19:13:46.954639] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0) >[2012-09-25 19:13:46.954676] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0) >[2012-09-25 19:13:46.954710] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0) >[2012-09-25 19:13:46.954767] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0) >[2012-09-25 19:13:50.633910] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8 >[2012-09-25 19:13:50.633965] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported >[2012-09-25 19:13:50.887278] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0) >[2012-09-25 19:13:50.891187] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 19:13:50.891483] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'. >[2012-09-25 19:13:50.891518] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 19:13:50.891587] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online. 
>[2012-09-25 19:13:50.891691] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1 >[2012-09-25 19:13:50.895214] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 19:13:50.895558] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'. >[2012-09-25 19:13:50.895594] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 19:13:50.895833] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1 >[2012-09-25 19:13:50.899348] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 19:13:50.899645] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'. >[2012-09-25 19:13:50.899670] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 19:13:50.899719] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-3' came back up; going online. >[2012-09-25 19:13:50.901283] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1 >[2012-09-25 19:13:50.904079] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 19:13:50.904429] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'. 
>[2012-09-25 19:13:50.904454] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 19:13:50.905014] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1 >[2012-09-25 19:13:50.908442] I [client-handshake.c:1636:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 19:13:50.908795] I [client-handshake.c:1433:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'. >[2012-09-25 19:13:50.908819] I [client-handshake.c:1445:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 19:13:50.908865] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-0' came back up; going online. >[2012-09-25 19:13:50.909405] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1 >[2012-09-25 19:13:53.913395] I [client-handshake.c:1636:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-25 19:13:53.913755] I [client-handshake.c:1433:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'. 
>[2012-09-25 19:13:53.913782] I [client-handshake.c:1445:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-25 19:13:53.914358] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1 >[2012-09-25 19:25:39.841403] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 19:25:39.850558] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing >[2012-09-25 19:25:46.715351] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 19:25:47.752040] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed >[2012-09-25 19:25:47.754417] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing >[2012-09-26 10:58:13.301739] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24011) >[2012-09-26 10:58:13.301817] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected >[2012-09-26 10:58:23.809047] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-26 10:58:23.809440] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'. >[2012-09-26 10:58:23.809466] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-26 10:58:23.810201] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1 >[2012-09-26 11:21:21.300044] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. 
Error (Transport endpoint is not connected), peer (10.70.36.32:24011) >[2012-09-26 11:21:21.300145] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected >[2012-09-26 11:21:31.970755] I [client-handshake.c:1636:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-26 11:21:31.971205] I [client-handshake.c:1433:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'. >[2012-09-26 11:21:31.971239] I [client-handshake.c:1445:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-26 11:21:31.971949] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1 >[2012-09-26 11:22:56.551289] W [socket.c:1512:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (::1:24007) >[2012-09-26 11:22:59.463781] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24012) >[2012-09-26 11:22:59.463885] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-1: disconnected >[2012-09-26 11:22:59.464763] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. 
Error (Transport endpoint is not connected), peer (10.70.36.31:24011) >[2012-09-26 11:22:59.464825] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected >[2012-09-26 11:23:02.846486] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down >[2012-09-26 11:23:03.851111] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1 >[2012-09-26 11:23:03.865550] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-26 11:23:03.865592] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-26 11:23:03.865609] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-26 11:23:03.865623] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-26 11:23:03.865646] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf' >[2012-09-26 11:23:03.877991] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized >[2012-09-26 11:23:03.878085] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized >[2012-09-26 11:23:03.878124] I [client.c:2142:notify] 0-replicate-rhevh-client-0: 
parent translators are ready, attempting connect on transport >[2012-09-26 11:23:03.882432] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport >[2012-09-26 11:23:03.886529] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport >[2012-09-26 11:23:03.890575] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport >[2012-09-26 11:23:03.894662] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport >[2012-09-26 11:23:03.898731] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport >Given volfile: >+------------------------------------------------------------------------------+ > 1: volume dist-rep-rhevh-client-0 > 2: type protocol/client > 3: option remote-host rhs-client6.lab.eng.blr.redhat.com > 4: option remote-subvolume /disk1 > 5: option transport-type tcp > 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 7: option password 1233f788-e862-447a-9353-3a50d84656ca > 8: end-volume > 9: > 10: volume dist-rep-rhevh-client-1 > 11: type protocol/client > 12: option remote-host rhs-client7.lab.eng.blr.redhat.com > 13: option remote-subvolume /disk1 > 14: option transport-type tcp > 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 16: option password 1233f788-e862-447a-9353-3a50d84656ca > 17: end-volume > 18: > 19: volume dist-rep-rhevh-client-2 > 20: type protocol/client > 21: option remote-host rhs-client8.lab.eng.blr.redhat.com > 22: option remote-subvolume /disk1 > 23: option transport-type tcp > 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 25: option password 1233f788-e862-447a-9353-3a50d84656ca > 26: end-volume > 27: > 28: volume dist-rep-rhevh-client-3 > 29: type protocol/client > 30: option remote-host rhs-client9.lab.eng.blr.redhat.com > 
31: option remote-subvolume /disk1 > 32: option transport-type tcp > 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8 > 34: option password 1233f788-e862-447a-9353-3a50d84656ca > 35: end-volume > 36: > 37: volume dist-rep-rhevh-replicate-0 > 38: type cluster/replicate > 39: option background-self-heal-count 0 > 40: option metadata-self-heal on > 41: option data-self-heal on > 42: option entry-self-heal on > 43: option self-heal-daemon on > 44: option iam-self-heal-daemon yes > 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1 > 46: end-volume > 47: > 48: volume dist-rep-rhevh-replicate-1 > 49: type cluster/replicate > 50: option background-self-heal-count 0 > 51: option metadata-self-heal on > 52: option data-self-heal on > 53: option entry-self-heal on > 54: option self-heal-daemon on > 55: option iam-self-heal-daemon yes > 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3 > 57: end-volume > 58: > 59: volume replicate-rhevh-client-0 > 60: type protocol/client > 61: option remote-host rhs-client6.lab.eng.blr.redhat.com > 62: option remote-subvolume /disk2 > 63: option transport-type tcp > 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd > 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137 > 66: end-volume > 67: > 68: volume replicate-rhevh-client-1 > 69: type protocol/client > 70: option remote-host rhs-client7.lab.eng.blr.redhat.com > 71: option remote-subvolume /disk2 > 72: option transport-type tcp > 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd > 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137 > 75: end-volume > 76: > 77: volume replicate-rhevh-replicate-0 > 78: type cluster/replicate > 79: option background-self-heal-count 0 > 80: option metadata-self-heal on > 81: option data-self-heal on > 82: option entry-self-heal on > 83: option self-heal-daemon on > 84: option eager-lock enable > 85: option iam-self-heal-daemon yes > 86: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1 > 87: 
end-volume > 88: > 89: volume glustershd > 90: type debug/io-stats > 91: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1 > 92: end-volume > >+------------------------------------------------------------------------------+ >[2012-09-26 11:23:03.903463] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0) >[2012-09-26 11:23:03.903535] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0) >[2012-09-26 11:23:03.903701] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0) >[2012-09-26 11:23:03.903756] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0) >[2012-09-26 11:23:03.903845] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0) >[2012-09-26 11:23:03.903893] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0) >[2012-09-26 11:23:07.791297] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8 >[2012-09-26 11:23:07.791356] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported >[2012-09-26 11:23:07.866561] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-26 11:23:07.866958] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'. >[2012-09-26 11:23:07.866989] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-26 11:23:07.867072] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-0' came back up; going online. 
>[2012-09-26 11:23:07.867256] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1 >[2012-09-26 11:23:07.872994] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-26 11:23:07.873300] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'. >[2012-09-26 11:23:07.873334] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-26 11:23:07.873474] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1 >[2012-09-26 11:23:07.879527] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-26 11:23:07.879875] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'. >[2012-09-26 11:23:07.879921] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-26 11:23:07.879990] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online. >[2012-09-26 11:23:07.881586] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1 >[2012-09-26 11:23:07.885530] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-26 11:23:07.885896] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'. 
>[2012-09-26 11:23:07.885928] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:23:07.886546] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:23:07.889746] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:23:07.890164] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:23:07.890198] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:23:07.890254] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-26 11:23:07.890795] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-26 11:23:07.895384] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:23:07.895775] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:23:07.895799] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:23:07.896350] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-26 11:23:52.747040] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24012)
>[2012-09-26 11:23:52.747129] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-0: disconnected
>[2012-09-26 11:23:52.748144] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24011)
>[2012-09-26 11:23:52.748229] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected
>[2012-09-26 11:24:02.907476] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:24:02.907945] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-26 11:24:02.907978] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:24:02.908599] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:24:02.912608] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:24:02.913083] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:24:02.913111] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:24:02.914013] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:31:39.812964] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.33:24011)
>[2012-09-26 11:31:39.813062] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
>[2012-09-26 11:31:49.501043] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24011)
>[2012-09-26 11:31:49.501121] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected
>[2012-09-26 11:31:49.501145] E [afr-common.c:3668:afr_notify] 0-dist-rep-rhevh-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
>[2012-09-26 11:31:49.970190] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:31:49.970507] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:31:49.970533] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:31:49.970582] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-3' came back up; going online.
>[2012-09-26 11:31:49.971433] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-26 11:31:52.373524] W [socket.c:1512:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (::1:24007)
>[2012-09-26 11:31:52.589381] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24012)
>[2012-09-26 11:31:52.589476] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-1: disconnected
>[2012-09-26 11:31:52.589679] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24011)
>[2012-09-26 11:31:52.589752] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected
>[2012-09-26 11:31:55.454647] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-26 11:31:56.460673] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-26 11:31:56.475847] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:31:56.475888] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:31:56.475904] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:31:56.475918] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:31:56.475933] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:31:56.488402] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized
>[2012-09-26 11:31:56.488485] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized
>[2012-09-26 11:31:56.488525] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 11:31:56.492732] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 11:31:56.496705] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 11:31:56.501851] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 11:31:56.505889] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-26 11:31:56.509960] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47:
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58:
> 59: volume replicate-rhevh-client-0
> 60: type protocol/client
> 61: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 62: option remote-subvolume /disk2
> 63: option transport-type tcp
> 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 66: end-volume
> 67:
> 68: volume replicate-rhevh-client-1
> 69: type protocol/client
> 70: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 71: option remote-subvolume /disk2
> 72: option transport-type tcp
> 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 75: end-volume
> 76:
> 77: volume replicate-rhevh-replicate-0
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option eager-lock enable
> 85: option iam-self-heal-daemon yes
> 86: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1
> 87: end-volume
> 88:
> 89: volume glustershd
> 90: type debug/io-stats
> 91: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 92: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-26 11:31:56.514493] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0)
>[2012-09-26 11:31:56.514569] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-26 11:31:56.514875] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-26 11:31:56.514931] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-26 11:32:00.299464] E [client-handshake.c:1695:client_query_portmap_cbk] 0-replicate-rhevh-client-0: failed to get the port number for remote subvolume
>[2012-09-26 11:32:00.299564] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-0: disconnected
>[2012-09-26 11:32:00.299604] E [client-handshake.c:1695:client_query_portmap_cbk] 0-dist-rep-rhevh-client-0: failed to get the port number for remote subvolume
>[2012-09-26 11:32:00.299659] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected
>[2012-09-26 11:32:00.412522] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-26 11:32:00.412563] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-26 11:32:00.477884] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:00.478251] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'.
>[2012-09-26 11:32:00.478325] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:00.478417] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-1' came back up; going online.
>[2012-09-26 11:32:00.478534] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1
>[2012-09-26 11:32:00.479129] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 11:32:00.482135] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:00.482384] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:00.482416] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:00.482491] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-26 11:32:00.482612] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-26 11:32:00.483109] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 11:32:00.486468] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:00.486846] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:00.486880] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:00.486958] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-3' came back up; going online.
>[2012-09-26 11:32:00.488909] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-26 11:32:00.491044] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:00.491372] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:00.491400] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:00.492005] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-26 11:32:03.495690] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0)
>[2012-09-26 11:32:03.499653] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-26 11:32:06.503987] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:06.504379] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-26 11:32:06.504404] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:06.504974] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:32:07.507900] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:07.508385] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:07.508419] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:07.508995] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:32:19.184491] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-26 11:32:20.190573] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-26 11:32:20.201124] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:32:20.201161] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:32:20.201178] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:32:20.201192] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:32:20.201216] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:32:20.213717] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized
>[2012-09-26 11:32:20.213781] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized
>[2012-09-26 11:32:20.213822] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 11:32:20.218258] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 11:32:20.222766] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 11:32:20.227013] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 11:32:20.231125] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-26 11:32:20.235142] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47:
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58:
> 59: volume replicate-rhevh-client-0
> 60: type protocol/client
> 61: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 62: option remote-subvolume /disk2
> 63: option transport-type tcp
> 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 66: end-volume
> 67:
> 68: volume replicate-rhevh-client-1
> 69: type protocol/client
> 70: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 71: option remote-subvolume /disk2
> 72: option transport-type tcp
> 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 75: end-volume
> 76:
> 77: volume replicate-rhevh-replicate-0
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option eager-lock enable
> 85: option iam-self-heal-daemon yes
> 86: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1
> 87: end-volume
> 88:
> 89: volume glustershd
> 90: type debug/io-stats
> 91: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 92: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-26 11:32:20.239723] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0)
>[2012-09-26 11:32:20.239887] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0)
>[2012-09-26 11:32:20.239948] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-26 11:32:20.240001] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-26 11:32:20.240045] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-26 11:32:20.240122] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-26 11:32:23.416082] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-26 11:32:23.416138] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-26 11:32:24.203924] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:24.204335] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-26 11:32:24.204369] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:24.204447] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-0' came back up; going online.
>[2012-09-26 11:32:24.204756] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:32:24.207959] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:24.208313] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'.
>[2012-09-26 11:32:24.208349] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:24.208518] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1
>[2012-09-26 11:32:24.211935] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:24.212164] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:24.212208] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:24.212279] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-26 11:32:24.213903] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-26 11:32:24.215199] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-dist-rep-rhevh-replicate-0: open of <gfid:700b5d64-2a08-4376-ba9d-5250d2f8d9d7> failed on child dist-rep-rhevh-client-0 (Transport endpoint is not connected)
>[2012-09-26 11:32:24.216016] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:24.216381] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:24.216416] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:24.216492] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-26 11:32:24.216749] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-26 11:32:24.220634] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:24.221038] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:24.221065] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:24.222987] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:32:24.223668] W [client3_1-fops.c:2141:client3_1_rchecksum_cbk] 0-replicate-rhevh-client-1: remote operation failed: Invalid argument
>[2012-09-26 11:32:24.223700] E [afr-self-heal-algorithm.c:609:sh_diff_checksum_cbk] 0-replicate-rhevh-replicate-0: checksum on <gfid:72ec447b-a6f0-4e99-935f-f381c6003587> failed on subvolume replicate-rhevh-client-1 (Invalid argument)
>[2012-09-26 11:32:24.225211] W [client3_1-fops.c:2141:client3_1_rchecksum_cbk] 0-replicate-rhevh-client-0: remote operation failed: Invalid argument
>[2012-09-26 11:32:24.225236] E [afr-self-heal-algorithm.c:609:sh_diff_checksum_cbk] 0-replicate-rhevh-replicate-0: checksum on <gfid:72ec447b-a6f0-4e99-935f-f381c6003587> failed on subvolume replicate-rhevh-client-0 (Invalid argument)
>[2012-09-26 11:32:24.225482] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:32:24.225778] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:32:24.225804] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:32:24.226615] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-26 11:34:00.541879] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-26 11:34:01.547844] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-26 11:34:01.558262] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:34:01.558303] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:34:01.558328] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:34:01.558343] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:34:01.558356] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 11:34:01.571312] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized
>[2012-09-26 11:34:01.571352] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized
>[2012-09-26 11:34:01.571390] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 11:34:01.575979] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 11:34:01.580058] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 11:34:01.583989] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 11:34:01.588166] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-26 11:34:01.592183] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47:
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58:
> 59: volume replicate-rhevh-client-0
> 60: type protocol/client
> 61: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 62: option remote-subvolume /disk2
> 63: option transport-type tcp
> 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 66: end-volume
> 67:
> 68: volume replicate-rhevh-client-1
> 69: type protocol/client
> 70: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 71: option remote-subvolume /disk2
> 72: option transport-type tcp
> 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 75: end-volume
> 76:
> 77: volume replicate-rhevh-replicate-0
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option eager-lock enable
> 85: option iam-self-heal-daemon yes
> 86: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1
> 87: end-volume
> 88:
> 89: volume glustershd
> 90: type debug/io-stats
> 91: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 92: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-26 11:34:01.597097] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-26 11:34:01.597186] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0)
>[2012-09-26 11:34:01.597326] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-26 11:34:01.597390] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0)
>[2012-09-26 11:34:01.597466] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-26 11:34:01.597530] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-26 11:34:05.431358] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-26 11:34:05.431400] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-26 11:34:05.560782] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:34:05.561045] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:34:05.561075] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:34:05.561151] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-26 11:34:05.561265] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-26 11:34:05.563029] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-dist-rep-rhevh-replicate-0: open of <gfid:700b5d64-2a08-4376-ba9d-5250d2f8d9d7> failed on child dist-rep-rhevh-client-0 (Transport endpoint is not connected)
>[2012-09-26 11:34:05.564799] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:34:05.565054] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'.
>[2012-09-26 11:34:05.565084] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:34:05.565139] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-1' came back up; going online.
>[2012-09-26 11:34:05.565248] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1
>[2012-09-26 11:34:05.569112] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:34:05.569438] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:34:05.569471] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:34:05.569544] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-26 11:34:05.571483] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-26 11:34:05.573689] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:34:05.573948] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-26 11:34:05.573992] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:34:05.574635] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:34:05.579519] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:34:05.579875] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:34:05.579907] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:34:05.580524] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-26 11:34:05.583496] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 11:34:05.583734] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-26 11:34:05.583758] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 11:34:05.583938] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 11:42:22.655214] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-26 11:42:23.689436] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-26 11:42:23.692798] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-26 11:44:05.673837] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-dist-rep-rhevh-replicate-0: Unable to self-heal contents of '<gfid:700b5d64-2a08-4376-ba9d-5250d2f8d9d7>' (possible split-brain). Please delete the file from all but the preferred subvolume.
>[2012-09-26 11:45:38.366448] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-26 11:45:39.396022] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-26 11:45:39.398474] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-26 11:54:05.756113] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-dist-rep-rhevh-replicate-0: Unable to self-heal contents of '<gfid:700b5d64-2a08-4376-ba9d-5250d2f8d9d7>' (possible split-brain). Please delete the file from all but the preferred subvolume.
>[2012-09-26 11:54:05.758228] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-rhevh-replicate-0: no active sinks for performing self-heal on file <gfid:fb11285c-fc6d-465f-b231-f144792edaed>
>[2012-09-26 11:54:09.909674] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-26 11:54:09.920874] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-26 11:54:16.876718] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-26 11:54:17.899305] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-26 11:54:17.901269] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-26 12:03:32.953838] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-26 12:03:33.959941] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-26 12:03:33.971505] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 12:03:33.971543] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 12:03:33.971562] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 12:03:33.971577] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 12:03:33.971590] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 12:03:33.984470] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized
>[2012-09-26 12:03:33.984549] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized
>[2012-09-26 12:03:33.984590] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 12:03:33.989143] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 12:03:33.993213] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 12:03:33.997110] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 12:03:34.001050] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-26 12:03:34.005231] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47:
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58:
> 59: volume replicate-rhevh-client-0
> 60: type protocol/client
> 61: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 62: option remote-subvolume /disk2
> 63: option transport-type tcp
> 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 66: end-volume
> 67:
> 68: volume replicate-rhevh-client-1
> 69: type protocol/client
> 70: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 71: option remote-subvolume /disk2
> 72: option transport-type tcp
> 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 75: end-volume
> 76:
> 77: volume replicate-rhevh-replicate-0
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option iam-self-heal-daemon yes
> 85: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1
> 86: end-volume
> 87:
> 88: volume glustershd
> 89: type debug/io-stats
> 90: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 91: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-26 12:03:34.009599] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0)
>[2012-09-26 12:03:34.009738] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0)
>[2012-09-26 12:03:34.009792] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-26 12:03:34.009905] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-26 12:03:34.009946] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-26 12:03:34.010080] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-26 12:03:37.673888] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-26 12:03:37.673941] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-26 12:03:37.975199] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 12:03:37.975483] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'.
>[2012-09-26 12:03:37.975516] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 12:03:37.975585] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-1' came back up; going online.
>[2012-09-26 12:03:37.975673] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1
>[2012-09-26 12:03:37.981508] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 12:03:37.981879] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-26 12:03:37.981908] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 12:03:37.982156] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-26 12:03:37.988052] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 12:03:37.988401] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-26 12:03:37.988447] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 12:03:37.988521] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-26 12:03:37.990148] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-26 12:03:37.991960] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-dist-rep-rhevh-replicate-0: open of <gfid:700b5d64-2a08-4376-ba9d-5250d2f8d9d7> failed on child dist-rep-rhevh-client-0 (Transport endpoint is not connected)
>[2012-09-26 12:03:37.992894] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-dist-rep-rhevh-replicate-0: open of <gfid:fb11285c-fc6d-465f-b231-f144792edaed> failed on child dist-rep-rhevh-client-0 (Transport endpoint is not connected)
>[2012-09-26 12:03:37.994679] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 12:03:37.995057] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-26 12:03:37.995082] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 12:03:37.995141] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-26 12:03:37.995967] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-26 12:03:38.000643] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 12:03:38.000977] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-26 12:03:38.001011] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 12:03:38.001894] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 12:03:38.004780] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 12:03:38.005162] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-26 12:03:38.005196] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 12:03:38.006077] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-26 16:02:35.982118] W [socket.c:195:__socket_rwv] 0-replicate-rhevh-client-0: readv failed (Connection timed out)
>[2012-09-26 16:02:35.995690] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh-client-0: reading from socket failed. Error (Connection timed out), peer (10.70.36.30:24012)
>[2012-09-26 16:02:35.995749] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-0: disconnected
>[2012-09-26 16:02:36.531186] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-0: readv failed (Connection timed out)
>[2012-09-26 16:02:36.531229] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Connection timed out), peer (10.70.36.30:24011)
>[2012-09-26 16:02:36.531261] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected
>[2012-09-26 16:02:47.875125] E [socket.c:1715:socket_connect_finish] 0-dist-rep-rhevh-client-0: connection to 10.70.36.30:24011 failed (No route to host)
>[2012-09-26 16:02:55.996143] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-2: readv failed (Connection timed out)
>[2012-09-26 16:02:55.996192] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Connection timed out), peer (10.70.36.32:24011)
>[2012-09-26 16:02:55.996226] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected
>[2012-09-26 16:03:07.139094] E [socket.c:1715:socket_connect_finish] 0-replicate-rhevh-client-0: connection to 10.70.36.30:24012 failed (Connection timed out)
>[2012-09-26 16:03:07.821070] E [socket.c:1715:socket_connect_finish] 0-dist-rep-rhevh-client-2: connection to 10.70.36.32:24011 failed (No route to host)
>[2012-09-26 16:03:40.224183] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:03:40.225693] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:13:40.300524] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:13:40.302096] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:23:40.335594] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:23:40.336343] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:33:40.382006] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:33:40.383600] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:43:40.452524] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:43:40.454180] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-26 16:45:14.621240] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 16:45:14.621662] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-26 16:45:14.621698] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 16:45:14.622577] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-26 16:45:15.625429] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 16:45:15.625838] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-26 16:45:15.625863] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 16:45:15.626753] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 16:46:50.835220] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:784b8e28-1746-4c9c-b9d2-8e91a3c0ffd8> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.835559] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:32a82617-0f8a-4539-8cd7-3df296f7058f> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.835788] E [afr-self-heald.c:685:_link_inode_update_loc] 0-dist-rep-rhevh-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
>[2012-09-26 16:46:50.836286] E [afr-self-heald.c:685:_link_inode_update_loc] 0-dist-rep-rhevh-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
>[2012-09-26 16:46:50.836783] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:382ba545-6c58-4ff7-a1fc-62bf6b95b5a9> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.837016] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:01760070-45f6-4883-b8a4-0e50c6db1a8e> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.837319] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:871a0603-e18f-4407-ad16-e0911acde1d4> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.837580] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4e6eae97-360f-4660-94d5-7006620be7cc> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.837817] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a4b75ab5-3e63-4006-bfbb-fdbb6c16c353> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.838065] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e364e3c6-48aa-442c-b08a-949758c03323> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.838293] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:7fb8a645-148e-451c-8cc8-fe99b5cebe91> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.838514] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:ea9970d3-ff6f-480e-bb3b-905842539028> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.838731] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:d4035238-403b-47be-8a28-242a4b9e5d2d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.838947] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:0d1c9935-d7d0-4880-9034-6ee1f63587a4> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.839163] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:91fbcb48-d222-4d4a-a7c1-a373c343fffc> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.839377] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:3079f8de-0293-4b72-8b55-f16dd30009ee> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.839595] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:27dccc32-bd3d-4b30-9359-c10eacf0753f> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.839795] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a29e0c10-367f-42df-b8e8-ac1de9f22e03> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.839993] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:bd8235e0-83d8-42f3-a2c0-1ddde4719813> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.840190] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:3b911d40-ac4c-4154-8aa3-5d58114f9c1d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.840392] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:f94f703f-ec42-4889-92ea-7a64f1b4f6ba> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.840584] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:0a70837c-3fd4-4a20-bd2d-ca433d14dde7> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.840778] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e89ebe5c-57d8-480f-8d30-8c9c9f8a0c6d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.840967] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4c6165c4-0934-4b42-a893-eb9c66bf590b> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.841158] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:eff1fa56-8d83-4ca2-a0e0-8dca15d2dfcb> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.841361] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:cd7eb4e0-f14e-447d-bf4d-c4b27095fa27> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.841545] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:6905986c-bd8c-48b0-8bf9-5693d71e2ffb> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.841739] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:23cde0cc-f0f2-4183-9ca0-9ef500d3a971> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.841938] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:7e81eddc-4d59-4a53-ae1e-6009032b3d4c> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.842138] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:b2c46935-773d-4077-a735-5ee444002fa3> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.842335] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:04c7442a-4029-47fa-84fe-b44f1cbcd823> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.842527] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:5a6ed272-6813-455d-8bfb-3a1c7c5026e8> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.842718] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:411a3f2d-3208-4eff-9418-824e4f8102fe> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.842908] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:346723c5-2814-4b34-8f49-1aed20c1a66e> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.843093] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:dbd5a06f-a5cf-44fd-abfc-d27f99c649ce> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.843300] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:287c07c6-8c54-4529-be01-f28d161e78f3> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.843502] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:b52fbab3-206f-4bc7-9e7d-3ef78ec7b801> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.843712] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e64d725d-70b8-4c38-b929-212dda6ddadd> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.843907] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:008c2e9c-3fc4-4a9a-ac56-b6afa7b48f76> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.844102] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:6103b85a-dddb-4928-8040-9c76aa53a47f> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.844309] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:dbff056d-cb7e-4921-b1f4-22c58a43a9d7> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.844509] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:1d562a23-5d81-4323-b9c0-b2c4d053b9c7> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.844703] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:2064ef58-56d9-473a-a36d-f961df2dcd04> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.844902] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:583551af-edc1-4ef8-9960-fde6d511b2c1> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-26 16:46:50.845090] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:c9e9a0e1-bcb5-46b6-aa90-13b32cd67312> (00000000-0000-0000-0000-000000000000).
Key: glusterfs.gfid2path >[2012-09-26 16:46:50.845287] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:04f2a84c-46f4-4e4e-b768-69803562bcf0> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.845476] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4eed8871-bb21-452d-9865-fd9715d73dfe> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.845664] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:024d878b-fe8d-4c8c-adb9-88e2e0b6bd92> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.845857] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:782affbc-913e-4075-9151-7538af93aadd> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.846048] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e246725e-3c98-455e-88ce-bb61954da472> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.846233] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:fea3e55d-b1a1-4c5a-b691-b764c696c121> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.846430] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:66da5186-89b7-4f38-9214-f7ed2617134c> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:46:50.846620] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:fe9bf94a-8e7b-48b6-ba09-1e1f6f3c9eba> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.846805] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:5d91c0ac-6d28-44cf-a290-110865723132> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.846989] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:f0f790d9-41d9-4fbd-b48e-941d13ec6f2a> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.847177] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:47983946-6474-47c1-8554-fcd869eb2e5d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.847359] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:0a3f2e8c-99db-46f4-a086-3e1456ced498> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.847542] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a2530a30-81fc-41f6-8cfe-a3a152f01c0e> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.847732] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:8480b019-2de3-4912-afb3-e7ca9163d6fb> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:46:50.847931] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:16f458b1-1966-4ace-956b-dd4107125aa2> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.848129] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:1772c441-d15b-43de-ba96-a318bf829710> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.848318] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:432392cc-41ba-472e-bdf2-1d68b419509c> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.848505] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:40f1c08e-0002-4540-87ea-b08d7b561062> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.848689] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:ef75027a-a0e7-41ce-b509-5ee2dc41b78b> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.848870] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:6c5aa6a1-f136-454b-9d2d-45454183bd53> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.849054] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a6c9259d-1be5-4ade-8d43-cf48143feff7> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:46:50.849241] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4cd1302f-bcb5-45c2-9896-3797690cc5d4> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.849421] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:9f573719-f099-4717-8147-b9adc463b4be> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:46:50.849610] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4104e74f-163d-4c5b-803e-7195dffb2a59> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:47:22.713883] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-26 16:47:22.714345] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'. >[2012-09-26 16:47:22.714382] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-26 16:47:22.715275] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1 >[2012-09-26 16:51:53.651187] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:784b8e28-1746-4c9c-b9d2-8e91a3c0ffd8> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.651469] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. 
Path: <gfid:32a82617-0f8a-4539-8cd7-3df296f7058f> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.651705] E [afr-self-heald.c:685:_link_inode_update_loc] 0-dist-rep-rhevh-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000) >[2012-09-26 16:51:53.652154] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:382ba545-6c58-4ff7-a1fc-62bf6b95b5a9> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.652400] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:01760070-45f6-4883-b8a4-0e50c6db1a8e> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.652624] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:871a0603-e18f-4407-ad16-e0911acde1d4> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.652828] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4e6eae97-360f-4660-94d5-7006620be7cc> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.653011] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a4b75ab5-3e63-4006-bfbb-fdbb6c16c353> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.653203] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e364e3c6-48aa-442c-b08a-949758c03323> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.653407] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:7fb8a645-148e-451c-8cc8-fe99b5cebe91> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.653605] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:ea9970d3-ff6f-480e-bb3b-905842539028> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.653798] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:d4035238-403b-47be-8a28-242a4b9e5d2d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.653978] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:0d1c9935-d7d0-4880-9034-6ee1f63587a4> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.654165] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:91fbcb48-d222-4d4a-a7c1-a373c343fffc> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.654363] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:3079f8de-0293-4b72-8b55-f16dd30009ee> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.654562] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:27dccc32-bd3d-4b30-9359-c10eacf0753f> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.654744] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a29e0c10-367f-42df-b8e8-ac1de9f22e03> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.654929] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:bd8235e0-83d8-42f3-a2c0-1ddde4719813> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.655118] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:3b911d40-ac4c-4154-8aa3-5d58114f9c1d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.655312] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:f94f703f-ec42-4889-92ea-7a64f1b4f6ba> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.655508] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:0a70837c-3fd4-4a20-bd2d-ca433d14dde7> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.655707] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e89ebe5c-57d8-480f-8d30-8c9c9f8a0c6d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.655888] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4c6165c4-0934-4b42-a893-eb9c66bf590b> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.656073] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:eff1fa56-8d83-4ca2-a0e0-8dca15d2dfcb> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.656262] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:cd7eb4e0-f14e-447d-bf4d-c4b27095fa27> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.656449] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:6905986c-bd8c-48b0-8bf9-5693d71e2ffb> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.656627] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:23cde0cc-f0f2-4183-9ca0-9ef500d3a971> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.656804] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:7e81eddc-4d59-4a53-ae1e-6009032b3d4c> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.656980] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:b2c46935-773d-4077-a735-5ee444002fa3> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.657167] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:04c7442a-4029-47fa-84fe-b44f1cbcd823> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.657363] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:5a6ed272-6813-455d-8bfb-3a1c7c5026e8> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.657563] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:411a3f2d-3208-4eff-9418-824e4f8102fe> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.657749] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:346723c5-2814-4b34-8f49-1aed20c1a66e> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.657927] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:dbd5a06f-a5cf-44fd-abfc-d27f99c649ce> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.658120] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:287c07c6-8c54-4529-be01-f28d161e78f3> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.658334] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:b52fbab3-206f-4bc7-9e7d-3ef78ec7b801> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.658530] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e64d725d-70b8-4c38-b929-212dda6ddadd> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.658731] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:008c2e9c-3fc4-4a9a-ac56-b6afa7b48f76> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.658912] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:6103b85a-dddb-4928-8040-9c76aa53a47f> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.659121] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:dbff056d-cb7e-4921-b1f4-22c58a43a9d7> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.659336] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:1d562a23-5d81-4323-b9c0-b2c4d053b9c7> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.659536] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:2064ef58-56d9-473a-a36d-f961df2dcd04> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.659733] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:583551af-edc1-4ef8-9960-fde6d511b2c1> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.659914] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:c9e9a0e1-bcb5-46b6-aa90-13b32cd67312> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.660095] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:04f2a84c-46f4-4e4e-b768-69803562bcf0> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.660288] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4eed8871-bb21-452d-9865-fd9715d73dfe> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.660468] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:024d878b-fe8d-4c8c-adb9-88e2e0b6bd92> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.660651] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:782affbc-913e-4075-9151-7538af93aadd> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.660828] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:e246725e-3c98-455e-88ce-bb61954da472> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.661005] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:fea3e55d-b1a1-4c5a-b691-b764c696c121> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.661187] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:66da5186-89b7-4f38-9214-f7ed2617134c> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.661386] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:fe9bf94a-8e7b-48b6-ba09-1e1f6f3c9eba> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.661588] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:5d91c0ac-6d28-44cf-a290-110865723132> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.661768] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:f0f790d9-41d9-4fbd-b48e-941d13ec6f2a> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.661945] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:47983946-6474-47c1-8554-fcd869eb2e5d> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.662138] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:0a3f2e8c-99db-46f4-a086-3e1456ced498> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.662340] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a2530a30-81fc-41f6-8cfe-a3a152f01c0e> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.662546] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:8480b019-2de3-4912-afb3-e7ca9163d6fb> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.662748] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:16f458b1-1966-4ace-956b-dd4107125aa2> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.662928] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:1772c441-d15b-43de-ba96-a318bf829710> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.663113] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:432392cc-41ba-472e-bdf2-1d68b419509c> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.663308] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:40f1c08e-0002-4540-87ea-b08d7b561062> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.663511] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:ef75027a-a0e7-41ce-b509-5ee2dc41b78b> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.663708] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:6c5aa6a1-f136-454b-9d2d-45454183bd53> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.663889] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:a6c9259d-1be5-4ade-8d43-cf48143feff7> (00000000-0000-0000-0000-000000000000). 
Key: glusterfs.gfid2path >[2012-09-26 16:51:53.664073] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4cd1302f-bcb5-45c2-9896-3797690cc5d4> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.664253] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:9f573719-f099-4717-8147-b9adc463b4be> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:51:53.664444] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No such file or directory. Path: <gfid:4104e74f-163d-4c5b-803e-7195dffb2a59> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path >[2012-09-26 16:53:40.760892] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory >[2012-09-26 16:53:40.761631] W [client3_1-fops.c:473:client3_1_open_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory. Path: <gfid:3117cd8c-a6a6-45ee-ac66-54c261a8c172> (00000000-0000-0000-0000-000000000000) >[2012-09-26 16:53:40.761673] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-replicate-rhevh-replicate-0: open of <gfid:3117cd8c-a6a6-45ee-ac66-54c261a8c172> failed on child replicate-rhevh-client-0 (No such file or directory) >[2012-09-26 16:53:40.763633] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory >[2012-09-26 16:53:40.764182] W [client3_1-fops.c:1650:client3_1_entrylk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory >[2012-09-26 16:53:40.780090] E [afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk] 0-replicate-rhevh-replicate-0: Non Blocking entrylks failed for <gfid:15bd1717-9618-48a4-a763-893052375663>. 
>[2012-09-26 16:53:40.781376] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.782207] W [client3_1-fops.c:1650:client3_1_entrylk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.782432] E [afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk] 0-replicate-rhevh-replicate-0: Non Blocking entrylks failed for <gfid:7c52e348-b9f2-4aca-ae87-3d53d29b48f7>.
>[2012-09-26 16:53:40.798404] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.798887] W [client3_1-fops.c:1650:client3_1_entrylk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.799030] E [afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk] 0-replicate-rhevh-replicate-0: Non Blocking entrylks failed for <gfid:7aa29745-4113-4a3c-bfe4-da5f70d15446>.
>[2012-09-26 16:53:40.800423] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.800893] W [client3_1-fops.c:473:client3_1_open_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory. Path: <gfid:8d53e4a7-803a-4845-9c93-4704911ed1ac> (00000000-0000-0000-0000-000000000000)
>[2012-09-26 16:53:40.800939] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-replicate-rhevh-replicate-0: open of <gfid:8d53e4a7-803a-4845-9c93-4704911ed1ac> failed on child replicate-rhevh-client-0 (No such file or directory)
>[2012-09-26 16:53:40.805746] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.806198] W [client3_1-fops.c:1650:client3_1_entrylk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.806355] E [afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk] 0-replicate-rhevh-replicate-0: Non Blocking entrylks failed for <gfid:26aec955-2790-43e7-a7fa-9e031fd61b4e>.
>[2012-09-26 16:53:40.817569] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 16:53:40.817944] W [client3_1-fops.c:473:client3_1_open_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory. Path: <gfid:f1304fac-4d94-4886-b203-0f03f109d146> (00000000-0000-0000-0000-000000000000)
>[2012-09-26 16:53:40.817971] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-replicate-rhevh-replicate-0: open of <gfid:f1304fac-4d94-4886-b203-0f03f109d146> failed on child replicate-rhevh-client-0 (No such file or directory)
>[2012-09-26 17:03:40.901214] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 17:03:40.901655] W [client3_1-fops.c:1650:client3_1_entrylk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 17:03:40.901800] E [afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk] 0-replicate-rhevh-replicate-0: Non Blocking entrylks failed for <gfid:15bd1717-9618-48a4-a763-893052375663>.
>[2012-09-26 17:03:40.902563] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 17:03:40.902921] W [client3_1-fops.c:1650:client3_1_entrylk_cbk] 0-replicate-rhevh-client-0: remote operation failed: No such file or directory
>[2012-09-26 17:03:40.903064] E [afr-self-heal-entry.c:2376:afr_sh_post_nonblocking_entry_cbk] 0-replicate-rhevh-replicate-0: Non Blocking entrylks failed for <gfid:7c52e348-b9f2-4aca-ae87-3d53d29b48f7>.
>[2012-09-26 18:14:45.648951] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-26 18:14:45.652770] E [rpcsvc.c:1155:rpcsvc_program_unregister_portmap] 0-rpc-service: Could not unregister with portmap
>[2012-09-26 18:50:37.312703] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-26 18:50:37.589004] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 18:50:37.589037] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 18:50:37.589056] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 18:50:37.589070] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 18:50:37.589084] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-26 18:50:37.600671] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized
>[2012-09-26 18:50:37.600730] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized
>[2012-09-26 18:50:37.600766] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 18:50:37.605128] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 18:50:37.611324] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-26 18:50:37.615235] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-26 18:50:37.619290] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-26 18:50:37.623219] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47:
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58:
> 59: volume replicate-rhevh-client-0
> 60: type protocol/client
> 61: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 62: option remote-subvolume /disk2
> 63: option transport-type tcp
> 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 66: end-volume
> 67:
> 68: volume replicate-rhevh-client-1
> 69: type protocol/client
> 70: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 71: option remote-subvolume /disk2
> 72: option transport-type tcp
> 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 75: end-volume
> 76:
> 77: volume replicate-rhevh-replicate-0
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option iam-self-heal-daemon yes
> 85: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1
> 86: end-volume
> 87:
> 88: volume glustershd
> 89: type debug/io-stats
> 90: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 91: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-26 18:50:37.627830] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0)
>[2012-09-26 18:50:37.627895] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0)
>[2012-09-26 18:50:37.628097] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-26 18:50:37.628161] E [client-handshake.c:1695:client_query_portmap_cbk] 0-dist-rep-rhevh-client-3: failed to get the port number for remote subvolume
>[2012-09-26 18:50:37.628226] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
>[2012-09-26 18:50:37.628282] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-26 18:50:37.628326] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-26 18:50:41.180944] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-26 18:50:41.180989] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-26 18:50:41.355565] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 18:50:41.355946] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-26 18:50:41.355978] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 18:50:41.356050] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-0' came back up; going online.
>[2012-09-26 18:50:41.356195] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-26 18:50:41.359612] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 18:50:41.359827] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'.
>[2012-09-26 18:50:41.359853] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 18:50:41.359995] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1
>[2012-09-26 18:50:41.364040] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 18:50:41.364304] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-26 18:50:41.364340] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 18:50:41.364410] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-26 18:50:41.366025] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-26 18:50:41.368574] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-26 18:50:41.372626] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 18:50:41.372969] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-26 18:50:41.373019] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 18:50:41.373096] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-26 18:50:41.373674] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-26 18:50:41.376911] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 18:50:41.377499] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-26 18:50:41.377530] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 18:50:41.378121] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-26 18:50:44.381814] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-26 18:50:44.382131] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-26 18:50:44.382161] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-26 18:50:44.382721] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-26 18:53:48.035614] I [afr-self-heald.c:1082:afr_dir_exclusive_crawl] 0-dist-rep-rhevh-replicate-0: Another crawl is in progress for dist-rep-rhevh-client-1
>[2012-09-26 19:00:41.446383] I [afr-self-heald.c:1082:afr_dir_exclusive_crawl] 0-dist-rep-rhevh-replicate-0: Another crawl is in progress for dist-rep-rhevh-client-1
>[2012-09-26 19:01:32.301938] I [afr-self-heald.c:1082:afr_dir_exclusive_crawl] 0-dist-rep-rhevh-replicate-0: Another crawl is in progress for dist-rep-rhevh-client-1
>[2012-09-26 19:10:41.511496] I [afr-self-heald.c:1082:afr_dir_exclusive_crawl] 0-dist-rep-rhevh-replicate-0: Another crawl is in progress for dist-rep-rhevh-client-1
>[2012-09-26 19:20:41.571530] I [afr-self-heald.c:1082:afr_dir_exclusive_crawl] 0-dist-rep-rhevh-replicate-0: Another crawl is in progress for dist-rep-rhevh-client-1
>[2012-09-26 19:23:35.778728] W [client3_1-fops.c:876:client3_1_writev_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No space left on device
>[2012-09-26 19:23:35.778781] E [afr-self-heal-algorithm.c:434:sh_loop_write_cbk] 0-dist-rep-rhevh-replicate-0: write to /1676787f-383a-481e-8c2a-04589b7b8d72/images/40717d4a-9bb9-471a-bcbb-382ae4803075/1b8642da-647e-4933-bc1f-c548fd9438b4 failed on subvolume dist-rep-rhevh-client-1 (No space left on device)
>[2012-09-26 19:23:35.780989] W [client3_1-fops.c:1047:client3_1_setxattr_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: No space left on device
>[2012-09-26 19:23:35.781028] I [afr-self-heal-metadata.c:236:afr_sh_metadata_sync_cbk] 0-dist-rep-rhevh-replicate-0: setting attributes failed for /1676787f-383a-481e-8c2a-04589b7b8d72/images/407
>[2012-09-27 11:40:00.266767] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 11:40:01.385514] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 11:40:01.417619] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-27 11:40:15.626506] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 11:40:16.734997] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 11:40:16.736828] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-27 11:40:35.804071] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-27 11:40:36.810209] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-27 11:40:36.820897] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 11:40:36.820942] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 11:40:36.820961] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 11:40:36.820976] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 11:40:36.820990] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 11:40:36.835322] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized
>[2012-09-27 11:40:36.835410] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized
>[2012-09-27 11:40:36.835450] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-27 11:40:36.840059] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-27 11:40:36.844295] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-27 11:40:36.848572] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-27 11:40:36.852745] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-27 11:40:36.856815] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47:
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58:
> 59: volume replicate-rhevh-client-0
> 60: type protocol/client
> 61: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 62: option remote-subvolume /disk2
> 63: option transport-type tcp
> 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 66: end-volume
> 67:
> 68: volume replicate-rhevh-client-1
> 69: type protocol/client
> 70: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 71: option remote-subvolume /disk2
> 72: option transport-type tcp
> 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 75: end-volume
> 76:
> 77: volume replicate-rhevh-replicate-0
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option iam-self-heal-daemon yes
> 85: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1
> 86: end-volume
> 87:
> 88: volume glustershd
> 89: type debug/io-stats
> 90: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 91: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-27 11:40:36.861625] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0)
>[2012-09-27 11:40:36.861698] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-27 11:40:36.861757] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0)
>[2012-09-27 11:40:36.861844] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-27 11:40:36.861920] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-27 11:40:36.861960] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-27 11:40:40.711798] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-27 11:40:40.711858] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-27 11:40:40.823625] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 11:40:40.823945] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'.
>[2012-09-27 11:40:40.823978] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 11:40:40.824037] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-0' came back up; going online.
>[2012-09-27 11:40:40.824165] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1
>[2012-09-27 11:40:40.827661] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 11:40:40.828045] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-27 11:40:40.828080] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 11:40:40.828138] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-0' came back up; going online.
>[2012-09-27 11:40:40.828387] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-27 11:40:40.833551] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 11:40:40.833795] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'.
>[2012-09-27 11:40:40.833830] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 11:40:40.835504] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1
>[2012-09-27 11:40:40.837829] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 11:40:40.838155] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-27 11:40:40.838189] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 11:40:40.838761] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-27 11:40:40.842125] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 11:40:40.842522] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-27 11:40:40.842558] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 11:40:40.842635] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-27 11:40:40.843189] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-27 11:40:40.846221] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 11:40:40.846580] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-27 11:40:40.846603] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 11:40:40.847168] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-27 11:41:32.537415] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 11:41:33.563788] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 11:41:33.566389] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-27 11:46:19.491388] W [client3_1-fops.c:876:client3_1_writev_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: Invalid argument
>[2012-09-27 11:46:19.491429] E [afr-self-heal-algorithm.c:434:sh_loop_write_cbk] 0-dist-rep-rhevh-replicate-0: write to /1676787f-383a-481e-8c2a-04589b7b8d72/images/f23735c3-6693-463d-8851-d2ca43f6a5fc/dff698e9-207f-488c-9f6b-5f336980baae.meta failed on subvolume dist-rep-rhevh-client-1 (Invalid argument)
>[2012-09-27 11:46:19.502081] W [client3_1-fops.c:876:client3_1_writev_cbk] 0-dist-rep-rhevh-client-1: remote operation failed: Invalid argument
>[2012-09-27 11:46:19.502117] E [afr-self-heal-algorithm.c:434:sh_loop_write_cbk] 0-dist-rep-rhevh-replicate-0: write to /1676787f-383a-481e-8c2a-04589b7b8d72/master/vms/7fe86c70-7aef-40bc-bee4-5acefd6c12ec/7fe86c70-7aef-40bc-bee4-5acefd6c12ec.ovf failed on subvolume dist-rep-rhevh-client-1 (Invalid argument)
>[2012-09-27 12:40:41.281456] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-rhevh-replicate-0: no active sinks for performing self-heal on file <gfid:5dd06050-9f23-47fd-9d4f-9776e89502de>
>[2012-09-27 12:50:41.355157] I [afr-self-heal-data.c:712:afr_sh_data_fix] 0-dist-rep-rhevh-replicate-0: no active sinks for performing self-heal on file <gfid:5dd06050-9f23-47fd-9d4f-9776e89502de>
>[2012-09-27 12:58:36.381888] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 12:58:37.416712] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 12:58:37.418489] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-27 13:00:45.460312] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-27 13:00:46.466236] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-27 13:00:46.476499] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 13:00:46.476535] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-1: adding option 'node-uuid' for volume 'replicate-rhevh-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 13:00:46.476554] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh-client-0: adding option 'node-uuid' for volume 'replicate-rhevh-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 13:00:46.476569] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 13:00:46.476583] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 13:00:46.489397] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-1: option 'node-uuid' is not recognized
>[2012-09-27 13:00:46.489476] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh-client-0: option 'node-uuid' is not recognized
>[2012-09-27 13:00:46.489514] I [client.c:2142:notify] 0-replicate-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-27 13:00:46.493981] I [client.c:2142:notify] 0-replicate-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-27 13:00:46.497968] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-27 13:00:46.501975] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-27 13:00:46.506037] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-27 13:00:46.510007] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47:
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58:
> 59: volume replicate-rhevh-client-0
> 60: type protocol/client
> 61: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 62: option remote-subvolume /disk2
> 63: option transport-type tcp
> 64: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 65: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 66: end-volume
> 67:
> 68: volume replicate-rhevh-client-1
> 69: type protocol/client
> 70: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 71: option remote-subvolume /disk2
> 72: option transport-type tcp
> 73: option username 49b024b6-86a6-428c-b173-c88ac0d75afd
> 74: option password f02532e9-4a16-4eb1-b7e2-3782a35d3137
> 75: end-volume
> 76:
> 77: volume replicate-rhevh-replicate-0
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option iam-self-heal-daemon yes
> 85: subvolumes replicate-rhevh-client-0 replicate-rhevh-client-1
> 86: end-volume
> 87:
> 88: volume glustershd
> 89: type debug/io-stats
> 90: subvolumes replicate-rhevh-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 91: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-27 13:00:46.514552] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-0: changing port to 24012 (from 0)
>[2012-09-27 13:00:46.514625] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh-client-1: changing port to 24012 (from 0)
>[2012-09-27 13:00:46.514790] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-27 13:00:46.514878] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-27 13:00:46.514928] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-27 13:00:46.514990] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-3: changing port to 24011 (from 0)
>[2012-09-27 13:00:50.350262] W [socket.c:410:__socket_keepalive]
0-socket: failed to set keep idle on socket 8 >[2012-09-27 13:00:50.350313] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported >[2012-09-27 13:00:50.479209] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-27 13:00:50.479489] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-0: Connected to 10.70.36.30:24012, attached to remote volume '/disk2'. >[2012-09-27 13:00:50.479515] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds >[2012-09-27 13:00:50.479567] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh-replicate-0: Subvolume 'replicate-rhevh-client-0' came back up; going online. >[2012-09-27 13:00:50.479666] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-0: Server lk version = 1 >[2012-09-27 13:00:50.483157] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330) >[2012-09-27 13:00:50.483427] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh-client-1: Connected to 10.70.36.31:24012, attached to remote volume '/disk2'. 
>[2012-09-27 13:00:50.483459] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 13:00:50.483614] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh-client-1: Server lk version = 1
>[2012-09-27 13:00:50.487883] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 13:00:50.488196] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-27 13:00:50.488226] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 13:00:50.488304] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-27 13:00:50.490208] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-27 13:00:50.491961] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 13:00:50.492256] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-27 13:00:50.492281] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 13:00:50.492846] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-27 13:00:50.498261] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 13:00:50.498653] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-27 13:00:50.498684] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 13:00:50.498758] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-27 13:00:50.499347] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-27 13:00:50.502481] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-3: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 13:00:50.502838] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Connected to 10.70.36.33:24011, attached to remote volume '/disk1'.
>[2012-09-27 13:00:50.502863] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-3: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 13:00:50.503436] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-3: Server lk version = 1
>[2012-09-27 13:13:38.085673] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 13:13:39.156365] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 13:13:39.198906] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-27 13:48:33.653569] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 13:48:34.685652] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 13:48:34.687905] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-27 16:03:14.525881] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24012)
>[2012-09-27 16:03:14.572424] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-0: disconnected
>[2012-09-27 16:03:16.902475] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24012)
>[2012-09-27 16:03:16.902563] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh-client-1: disconnected
>[2012-09-27 16:03:16.902586] E [afr-common.c:3668:afr_notify] 0-replicate-rhevh-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
>[2012-09-27 16:03:18.475764] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-27 16:03:19.482205] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-27 16:03:19.522254] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 16:03:19.522288] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 16:03:19.530871] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-27 16:03:19.544259] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-27 16:03:19.548299] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-27 16:03:19.552243] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
>  1: volume dist-rep-rhevh-client-0
>  2: type protocol/client
>  3: option remote-host rhs-client6.lab.eng.blr.redhat.com
>  4: option remote-subvolume /disk1
>  5: option transport-type tcp
>  6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
>  7: option password 1233f788-e862-447a-9353-3a50d84656ca
>  8: end-volume
>  9: 
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18: 
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27: 
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36: 
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option iam-self-heal-daemon yes
> 45: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 46: end-volume
> 47: 
> 48: volume dist-rep-rhevh-replicate-1
> 49: type cluster/replicate
> 50: option background-self-heal-count 0
> 51: option metadata-self-heal on
> 52: option data-self-heal on
> 53: option entry-self-heal on
> 54: option self-heal-daemon on
> 55: option iam-self-heal-daemon yes
> 56: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 57: end-volume
> 58: 
> 59: volume glustershd
> 60: type debug/io-stats
> 61: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 62: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-27 16:03:19.575470] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-27 16:03:19.575530] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-27 16:03:19.575596] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-27 16:03:23.026707] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-27 16:03:23.026767] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-27 16:03:23.497140] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:03:23.497428] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-27 16:03:23.497460] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:03:23.497547] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-27 16:03:23.498150] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-27 16:03:23.501534] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:03:23.501910] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-27 16:03:23.501945] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:03:23.502016] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-27 16:03:23.502181] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-27 16:03:23.506372] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:03:23.506760] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-27 16:03:23.506784] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:03:23.508496] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-27 16:09:18.236655] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.30:24011)
>[2012-09-27 16:09:18.236760] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected
>[2012-09-27 16:09:22.493088] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-27 16:09:22.510040] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-27 16:09:28.556001] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:09:28.556376] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-27 16:09:28.556401] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:09:28.556973] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-27 16:10:06.715153] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-27 16:10:07.721321] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-27 16:10:07.731795] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 16:10:07.731833] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 16:10:07.731849] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh2-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh2-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 16:10:07.731864] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh2-client-1: adding option 'node-uuid' for volume 'replicate-rhevh2-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 16:10:07.731878] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh2-client-0: adding option 'node-uuid' for volume 'replicate-rhevh2-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-27 16:10:07.744699] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh2-client-1: option 'node-uuid' is not recognized
>[2012-09-27 16:10:07.744783] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh2-client-0: option 'node-uuid' is not recognized
>[2012-09-27 16:10:07.744811] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-27 16:10:07.749018] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-27 16:10:07.752938] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-27 16:10:07.757495] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>[2012-09-27 16:10:07.761306] I [client.c:2142:notify] 0-replicate-rhevh2-client-0: parent translators are ready, attempting connect on transport
>[2012-09-27 16:10:07.765547] I [client.c:2142:notify] 0-replicate-rhevh2-client-1: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
>  1: volume replicate-rhevh2-client-0
>  2: type protocol/client
>  3: option remote-host rhs-client6.lab.eng.blr.redhat.com
>  4: option remote-subvolume /replicate-disk
>  5: option transport-type tcp
>  6: option username ed47646f-f966-4cda-9e0f-1417c35dd7c1
>  7: option password 3c031854-e508-409c-a58d-0fc991070ed0
>  8: end-volume
>  9: 
> 10: volume replicate-rhevh2-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /replicate-disk
> 14: option transport-type tcp
> 15: option username ed47646f-f966-4cda-9e0f-1417c35dd7c1
> 16: option password 3c031854-e508-409c-a58d-0fc991070ed0
> 17: end-volume
> 18: 
> 19: volume replicate-rhevh2-replicate-0
> 20: type cluster/replicate
> 21: option background-self-heal-count 0
> 22: option metadata-self-heal on
> 23: option data-self-heal on
> 24: option entry-self-heal on
> 25: option self-heal-daemon on
> 26: option iam-self-heal-daemon yes
> 27: subvolumes replicate-rhevh2-client-0 replicate-rhevh2-client-1
> 28: end-volume
> 29: 
> 30: volume dist-rep-rhevh-client-0
> 31: type protocol/client
> 32: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 33: option remote-subvolume /disk1
> 34: option transport-type tcp
> 35: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 36: option password 1233f788-e862-447a-9353-3a50d84656ca
> 37: end-volume
> 38: 
> 39: volume dist-rep-rhevh-client-1
> 40: type protocol/client
> 41: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 42: option remote-subvolume /disk1
> 43: option transport-type tcp
> 44: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 45: option password 1233f788-e862-447a-9353-3a50d84656ca
> 46: end-volume
> 47: 
> 48: volume dist-rep-rhevh-client-2
> 49: type protocol/client
> 50: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 51: option remote-subvolume /disk1
> 52: option transport-type tcp
> 53: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 54: option password 1233f788-e862-447a-9353-3a50d84656ca
> 55: end-volume
> 56: 
> 57: volume dist-rep-rhevh-client-3
> 58: type protocol/client
> 59: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 60: option remote-subvolume /disk1
> 61: option transport-type tcp
> 62: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 63: option password 1233f788-e862-447a-9353-3a50d84656ca
> 64: end-volume
> 65: 
> 66: volume dist-rep-rhevh-replicate-0
> 67: type cluster/replicate
> 68: option background-self-heal-count 0
> 69: option metadata-self-heal on
> 70: option data-self-heal on
> 71: option entry-self-heal on
> 72: option self-heal-daemon on
> 73: option iam-self-heal-daemon yes
> 74: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 75: end-volume
> 76: 
> 77: volume dist-rep-rhevh-replicate-1
> 78: type cluster/replicate
> 79: option background-self-heal-count 0
> 80: option metadata-self-heal on
> 81: option data-self-heal on
> 82: option entry-self-heal on
> 83: option self-heal-daemon on
> 84: option iam-self-heal-daemon yes
> 85: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 86: end-volume
> 87: 
> 88: volume glustershd
> 89: type debug/io-stats
> 90: subvolumes dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1 replicate-rhevh2-replicate-0
> 91: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-27 16:10:07.769920] E [client-handshake.c:1695:client_query_portmap_cbk] 0-replicate-rhevh2-client-1: failed to get the port number for remote subvolume
>[2012-09-27 16:10:07.769992] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-27 16:10:07.770075] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh2-client-1: disconnected
>[2012-09-27 16:10:07.770118] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-27 16:10:07.770166] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-27 16:10:07.770204] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh2-client-0: changing port to 24013 (from 0)
>[2012-09-27 16:10:11.111890] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-27 16:10:11.111941] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-27 16:10:11.734261] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh2-client-1: changing port to 24013 (from 0)
>[2012-09-27 16:10:11.738075] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:10:11.738362] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-27 16:10:11.738392] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:10:11.738463] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-1' came back up; going online.
>[2012-09-27 16:10:11.738572] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-27 16:10:11.742158] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:10:11.742522] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-27 16:10:11.742558] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:10:11.742629] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-27 16:10:11.743868] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-27 16:10:11.746605] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:10:11.746960] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-27 16:10:11.746994] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:10:11.748024] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-27 16:10:11.750789] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh2-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:10:11.751206] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh2-client-0: Connected to 10.70.36.30:24013, attached to remote volume '/replicate-disk'.
>[2012-09-27 16:10:11.751232] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh2-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:10:11.751281] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh2-replicate-0: Subvolume 'replicate-rhevh2-client-0' came back up; going online.
>[2012-09-27 16:10:11.752127] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh2-client-0: Server lk version = 1
>[2012-09-27 16:10:14.756936] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh2-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-27 16:10:14.760649] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh2-client-1: Connected to 10.70.36.31:24013, attached to remote volume '/replicate-disk'.
>[2012-09-27 16:10:14.760685] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh2-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-27 16:10:14.761787] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh2-client-1: Server lk version = 1
>[2012-09-27 16:40:08.971637] E [rpc-clnt.c:208:call_bail] 0-dist-rep-rhevh-client-3: bailing out frame type(GF-DUMP) op(DUMP(1)) xid = 0x1x sent = 2012-09-27 16:10:07.769614. timeout = 1800
>[2012-09-27 16:40:08.971703] W [client-handshake.c:1797:client_dump_version_cbk] 0-dist-rep-rhevh-client-3: received RPC status error
>[2012-09-28 08:46:15.236415] E [afr-self-heald.c:685:_link_inode_update_loc] 0-replicate-rhevh2-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
>[2012-09-28 08:54:40.781451] E [afr-self-heald.c:685:_link_inode_update_loc] 0-replicate-rhevh2-replicate-0: inode link failed on the inode (00000000-0000-0000-0000-000000000000)
>[2012-09-28 10:29:49.978898] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-0: readv failed (Connection timed out)
>[2012-09-28 10:29:49.978961] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-0: reading from socket failed. Error (Connection timed out), peer (10.70.36.30:24011)
>[2012-09-28 10:29:49.979002] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-0: disconnected
>[2012-09-28 10:29:59.189889] W [socket.c:195:__socket_rwv] 0-replicate-rhevh2-client-0: readv failed (Connection timed out)
>[2012-09-28 10:29:59.189937] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh2-client-0: reading from socket failed. Error (Connection timed out), peer (10.70.36.30:24013)
>[2012-09-28 10:29:59.189972] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh2-client-0: disconnected
>[2012-09-28 10:30:18.646562] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 10:30:21.639887] E [socket.c:1715:socket_connect_finish] 0-dist-rep-rhevh-client-0: connection to 10.70.36.30:24011 failed (Connection timed out)
>[2012-09-28 10:30:21.647349] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 10:30:30.644880] E [socket.c:1715:socket_connect_finish] 0-replicate-rhevh2-client-0: connection to 10.70.36.30:24013 failed (Connection timed out)
>[2012-09-28 10:40:19.227695] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 10:40:22.228534] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 10:50:19.908120] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 10:50:22.916916] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:00:20.601103] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:00:23.601959] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:10:21.205382] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:10:24.206174] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:20:21.807904] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:20:24.812862] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:30:22.542412] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:30:25.543230] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:40:23.141557] E [afr-self-heald.c:418:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:40:26.146612] E [afr-self-heald.c:418:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl as < 2 children are up
>[2012-09-28 11:41:35.927086] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-2: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.32:24011)
>[2012-09-28 11:41:35.927191] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-2: disconnected
>[2012-09-28 11:41:46.213023] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 11:41:46.213470] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-28 11:41:46.213509] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 11:41:46.213575] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-28 11:41:46.237473] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-28 11:41:49.590281] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 11:41:50.731705] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 11:41:50.769963] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 11:42:03.208750] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 11:42:03.218119] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 11:42:21.827604] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 11:42:21.836592] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 11:42:21.837269] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 11:42:21.837812] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 11:43:00.729558] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 11:43:00.758138] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 11:43:00.758752] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 11:43:00.759543] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 11:49:35.820828] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh2-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 11:49:35.821236] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh2-client-0: Connected to 10.70.36.30:24013, attached to remote volume '/replicate-disk'.
>[2012-09-28 11:49:35.821292] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh2-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 11:49:35.822239] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh2-client-0: Server lk version = 1
>[2012-09-28 11:49:35.825024] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 11:49:35.825383] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-28 11:49:35.825411] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 11:49:35.826306] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-28 11:50:26.847498] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh2-client-1: remote operation failed: No such file or directory
>[2012-09-28 11:50:26.848567] W [client3_1-fops.c:473:client3_1_open_cbk] 0-replicate-rhevh2-client-1: remote operation failed: No such file or directory. Path: <gfid:380671bc-5475-447c-b7b9-1c865b2e2dbf> (00000000-0000-0000-0000-000000000000)
>[2012-09-28 11:50:26.848604] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] 0-replicate-rhevh2-replicate-0: open of <gfid:380671bc-5475-447c-b7b9-1c865b2e2dbf> failed on child replicate-rhevh2-client-1 (No such file or directory)
>[2012-09-28 11:50:26.849396] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-replicate-rhevh2-client-1: remote operation failed: No such file or directory. Path: <gfid:380671bc-5475-447c-b7b9-1c865b2e2dbf> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-28 12:00:19.532100] W [socket.c:1512:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (::1:24007)
>[2012-09-28 12:00:20.716146] W [socket.c:1512:__socket_proto_state_machine] 0-replicate-rhevh2-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24013)
>[2012-09-28 12:00:20.716282] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh2-client-1: disconnected
>[2012-09-28 12:00:20.740094] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-1: reading from socket failed. Error (Transport endpoint is not connected), peer (10.70.36.31:24011)
>[2012-09-28 12:00:20.740177] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected
>[2012-09-28 12:00:20.766708] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh2-client-1: remote operation failed: Transport endpoint is not connected
>[2012-09-28 12:00:20.784919] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh2-client-1: remote operation failed: Transport endpoint is not connected
>[2012-09-28 12:00:20.784990] W [client3_1-fops.c:1550:client3_1_inodelk_cbk] 0-replicate-rhevh2-client-1: remote operation failed: Transport endpoint is not connected
>[2012-09-28 12:00:20.785018] W [client3_1-fops.c:3922:client3_1_readv] 0-replicate-rhevh2-client-1: (00000000-0000-0000-0000-000000000000) remote_fd is -1. EBADFD
>[2012-09-28 12:00:20.785033] E [afr-self-heal-algorithm.c:512:sh_loop_read_cbk] 0-replicate-rhevh2-replicate-0: read failed on 1 for <gfid:2844ea62-845e-42ce-b33f-dcb93edb4700> reason :Transport endpoint is not connected
>[2012-09-28 12:00:20.785093] W [client3_1-fops.c:4078:client3_1_flush] 0-replicate-rhevh2-client-1: (00000000-0000-0000-0000-000000000000) remote_fd is -1. EBADFD
>[2012-09-28 12:00:20.785112] E [afr-self-heal-data.c:97:afr_sh_data_flush_cbk] 0-replicate-rhevh2-replicate-0: flush failed on <gfid:2844ea62-845e-42ce-b33f-dcb93edb4700> on subvolume replicate-rhevh2-client-1: File descriptor in bad state
>[2012-09-28 12:00:20.785415] W [client3_1-fops.c:1114:client3_1_getxattr_cbk] 0-replicate-rhevh2-client-1: remote operation failed: Transport endpoint is not connected. Path: <gfid:2844ea62-845e-42ce-b33f-dcb93edb4700> (00000000-0000-0000-0000-000000000000). Key: glusterfs.gfid2path
>[2012-09-28 12:00:20.785464] E [afr-self-heald.c:409:_crawl_proceed] 0-replicate-rhevh2-replicate-0: Stopping crawl for replicate-rhevh2-client-1 , subvol went down
>[2012-09-28 12:00:20.785498] W [client3_1-fops.c:2286:client3_1_readdir_cbk] 0-replicate-rhevh2-client-1: remote operation failed: Transport endpoint is not connected remote_fd = -2
>[2012-09-28 12:00:23.874608] E [afr-self-heald.c:409:_crawl_proceed] 0-dist-rep-rhevh-replicate-0: Stopping crawl for dist-rep-rhevh-client-1 , subvol went down
>[2012-09-28 12:00:24.284206] W [glusterfsd.c:906:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x322d2e5ccd] (-->/lib64/libpthread.so.0() [0x322da077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405d2d]))) 0-: received signum (15), shutting down
>[2012-09-28 12:00:25.286496] I [glusterfsd.c:1741:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.0rhsvirt1
>[2012-09-28 12:00:25.336146] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh2-replicate-0: adding option 'node-uuid' for volume 'replicate-rhevh2-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-28 12:00:25.336183] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh2-client-1: adding option 'node-uuid' for volume 'replicate-rhevh2-client-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-28 12:00:25.336199] I [graph.c:241:gf_add_cmdline_options] 0-replicate-rhevh2-client-0: adding option 'node-uuid' for volume 'replicate-rhevh2-client-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-28 12:00:25.336213] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-1: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-1' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-28 12:00:25.336227] I [graph.c:241:gf_add_cmdline_options] 0-dist-rep-rhevh-replicate-0: adding option 'node-uuid' for volume 'dist-rep-rhevh-replicate-0' with value 'b9d6cb21-051f-4791-9476-734856e77fbf'
>[2012-09-28 12:00:25.351238] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh2-client-1: option 'node-uuid' is not recognized
>[2012-09-28 12:00:25.351362] W [graph.c:316:_log_if_unknown_option] 0-replicate-rhevh2-client-0: option 'node-uuid' is not recognized
>[2012-09-28 12:00:25.351433] I [client.c:2142:notify] 0-replicate-rhevh2-client-0: parent translators are ready, attempting connect on transport
>[2012-09-28 12:00:25.356820] I [client.c:2142:notify] 0-replicate-rhevh2-client-1: parent translators are ready, attempting connect on transport
>[2012-09-28 12:00:25.360867] I [client.c:2142:notify] 0-dist-rep-rhevh-client-0: parent translators are ready, attempting connect on transport
>[2012-09-28 12:00:25.364882] I [client.c:2142:notify] 0-dist-rep-rhevh-client-1: parent translators are ready, attempting connect on transport
>[2012-09-28 12:00:25.368890] I [client.c:2142:notify] 0-dist-rep-rhevh-client-2: parent translators are ready, attempting connect on transport
>[2012-09-28 12:00:25.372770] I [client.c:2142:notify] 0-dist-rep-rhevh-client-3: parent translators are ready, attempting connect on transport
>Given volfile:
>+------------------------------------------------------------------------------+
> 1: volume dist-rep-rhevh-client-0
> 2: type protocol/client
> 3: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 4: option remote-subvolume /disk1
> 5: option transport-type tcp
> 6: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 7: option password 1233f788-e862-447a-9353-3a50d84656ca
> 8: end-volume
> 9:
> 10: volume dist-rep-rhevh-client-1
> 11: type protocol/client
> 12: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 13: option remote-subvolume /disk1
> 14: option transport-type tcp
> 15: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 16: option password 1233f788-e862-447a-9353-3a50d84656ca
> 17: end-volume
> 18:
> 19: volume dist-rep-rhevh-client-2
> 20: type protocol/client
> 21: option remote-host rhs-client8.lab.eng.blr.redhat.com
> 22: option remote-subvolume /disk1
> 23: option transport-type tcp
> 24: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 25: option password 1233f788-e862-447a-9353-3a50d84656ca
> 26: end-volume
> 27:
> 28: volume dist-rep-rhevh-client-3
> 29: type protocol/client
> 30: option remote-host rhs-client9.lab.eng.blr.redhat.com
> 31: option remote-subvolume /disk1
> 32: option transport-type tcp
> 33: option username a6c017a0-c3d4-4411-aee5-fc1e0c88e5a8
> 34: option password 1233f788-e862-447a-9353-3a50d84656ca
> 35: end-volume
> 36:
> 37: volume dist-rep-rhevh-replicate-0
> 38: type cluster/replicate
> 39: option background-self-heal-count 0
> 40: option metadata-self-heal on
> 41: option data-self-heal on
> 42: option entry-self-heal on
> 43: option self-heal-daemon on
> 44: option eager-lock enable
> 45: option iam-self-heal-daemon yes
> 46: subvolumes dist-rep-rhevh-client-0 dist-rep-rhevh-client-1
> 47: end-volume
> 48:
> 49: volume dist-rep-rhevh-replicate-1
> 50: type cluster/replicate
> 51: option background-self-heal-count 0
> 52: option metadata-self-heal on
> 53: option data-self-heal on
> 54: option entry-self-heal on
> 55: option self-heal-daemon on
> 56: option eager-lock enable
> 57: option iam-self-heal-daemon yes
> 58: subvolumes dist-rep-rhevh-client-2 dist-rep-rhevh-client-3
> 59: end-volume
> 60:
> 61: volume replicate-rhevh2-client-0
> 62: type protocol/client
> 63: option remote-host rhs-client6.lab.eng.blr.redhat.com
> 64: option remote-subvolume /replicate-disk
> 65: option transport-type tcp
> 66: option username ed47646f-f966-4cda-9e0f-1417c35dd7c1
> 67: option password 3c031854-e508-409c-a58d-0fc991070ed0
> 68: end-volume
> 69:
> 70: volume replicate-rhevh2-client-1
> 71: type protocol/client
> 72: option remote-host rhs-client7.lab.eng.blr.redhat.com
> 73: option remote-subvolume /replicate-disk
> 74: option transport-type tcp
> 75: option username ed47646f-f966-4cda-9e0f-1417c35dd7c1
> 76: option password 3c031854-e508-409c-a58d-0fc991070ed0
> 77: end-volume
> 78:
> 79: volume replicate-rhevh2-replicate-0
> 80: type cluster/replicate
> 81: option background-self-heal-count 0
> 82: option metadata-self-heal on
> 83: option data-self-heal on
> 84: option entry-self-heal on
> 85: option self-heal-daemon on
> 86: option iam-self-heal-daemon yes
> 87: subvolumes replicate-rhevh2-client-0 replicate-rhevh2-client-1
> 88: end-volume
> 89:
> 90: volume glustershd
> 91: type debug/io-stats
> 92: subvolumes replicate-rhevh2-replicate-0 dist-rep-rhevh-replicate-0 dist-rep-rhevh-replicate-1
> 93: end-volume
>
>+------------------------------------------------------------------------------+
>[2012-09-28 12:00:25.377230] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh2-client-0: changing port to 24013 (from 0)
>[2012-09-28 12:00:25.377283] E [client-handshake.c:1695:client_query_portmap_cbk] 0-replicate-rhevh2-client-1: failed to get the port number for remote subvolume
>[2012-09-28 12:00:25.377356] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-0: changing port to 24011 (from 0)
>[2012-09-28 12:00:25.377396] E [client-handshake.c:1695:client_query_portmap_cbk] 0-dist-rep-rhevh-client-1: failed to get the port number for remote subvolume
>[2012-09-28 12:00:25.377504] I [client.c:2090:client_rpc_notify] 0-replicate-rhevh2-client-1: disconnected
>[2012-09-28 12:00:25.377542] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-1: disconnected
>[2012-09-28 12:00:25.377706] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-2: changing port to 24011 (from 0)
>[2012-09-28 12:00:28.902321] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
>[2012-09-28 12:00:28.902382] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>[2012-09-28 12:00:29.300228] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh2-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 12:00:29.300657] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh2-client-0: Connected to 10.70.36.30:24013, attached to remote volume '/replicate-disk'.
>[2012-09-28 12:00:29.300692] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh2-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 12:00:29.300768] I [afr-common.c:3631:afr_notify] 0-replicate-rhevh2-replicate-0: Subvolume 'replicate-rhevh2-client-0' came back up; going online.
>[2012-09-28 12:00:29.300892] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh2-client-0: Server lk version = 1
>[2012-09-28 12:00:29.304303] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-replicate-rhevh2-client-1: changing port to 24013 (from 0)
>[2012-09-28 12:00:29.308286] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-0: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 12:00:29.308613] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Connected to 10.70.36.30:24011, attached to remote volume '/disk1'.
>[2012-09-28 12:00:29.308637] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-0: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 12:00:29.308684] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-0: Subvolume 'dist-rep-rhevh-client-0' came back up; going online.
>[2012-09-28 12:00:29.308863] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-0: Server lk version = 1
>[2012-09-28 12:00:29.312434] I [rpc-clnt.c:1659:rpc_clnt_reconfig] 0-dist-rep-rhevh-client-1: changing port to 24011 (from 0)
>[2012-09-28 12:00:29.316710] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-2: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 12:00:29.317082] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Connected to 10.70.36.32:24011, attached to remote volume '/disk1'.
>[2012-09-28 12:00:29.317108] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-2: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 12:00:29.317157] I [afr-common.c:3631:afr_notify] 0-dist-rep-rhevh-replicate-1: Subvolume 'dist-rep-rhevh-client-2' came back up; going online.
>[2012-09-28 12:00:29.318766] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-2: Server lk version = 1
>[2012-09-28 12:00:32.321684] I [client-handshake.c:1614:select_server_supported_programs] 0-replicate-rhevh2-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 12:00:32.322007] I [client-handshake.c:1411:client_setvolume_cbk] 0-replicate-rhevh2-client-1: Connected to 10.70.36.31:24013, attached to remote volume '/replicate-disk'.
>[2012-09-28 12:00:32.322034] I [client-handshake.c:1423:client_setvolume_cbk] 0-replicate-rhevh2-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 12:00:32.322668] I [client-handshake.c:453:client_set_lk_version_cbk] 0-replicate-rhevh2-client-1: Server lk version = 1
>[2012-09-28 12:00:32.326463] I [client-handshake.c:1614:select_server_supported_programs] 0-dist-rep-rhevh-client-1: Using Program GlusterFS 3.3.0rhsvirt1, Num (1298437), Version (330)
>[2012-09-28 12:00:32.326726] I [client-handshake.c:1411:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Connected to 10.70.36.31:24011, attached to remote volume '/disk1'.
>[2012-09-28 12:00:32.326752] I [client-handshake.c:1423:client_setvolume_cbk] 0-dist-rep-rhevh-client-1: Server and Client lk-version numbers are not same, reopening the fds
>[2012-09-28 12:00:32.327385] I [client-handshake.c:453:client_set_lk_version_cbk] 0-dist-rep-rhevh-client-1: Server lk version = 1
>[2012-09-28 12:00:32.327877] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-replicate-rhevh2-replicate-0: Unable to self-heal contents of '<gfid:c2e13dc5-572d-4ee1-8bb9-ce0e32177a74>' (possible split-brain). Please delete the file from all but the preferred subvolume.
>[2012-09-28 12:00:32.330369] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-replicate-rhevh2-replicate-0: Unable to self-heal contents of '<gfid:2844ea62-845e-42ce-b33f-dcb93edb4700>' (possible split-brain). Please delete the file from all but the preferred subvolume.
>[2012-09-28 12:03:58.161106] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-3: readv failed (Connection reset by peer)
>[2012-09-28 12:03:58.161155] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Connection reset by peer), peer (10.70.36.33:24007)
>[2012-09-28 12:03:58.161343] E [rpc-clnt.c:373:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x78) [0x3a9f60f818] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb0) [0x3a9f60f4d0] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x3a9f60ef3e]))) 0-dist-rep-rhevh-client-3: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2012-09-28 12:00:25.377204 (xid=0x1x)
>[2012-09-28 12:03:58.161372] W [client-handshake.c:1797:client_dump_version_cbk] 0-dist-rep-rhevh-client-3: received RPC status error
>[2012-09-28 12:03:58.161390] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
>[2012-09-28 12:06:29.680706] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 12:06:30.740646] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 12:06:30.743130] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 12:06:35.824330] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 12:06:35.832224] I [glusterfsd-mgmt.c:64:mgmt_cbk_spec] 0-mgmt: Volume file changed
>[2012-09-28 12:06:35.832541] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 12:06:35.833145] I [glusterfsd-mgmt.c:1568:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
>[2012-09-28 12:07:32.138115] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-3: readv failed (Connection reset by peer)
>[2012-09-28 12:07:32.138161] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Connection reset by peer), peer (10.70.36.33:24007)
>[2012-09-28 12:07:32.138233] E [rpc-clnt.c:373:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x78) [0x3a9f60f818] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb0) [0x3a9f60f4d0] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x3a9f60ef3e]))) 0-dist-rep-rhevh-client-3: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2012-09-28 12:03:59.354031 (xid=0x2x)
>[2012-09-28 12:07:32.138254] W [client-handshake.c:1797:client_dump_version_cbk] 0-dist-rep-rhevh-client-3: received RPC status error
>[2012-09-28 12:07:32.138271] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
>[2012-09-28 12:10:32.408142] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-replicate-rhevh2-replicate-0: Unable to self-heal contents of '<gfid:5d255c3d-7d8a-4458-98e4-0762e05bf0a8>' (possible split-brain). Please delete the file from all but the preferred subvolume.
>[2012-09-28 12:10:32.410908] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-replicate-rhevh2-replicate-0: Unable to self-heal contents of '<gfid:c2e13dc5-572d-4ee1-8bb9-ce0e32177a74>' (possible split-brain). Please delete the file from all but the preferred subvolume.
>[2012-09-28 12:10:32.413319] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-replicate-rhevh2-replicate-0: Unable to self-heal contents of '<gfid:2844ea62-845e-42ce-b33f-dcb93edb4700>' (possible split-brain). Please delete the file from all but the preferred subvolume.
>[2012-09-28 12:11:05.166085] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-3: readv failed (Connection reset by peer)
>[2012-09-28 12:11:05.166219] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Connection reset by peer), peer (10.70.36.33:24007)
>[2012-09-28 12:11:05.166294] E [rpc-clnt.c:373:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x78) [0x3a9f60f818] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb0) [0x3a9f60f4d0] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x3a9f60ef3e]))) 0-dist-rep-rhevh-client-3: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2012-09-28 12:07:32.382763 (xid=0x3x)
>[2012-09-28 12:11:05.166316] W [client-handshake.c:1797:client_dump_version_cbk] 0-dist-rep-rhevh-client-3: received RPC status error
>[2012-09-28 12:11:05.166333] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
>[2012-09-28 12:14:31.351112] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-3: readv failed (Connection reset by peer)
>[2012-09-28 12:14:31.351161] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Connection reset by peer), peer (10.70.36.33:24007)
>[2012-09-28 12:14:31.351238] E [rpc-clnt.c:373:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x78) [0x3a9f60f818] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb0) [0x3a9f60f4d0] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x3a9f60ef3e]))) 0-dist-rep-rhevh-client-3: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2012-09-28 12:11:05.728185 (xid=0x4x)
>[2012-09-28 12:14:31.351266] W [client-handshake.c:1797:client_dump_version_cbk] 0-dist-rep-rhevh-client-3: received RPC status error
>[2012-09-28 12:14:31.351284] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected
>[2012-09-28 12:17:58.378153] W [socket.c:195:__socket_rwv] 0-dist-rep-rhevh-client-3: readv failed (Connection reset by peer)
>[2012-09-28 12:17:58.378204] W [socket.c:1512:__socket_proto_state_machine] 0-dist-rep-rhevh-client-3: reading from socket failed. Error (Connection reset by peer), peer (10.70.36.33:24007)
>[2012-09-28 12:17:58.378281] E [rpc-clnt.c:373:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x78) [0x3a9f60f818] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb0) [0x3a9f60f4d0] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x3a9f60ef3e]))) 0-dist-rep-rhevh-client-3: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2012-09-28 12:14:32.755397 (xid=0x5x)
>[2012-09-28 12:17:58.378326] W [client-handshake.c:1797:client_dump_version_cbk] 0-dist-rep-rhevh-client-3: received RPC status error
>[2012-09-28 12:17:58.378353] I [client.c:2090:client_rpc_notify] 0-dist-rep-rhevh-client-3: disconnected