Description of problem:
When we do a snapshot create, we try to fetch the brick details and set the file system type for the snapshot bricks. glusterd can get the file system details of the local node, but it fails to get the file system type of bricks hosted on other nodes. Because of this an error is reported in the log.

Version-Release number of selected component (if applicable):

How reproducible:
1/1

Steps to Reproduce:
1. Create a distributed volume.
2. Try to take a snapshot.
3. Check the log.

Actual results:
glusterd fails to update the file system type of bricks hosted on other nodes and logs an error.

Expected results:
The check and update should only be made on the local node.

Additional info:
-----------------------------------------------------------------------------
[root@snapshot-24 glusterfs]# gluster v i

Volume Name: vol1
Type: Distribute
Volume ID: 805ad7b5-45a3-4cd5-8e5b-dea8b08fc4b8
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.43.192:/brick0/b0
Brick2: 10.70.43.75:/brick4/b4
Options Reconfigured:
features.barrier: disable
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

[root@snapshot-24 glusterfs]# gluster snapshot create snap1 vol1

Log on Node 10.70.43.192
------------------------
[2014-06-19 00:05:43.169712] E [glusterd-snapshot.c:3800:glusterd_update_fstype] 0-management: getting the root of the brick (/brick4/b4) failed
[2014-06-19 00:05:43.169780] E [glusterd-snapshot.c:3876:glusterd_add_brick_to_snap_volume] 0-management: Failed to update file-system type for brick
[2014-06-19 00:05:44.594857] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-06-19 00:05:44.615626] I [socket.c:2246:socket_event_handler] 0-transport: disconnecting now
[2014-06-19 00:05:44.616013] I [MSGID: 106005] [glusterd-handler.c:4150:__glusterd_brick_rpc_notify] 0-management: Brick 10.70.43.192:/var/run/gluster/snaps/3b026a06950e4a69881d9cb8e6ded4a9/brick1/b0 has disconnected from glusterd.
[2014-06-19 00:05:44.627136] W [glusterd-utils.c:1558:glusterd_snap_volinfo_find] 0-management: Snap volume 3b026a06950e4a69881d9cb8e6ded4a9.10.70.43.192.var-run-gluster-snaps-3b026a06950e4a69881d9cb8e6ded4a9-brick1-b0 not found
[2014-06-19 00:05:44.644145] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /var/run/gluster/snaps/3b026a06950e4a69881d9cb8e6ded4a9/brick1/b0 on port 49154

Log on Node 10.70.43.75:
------------------------
[2014-06-19 00:05:39.227389] E [glusterd-snapshot.c:3800:glusterd_update_fstype] 0-management: getting the root of the brick (/brick0/b0) failed
[2014-06-19 00:05:39.227450] E [glusterd-snapshot.c:3876:glusterd_add_brick_to_snap_volume] 0-management: Failed to update file-system type for brick
[2014-06-19 00:05:40.581735] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-06-19 00:05:40.603641] I [socket.c:2246:socket_event_handler] 0-transport: disconnecting now
[2014-06-19 00:05:40.606169] I [MSGID: 106005] [glusterd-handler.c:4150:__glusterd_brick_rpc_notify] 0-management: Brick 10.70.43.75:/var/run/gluster/snaps/305b08000bb740939d057f3d140583f1/brick2/b4 has disconnected from glusterd.
[2014-06-19 00:05:40.606529] W [glusterd-utils.c:1558:glusterd_snap_volinfo_find] 0-management: Snap volume 305b08000bb740939d057f3d140583f1.10.70.43.75.var-run-gluster-snaps-305b08000bb740939d057f3d140583f1-brick2-b4 not found
[2014-06-19 00:05:40.623401] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /var/run/gluster/snaps/305b08000bb740939d057f3d140583f1/brick2/b4 on port 49153
----------------------------------------------------------------------
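For illustration only (this is not the glusterd source): a minimal C sketch of why the fstype lookup can only succeed on the node that actually hosts the brick. The filesystem type has to be derived from the local mount table, so a brick path that exists only on a peer (e.g. /brick4/b4 as seen from 10.70.43.192) cannot be resolved locally, matching the "getting the root of the brick ... failed" messages above. The helper name fstype_of_local_path is hypothetical.

/*
 * Minimal sketch, NOT the glusterd code: derive the filesystem type of
 * the mount backing `path` from the local mount table. For a brick
 * path that only exists on another node, stat() fails right away, so
 * the whole lookup fails on this node.
 *
 * fstype_of_local_path() is a hypothetical helper used only for this
 * example.
 */
#include <mntent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static int fstype_of_local_path(const char *path, char *fstype, size_t len)
{
        struct stat st;
        struct mntent *entry = NULL;
        FILE *mtab = NULL;
        size_t best = 0;
        int ret = -1;

        /* A remote brick path does not exist locally: fail here. */
        if (stat(path, &st) != 0)
                return -1;

        mtab = setmntent("/proc/mounts", "r");
        if (!mtab)
                return -1;

        /* Simplified longest-prefix match against local mount points. */
        while ((entry = getmntent(mtab)) != NULL) {
                size_t l = strlen(entry->mnt_dir);

                if (strncmp(path, entry->mnt_dir, l) == 0 && l > best) {
                        best = l;
                        strncpy(fstype, entry->mnt_type, len - 1);
                        fstype[len - 1] = '\0';
                        ret = 0;
                }
        }

        endmntent(mtab);
        return ret;
}

int main(void)
{
        char fstype[64];

        /* /brick4/b4 only exists on the peer, so this fails locally. */
        if (fstype_of_local_path("/brick4/b4", fstype, sizeof(fstype)) == 0)
                printf("fstype: %s\n", fstype);
        else
                fprintf(stderr, "getting the root of the brick failed\n");

        return 0;
}

This is why each node logs the error only for the brick it does not host, while its own brick is handled fine.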
REVIEW: http://review.gluster.org/8272 (glusterd/snapshot: Update fstype for local bricks only) posted (#1) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/8272 (glusterd/snapshot: Update fstype for local bricks only) posted (#2) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/8272 (glusterd/snapshot: Update fstype for local bricks only) posted (#3) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/8272 (glusterd/snapshot: Update fstype for local bricks only) posted (#4) for review on master by Krishnan Parthasarathi (kparthas)
COMMIT: http://review.gluster.org/8272 committed in master by Krishnan Parthasarathi (kparthas)
------
commit 23455c034a95df2be900f0f83515f2a22c5dea8e
Author: Avra Sengupta <asengupt>
Date: Wed Jul 9 09:40:42 2014 +0000

    glusterd/snapshot: Update fstype for local bricks only

    While creating snapshot, update fstype for local bricks only
    and not for bricks hosted on other nodes.

    Also returning ret as 0, in case no cleanup is required in
    post-validation, so that a post-validation failure is not logged
    every time a pre-validation failure happens.

    Change-Id: I6364e33cfd9528e0a988ee48f3443239ee884336
    BUG: 1111060
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/8272
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
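For illustration only: a hedged C sketch of the approach described in the commit message, i.e. guard the fstype update with a check that the brick is hosted on the local node. The struct and function names below are made up for the example and are not the glusterd brickinfo API.

/*
 * Illustration only: a sketch of "update fstype for local bricks only".
 * The structure and function names are hypothetical; the point is the
 * guard that skips bricks hosted on other nodes.
 * Build with: gcc sketch.c -luuid
 */
#include <stdio.h>
#include <uuid/uuid.h>

struct brick {
        uuid_t node_uuid;       /* UUID of the node hosting this brick */
        const char *path;       /* brick path on that node */
};

/* Hypothetical stand-in for the real fstype update of a local brick. */
static int update_fstype(const struct brick *brick)
{
        printf("updating fstype for local brick %s\n", brick->path);
        return 0;
}

/* Update fstype for local bricks only; remote bricks are skipped. */
static int maybe_update_fstype(const struct brick *brick, const uuid_t my_uuid)
{
        if (uuid_compare(brick->node_uuid, my_uuid) != 0)
                return 0;       /* brick lives on another node: nothing to do */

        return update_fstype(brick);
}

int main(void)
{
        uuid_t my_uuid;
        struct brick local = { .path = "/brick0/b0" };
        struct brick remote = { .path = "/brick4/b4" };

        uuid_generate(my_uuid);
        uuid_copy(local.node_uuid, my_uuid);    /* hosted on this node */
        uuid_generate(remote.node_uuid);        /* hosted on some other node */

        maybe_update_fstype(&local, my_uuid);   /* updates fstype */
        maybe_update_fstype(&remote, my_uuid);  /* silently skipped */
        return 0;
}

With such a guard in place, each node only touches the bricks it hosts, so the spurious errors shown in the bug description no longer appear in the peers' logs.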
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users