Bug 816941 - index xlator should be loaded on brick irrespective of volume type
Summary: index xlator should be loaded on brick irrespective of volume type
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-04-27 10:43 UTC by Shwetha Panduranga
Modified: 2015-12-01 16:45 UTC
CC: 2 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:17:52 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Shwetha Panduranga 2012-04-27 10:43:36 UTC
Description of problem:
---------------------------
Add-brick does not add the index xlator to the new graph of the existing bricks when the volume type is changed from distribute to replicate or distribute-replicate.

Version-Release number of selected component (if applicable):
mainline

How reproducible:
often

create_dirs.sh:
-----------------
#!/bin/bash

mountpoint=$(pwd)
main_dir="$mountpoint/deep_dirs"

mkdir "$main_dir"
cd "$main_dir"

for i in {1..50}; do
    level1_dir="$main_dir/l1_dir.$i"
    echo "#########################################################################################"
    echo "creating directory: $level1_dir"
    mkdir "$level1_dir"
    cd "$level1_dir"
    for j in {1..25}; do
        level2_dir="$level1_dir/l2.dir.$j"
        echo "---------------------------------------------------------------------------------"
        echo "creating directory: $level2_dir"
        mkdir "$level2_dir"
        cd "$level2_dir"
        for k in {1..10}; do
            echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
            echo "creating file: file.$k"
            # write $k MB of zeroes into file.$k
            dd if=/dev/zero of="file.$k" bs=1M count="$k"
        done
        echo "---------------------------------------------------------------------------------"
        cd "$level1_dir"
    done
    echo "#########################################################################################"
    cd "$main_dir"
done
cd "$mountpoint"

Steps to Reproduce:
----------------------
1. Create a distribute volume with 1 brick (brick1).
2. Create a fuse mount and execute "create_dirs.sh" on the fuse mount.
3. Execute "gluster volume add-brick <volume_name> replica 2 <new_brick>" (sketched as concrete commands below).
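
As a concrete sketch of the steps above (the volume name dstore and the brick path /export1/dstore1 match the volfiles quoted below; the host names server1/server2 and the mount point are made up):

gluster volume create dstore server1:/export1/dstore1
gluster volume start dstore

mount -t glusterfs server1:/dstore /mnt/dstore
# copy the script to the mount point and run it from there
cp create_dirs.sh /mnt/dstore/ && (cd /mnt/dstore && ./create_dirs.sh)

# converts the 1-brick distribute volume into a 1x2 replicate volume
gluster volume add-brick dstore replica 2 server2:/export1/dstore1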

Additional Info:-
------------------
1) The brick1 log doesn't show the index translator in its new graph, but the volume file on disk does contain the new graph. A quick way to see the mismatch is sketched below.
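
One way to confirm the mismatch (the volfile directory /etc/glusterd/vols/dstore matches the marker timestamp-file option in the logs below; the exact volfile and brick-log file names follow the usual naming conventions and are assumed here):

# the regenerated volfile on disk mentions the index xlator...
grep -c 'features/index' /etc/glusterd/vols/dstore/dstore.server1.export1-dstore1.vol

# ...but the graph echoed into the running brick1 log at graph-switch time does not
grep -c 'features/index' /var/log/glusterfs/bricks/export1-dstore1.log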

brick1 log when the new brick is added:-
---------------------------------------
  1: volume dstore-posix
  2:     type storage/posix
  3:     option directory /export1/dstore1
  4:     option volume-id 13b47c0b-63fe-4c70-bd53-a00f0bb692b8
  5: end-volume
  6: 
  7: volume dstore-access-control
  8:     type features/access-control
  9:     subvolumes dstore-posix
 10: end-volume
 11: 
 12: volume dstore-locks
 13:     type features/locks
 14:     subvolumes dstore-access-control
 15: end-volume
 16: 
 17: volume dstore-io-threads
 18:     type performance/io-threads
 19:     subvolumes dstore-locks
 20: end-volume
 21: 
 22: volume dstore-marker
 23:     type features/marker
 24:     option volume-uuid 13b47c0b-63fe-4c70-bd53-a00f0bb692b8
 25:     option timestamp-file /etc/glusterd/vols/dstore/marker.tstamp
 26:     option xtime off
 27:     option quota off
 28:     subvolumes dstore-io-threads
 29: end-volume
 30: 
 31: volume /export1/dstore1
 32:     type debug/io-stats
 33:     option latency-measurement off
 34:     option count-fop-hits off
 35:     subvolumes dstore-marker
 36: end-volume
 37: 
 38: volume dstore-server
 39:     type protocol/server
 40:     option transport-type tcp
 41:     option auth.login./export1/dstore1.allow b2a0c7a8-ba61-49e0-a43c-b32f149f1076
 42:     option auth.login.b2a0c7a8-ba61-49e0-a43c-b32f149f1076.password c47590ba-d287-46f5-a272-83ea8a4ac4ce
 43:     option auth.addr./export1/dstore1.allow *
 44:     subvolumes /export1/dstore1
 45: end-volume

brick log of the newly added brick:-
----------------------------------
  1: volume dstore-posix
  2:     type storage/posix
  3:     option directory /export1/dstore1
  4:     option volume-id 13b47c0b-63fe-4c70-bd53-a00f0bb692b8
  5: end-volume
  6: 
  7: volume dstore-access-control
  8:     type features/access-control
  9:     subvolumes dstore-posix
 10: end-volume
 11: 
 12: volume dstore-locks
 13:     type features/locks
 14:     subvolumes dstore-access-control
 15: end-volume
 16: 
 17: volume dstore-io-threads
 18:     type performance/io-threads
 19:     subvolumes dstore-locks
 20: end-volume
 21: 
 22: volume dstore-index
 23:     type features/index
 24:     option index-base /export1/dstore1/.glusterfs/indices
 25:     subvolumes dstore-io-threads
 26: end-volume
 27: 
 28: volume dstore-marker
 29:     type features/marker
 30:     option volume-uuid 13b47c0b-63fe-4c70-bd53-a00f0bb692b8
 31:     option timestamp-file /etc/glusterd/vols/dstore/marker.tstamp
 32:     option xtime off
 33:     option quota off
 34:     subvolumes dstore-index
 35: end-volume
 36: 
 37: volume /export1/dstore1
 38:     type debug/io-stats
 39:     option latency-measurement off
 40:     option count-fop-hits off
 41:     subvolumes dstore-marker
 42: end-volume
 43: 
 44: volume dstore-server
 45:     type protocol/server
 46:     option transport-type tcp
 47:     option auth.login./export1/dstore1.allow b2a0c7a8-ba61-49e0-a43c-b32f149f1076
 48:     option auth.login.b2a0c7a8-ba61-49e0-a43c-b32f149f1076.password c47590ba-d287-46f5-a272-83ea8a4ac4ce
 49:     option auth.addr./export1/dstore1.allow *
 50:     subvolumes /export1/dstore1
 51: end-volume

+------------------------------------------------------------------------------+

Comment 1 Anand Avati 2012-05-08 11:09:38 UTC
CHANGE: http://review.gluster.com/3239 (mgmt/gluster: Load index xlator on brick always) merged in master by Vijay Bellur (vijay)

Comment 2 Shwetha Panduranga 2012-05-12 06:16:15 UTC
Bug is fixed. Verified on 3.3.0qa41.
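
With the fix, the index xlator should be loaded on bricks regardless of volume type, so a spot-check could look like the following (hypothetical volume, host, and brick names; on newer builds the volfiles live under /var/lib/glusterd/vols rather than /etc/glusterd/vols):

# even a plain 1-brick distribute volume's brick volfile now carries features/index
gluster volume create distonly server1:/export1/distonly
gluster volume start distonly
grep -A2 'features/index' /etc/glusterd/vols/distonly/distonly.server1.export1-distonly.vol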

