Bug 1701039

Summary: gluster replica 3 arbiter: data not distributed equally across bricks
Product: [Community] GlusterFS
Reporter: Eng Khalid Jamal <engkhalid21986>
Component: distribute
Assignee: Susant Kumar Palai <spalai>
Status: CLOSED NOTABUG
QA Contact:
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 6
CC: bugs, ksubrahm, rhs-bugs, sankarshan, storage-qa-internal
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-04-23 06:25:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: volume configuration and free size in each brick (Flags: none)

Description Eng Khalid Jamal 2019-04-17 21:17:37 UTC
Created attachment 1556031 [details]
volume configuration and free size in each brick

Description of problem:
I created a gluster volume, replica 3 with arbiter, using four disks on each of three servers. When I checked the volume usage, I saw that one brick on server one and server two was completely full while the other bricks were free. Can anyone explain what happened, in case I forgot something? My configuration is:

gluster volume create gv0 replica 3 arbiter 1 gfs1:/data/brick1/gv0 gfs2:/data/brick1/gv0 gfs3:/data/brick1/gv0 gfs1:/data/brick2/gv0 gfs2:/data/brick2/gv0 gfs3:/data/brick2/gv0 gfs1:/data/brick3/gv0 gfs2:/data/brick3/gv0 gfs3:/data/brick3/gv0 gfs1:/data/brick4/gv0 gfs2:/data/brick4/gv0 gfs3:/data/brick4/gv0
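For context (a sketch inferred from the create command above, not additional output from the report): with `replica 3 arbiter 1`, bricks are grouped in order into replica sets of three, the third brick of each set acting as the arbiter, which here yields a 4-way distribute over four replica sets, with all arbiters on gfs3:

```shell
# Replica sets formed by the create command above (third brick of each set is the arbiter):
#   subvol 1: gfs1:/data/brick1/gv0  gfs2:/data/brick1/gv0  gfs3:/data/brick1/gv0 (arbiter)
#   subvol 2: gfs1:/data/brick2/gv0  gfs2:/data/brick2/gv0  gfs3:/data/brick2/gv0 (arbiter)
#   subvol 3: gfs1:/data/brick3/gv0  gfs2:/data/brick3/gv0  gfs3:/data/brick3/gv0 (arbiter)
#   subvol 4: gfs1:/data/brick4/gv0  gfs2:/data/brick4/gv0  gfs3:/data/brick4/gv0 (arbiter)
gluster volume info gv0   # shows "Number of Bricks: 4 x (2 + 1) = 12"
```

Without sharding, DHT places each whole file on a single replica set, so a handful of large VM images can fill one set while the others stay nearly empty.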

I added this volume as the master storage domain inside an oVirt virtualization server.

Version-Release number of selected component (if applicable):

glusterfs 6.0


Comment 2 Karthik U S 2019-04-18 06:26:58 UTC
Since you are using glusterfs 6.0, I am changing the product & version accordingly, and since it is a distribution issue I am changing the component as well and assigning it to the right person.

Comment 3 Susant Kumar Palai 2019-04-22 15:24:40 UTC
Will need the following initial pieces of information to analyze the issue.
1- gluster volume info 
2- run the following command on the root of all the bricks on all servers.
   "getfattr -m . -de hex <full_brick_path>"
3- disk usage of each brick

Susant
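Step 2 above can be sketched as follows (the brick path is an example taken from the volume-create command in the description):

```shell
# Run on every server, on the root of each brick:
getfattr -m . -d -e hex /data/brick1/gv0

# Among the returned xattrs, trusted.glusterfs.dht encodes the DHT layout
# (hash range) assigned to that distribute subvolume; across all subvolumes
# the ranges should be non-overlapping and together cover 0x00000000-0xffffffff.
```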

Comment 4 Eng Khalid Jamal 2019-04-23 06:23:06 UTC
(In reply to Susant Kumar Palai from comment #3)
> Will need the following initial pieces of information to analyze the issue.
> 1- gluster volume info 
> 2- run the following command on the root of all the bricks on all servers.
>    "getfattr -m . -de hex <full_brick_path>"
> 3- disk usage of each brick
> 
> Susant

Thank you, Susant.

I found the solution to my issue: when I created the volume I forgot to enable the sharding feature, which is why the data was not distributed equally across the bricks. I resolved it as follows:

1- Moved all my VM disks to another storage domain.
2- Put my disks in maintenance mode.
3- Stopped my storage domain.
4- At this point I had two options: remove all the storage and create it again, or simply enable the sharding option. I chose the latter, since my storage does not hold much data.
5- Started my storage domain.
6- Now the data is distributed to all bricks correctly.
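The sharding fix in step 4 corresponds to the following GlusterFS CLI commands (a sketch; the volume name gv0 comes from the report, and the storage-domain steps themselves are performed in oVirt):

```shell
# Enable sharding so large files (e.g. VM images) are split into shards
# that DHT can spread across all distribute subvolumes.
gluster volume set gv0 features.shard on

# Optionally set the shard size (the default is 64MB).
gluster volume set gv0 features.shard-block-size 64MB

# Verify that the options took effect.
gluster volume info gv0
```

Note that sharding only applies to files created after it is enabled; existing files keep their single-subvolume layout, which is why the VM disks had to be moved off the storage domain and back.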

thanks again

Comment 5 Susant Kumar Palai 2019-04-23 07:10:30 UTC
(In reply to Eng Khalid Jamal from comment #4)
> (In reply to Susant Kumar Palai from comment #3)
> > Will need the following initial pieces of information to analyze the issue.
> > 1- gluster volume info 
> > 2- run the following command on the root of all the bricks on all servers.
> >    "getfattr -m . -de hex <full_brick_path>"
> > 3- disk usage of each brick
> > 
> > Susant
> 
> Thank you, Susant.
> 
> I found the solution to my issue: when I created the volume I forgot to
> enable the sharding feature, which is why the data was not distributed
> equally across the bricks. I resolved it as follows:
> 
> 1- Moved all my VM disks to another storage domain.
> 2- Put my disks in maintenance mode.
> 3- Stopped my storage domain.
> 4- At this point I had two options: remove all the storage and create it
> again, or simply enable the sharding option. I chose the latter, since my
> storage does not hold much data.
> 5- Started my storage domain.
> 6- Now the data is distributed to all bricks correctly.
> 
> thanks again

Great! Thanks for the update.