Bug 1701039 - gluster replica 3 arbiter: data not distributed equally across bricks
Summary: gluster replica 3 arbiter: data not distributed equally across bricks
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-04-17 21:17 UTC by Eng Khalid Jamal
Modified: 2019-04-23 07:10 UTC
CC: 5 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-23 06:25:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
volume configuration and free size in each brick (163.96 KB, application/vnd.rar)
2019-04-17 21:17 UTC, Eng Khalid Jamal

Description Eng Khalid Jamal 2019-04-17 21:17:37 UTC
Created attachment 1556031 [details]
volume configuration and free size in each brick

Description of problem:
I created a gluster replica 3 arbiter volume from four disks on each of three servers. When I check the volume's usage, I see that one brick on server one and server two is full while the other bricks are free. Can anyone explain what is happening, in case I forgot something? My configuration is:

gluster volume create gv0 replica 3 arbiter 1 gfs1:/data/brick1/gv0 gfs2:/data/brick1/gv0 gfs3:/data/brick1/gv0 gfs1:/data/brick2/gv0 gfs2:/data/brick2/gv0 gfs3:/data/brick2/gv0 gfs1:/data/brick3/gv0 gfs2:/data/brick3/gv0 gfs3:/data/brick3/gv0 gfs1:/data/brick4/gv0 gfs2:/data/brick4/gv0 gfs3:/data/brick4/gv0
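
For context, gluster groups bricks in the order given, three at a time, with the third brick of each set acting as the arbiter; DHT then hashes each whole file to exactly one of those replica sets. A minimal sketch of how to inspect the resulting layout (brick paths taken from the create command above; not output from the report):

# The twelve bricks above form four replica sets, each two data bricks
# plus one arbiter (metadata only, no file data):
#   set 0: gfs1:/data/brick1/gv0  gfs2:/data/brick1/gv0  gfs3:/data/brick1/gv0 (arbiter)
#   set 1: gfs1:/data/brick2/gv0  gfs2:/data/brick2/gv0  gfs3:/data/brick2/gv0 (arbiter)
#   ... and likewise for brick3 and brick4.
# Without sharding, each file lands entirely on the one set its name
# hashes to, so a few large VM images can fill one set while the
# others stay nearly empty.
gluster volume info gv0        # confirms brick order and arbiter placement
df -h /data/brick{1..4}/gv0    # per-brick usage, run on each server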

I added this volume as the master storage domain in an oVirt virtualization server.

Version-Release number of selected component (if applicable):

glusterfs 6.0

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Karthik U S 2019-04-18 06:26:58 UTC
Since you are using glusterfs 6.0, I am changing the product and version accordingly. Since it is a distribution issue, I am changing the component as well and assigning it to the right person.

Comment 3 Susant Kumar Palai 2019-04-22 15:24:40 UTC
Will need the following initial pieces of information to analyze the issue.
1- gluster volume info 
2- run the following command on the root of all the bricks on all servers.
   "getfattr -m . -de hex <full_brick_path>"
3- disk usage of each brick

Susant
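
A hedged sketch for gathering the three items requested above on each of gfs1, gfs2, and gfs3 (brick paths assumed from the create command in the description):

gluster volume info gv0
for b in /data/brick{1..4}/gv0; do
    echo "== $b =="
    getfattr -m . -de hex "$b"   # dumps xattrs such as trusted.glusterfs.dht
    df -h "$b"                   # disk usage of this brick
done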

Comment 4 Eng Khalid Jamal 2019-04-23 06:23:06 UTC
(In reply to Susant Kumar Palai from comment #3)
> Will need the following initial pieces of information to analyze the issue.
> 1- gluster volume info 
> 2- run the following command on the root of all the bricks on all servers.
>    "getfattr -m . -de hex <full_brick_path>"
> 3- disk usage of each brick
> 
> Susant

Thank you, Susant.

I found the solution to my issue: when I created the volume I forgot to enable the sharding feature, which is why the data was not distributed equally to all bricks. I solved it as follows:

1- Moved all my VM disks to another storage domain.
2- Put my disks in maintenance mode.
3- Stopped my storage domain.
4- At this point I had two options: remove all the storage and create it again, or simply enable the sharding option. I chose the second, because my storage does not hold much data (see the sketch after this list).
5- Started my storage domain.
6- Now the data is distributed to all bricks correctly.
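
A minimal sketch of the "enable sharding" option chosen in step 4; the 64MB shard size is an assumption (a common choice for VM workloads), not a value from the report. Sharding only applies to files written after it is enabled, which is why the VM disks had to be moved off the domain and back:

gluster volume set gv0 features.shard on
gluster volume set gv0 features.shard-block-size 64MB   # assumed size, tune for your workload
gluster volume get gv0 features.shard                   # verify the option is now "on"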

Thanks again.

Comment 5 Susant Kumar Palai 2019-04-23 07:10:30 UTC
(In reply to Eng Khalid Jamal from comment #4)
> (In reply to Susant Kumar Palai from comment #3)
> > Will need the following initial pieces of information to analyze the issue.
> > 1- gluster volume info 
> > 2- run the following command on the root of all the bricks on all servers.
> >    "getfattr -m . -de hex <full_brick_path>"
> > 3- disk usage of each brick
> > 
> > Susant
> 
> Thank you, Susant.
> 
> I found the solution to my issue: when I created the volume I forgot to
> enable the sharding feature, which is why the data was not distributed
> equally to all bricks. I solved it as follows:
> 
> 1- Moved all my VM disks to another storage domain.
> 2- Put my disks in maintenance mode.
> 3- Stopped my storage domain.
> 4- At this point I had two options: remove all the storage and create it
> again, or simply enable the sharding option. I chose the second, because
> my storage does not hold much data (see the sketch after this list).
> 5- Started my storage domain.
> 6- Now the data is distributed to all bricks correctly.
> 
> Thanks again.

Great! Thanks for the update.

