| Summary: | Rebalance: Files are rebalanced every time rebalance is started after add-brick, including when rebalance is started multiple times while I/O is in progress on the client | ||
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | senaik |
| Component: | distribute | Assignee: | Nithya Balachandran <nbalacha> |
| Status: | CLOSED WONTFIX | QA Contact: | storage-qa-internal <storage-qa-internal> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 2.1 | CC: | rhs-bugs, spalai, vbellur |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-11-27 12:10:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description of problem:
========================
On adding bricks while I/O is in progress on the client, files are rebalanced every time rebalance is started.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.30rhs

How reproducible:
=================
Not tried

Steps to Reproduce:
====================
1. Create a distribute volume with 4 bricks.

2. FUSE/NFS mount the volume, create a directory, and create some files in it:

       for i in {1..500} ; do dd if=/dev/urandom of=f"$i" bs=10M count=1; done

3. While file creation is in progress, add 2 bricks to the volume, start rebalance, and check the rebalance status:

       gluster v rebalance vol1 start
       gluster v rebalance vol1 status

       Node          Rebalanced-files   size     scanned   failures   skipped   status      run time in secs
       ----          ----------------   ----     --------  --------   -------   ------      ----------------
       localhost     0                  0Bytes   43        0          8         completed   0.00
       10.70.34.86   4                  40.0MB   44        0          4         completed   1.00
       10.70.34.89   0                  0Bytes   43        0          0         completed   1.00
       10.70.34.88   9                  90.0MB   56        0          2         completed   1.00
       10.70.34.87   9                  90.0MB   48        0          7         completed   2.00
       volume rebalance: vol1: success:

4. Execute the rebalance start force command:

       gluster v rebalance vol1 start force

       Node          Rebalanced-files   size      scanned   failures   skipped   status      run time in secs
       ----          ----------------   ----      --------  --------   -------   ------      ----------------
       localhost     11                 110.0MB   146       0          0         completed   2.00
       10.70.34.86   14                 140.0MB   152       0          0         completed   3.00
       10.70.34.89   0                  0Bytes    132       0          0         completed   0.00
       10.70.34.88   6                  60.0MB    138       0          0         completed   1.00
       10.70.34.87   12                 120.0MB   155       0          0         completed   2.00
       volume rebalance: vol1: success:

5.
Start rebalance again (I/O is still in progress on the client) and check the status:

       Node          Rebalanced-files   size      scanned   failures   skipped   status        run time in secs
       ----          ----------------   ----      --------  --------   -------   ------        ----------------
       localhost     0                  0Bytes    257       0          19        completed     1.00
       10.70.34.86   11                 110.0MB   254       0          31        completed     3.00
       10.70.34.89   0                  0Bytes    257       0          0         completed     0.00
       10.70.34.88   13                 130.0MB   196       0          0         in progress   3.00
       10.70.34.87   5                  50.0MB    259       0          22        completed     2.00
       volume rebalance: vol1: success:

Actual results:
===============
Files are rebalanced every time rebalance is started.

Expected results:
================
After add-brick and the first rebalance, the layout has already been fixed, so even while file creation is still in progress on the client, new files should hash directly to the bricks in the current layout (no further bricks have been added since), and subsequent rebalance runs should not migrate files again.

Additional info:
================
[root@boost brick1]# gluster v i vol1

Volume Name: vol1
Type: Distribute
Volume ID: 3e7b0869-77f9-4a4a-a148-ab9402e79add
Status: Started
Number of Bricks: 6
Transport-type: tcp
Bricks:
Brick1: 10.70.34.85:/rhs/brick1/a1
Brick2: 10.70.34.86:/rhs/brick1/a2
Brick3: 10.70.34.87:/rhs/brick1/a3
Brick4: 10.70.34.88:/rhs/brick1/a4
Brick5: 10.70.34.89:/rhs/brick1/a5
Brick6: 10.70.34.88:/rhs/brick1/a6
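The reproduction steps above can be sketched as one command sequence. This is a hedged sketch, not a verified script: the mount point, the background-I/O arrangement, and the exact brick list are assumptions taken from the volume info above, and the commands must be run against a live test cluster.

```shell
#!/bin/sh
# Sketch of the reproduction steps. Hostnames/brick paths are taken from
# the "Additional info" section; the mount point is an assumption.
VOL=vol1
MNT=/mnt/vol1

# 1. Create a 4-brick distribute volume and start it.
gluster volume create $VOL 10.70.34.85:/rhs/brick1/a1 10.70.34.86:/rhs/brick1/a2 \
    10.70.34.87:/rhs/brick1/a3 10.70.34.88:/rhs/brick1/a4
gluster volume start $VOL

# 2. Mount the volume and start file creation in the background.
mkdir -p $MNT
mount -t glusterfs 10.70.34.85:/$VOL $MNT
mkdir -p $MNT/dir1 && cd $MNT/dir1
( for i in $(seq 1 500); do dd if=/dev/urandom of=f"$i" bs=10M count=1; done ) &

# 3. While I/O is in progress, add 2 bricks and rebalance.
gluster volume add-brick $VOL 10.70.34.89:/rhs/brick1/a5 10.70.34.88:/rhs/brick1/a6
gluster volume rebalance $VOL start
gluster volume rebalance $VOL status

# 4-5. Rebalance again with force, then a plain start, checking status
#      after each run; each run reports migrated files even though the
#      layout has not changed.
gluster volume rebalance $VOL start force
gluster volume rebalance $VOL status
gluster volume rebalance $VOL start
gluster volume rebalance $VOL status
```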
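The expected behaviour rests on how DHT places files: each brick owns a range of a 32-bit hash space, and a file name hashes to a point in that space. The toy model below illustrates the reasoning only; the `crc32` hash and the even range split are stand-ins for DHT's actual hash function and on-disk layout, not GlusterFS code. With an unchanged layout, re-evaluating every file's placement moves nothing, which is why repeated rebalance runs were expected to be near no-ops.

```python
# Toy model of DHT-style layout placement (illustrative only; the hash
# and the even range split are assumptions, not GlusterFS internals).
import zlib

RING = 2**32  # size of the 32-bit hash space

def make_layout(bricks):
    """Divide the hash space into one contiguous range per brick."""
    step = RING // len(bricks)
    return [(i * step, RING if i == len(bricks) - 1 else (i + 1) * step, b)
            for i, b in enumerate(bricks)]

def brick_for(name, layout):
    """Return the brick whose range contains hash(name)."""
    h = zlib.crc32(name.encode()) % RING  # stand-in for DHT's hash
    for lo, hi, brick in layout:
        if lo <= h < hi:
            return brick
    raise AssertionError("layout does not cover the hash space")

# Layout fixed once, after add-brick (6 bricks, as in this bug).
layout = make_layout(["a1", "a2", "a3", "a4", "a5", "a6"])

# Placing the same files against the same layout twice is a no-op:
placement1 = {f"f{i}": brick_for(f"f{i}", layout) for i in range(1, 501)}
placement2 = {f"f{i}": brick_for(f"f{i}", layout) for i in range(1, 501)}
print(placement1 == placement2)  # True: no file changes brick
```

In the bug, by contrast, every rebalance run reports freshly migrated files even though no bricks were added between runs, suggesting files were being placed off their hashed subvolume while I/O raced with rebalance.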