Bug 1564366 - [GSS] Rebalance process consuming high CPU utilization after a new brick is added.
Summary: [GSS] Rebalance process consuming high CPU utilization after a new brick is added.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Severity: high
Priority: urgent
Target Milestone: ---
Assignee: Raghavendra G
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-06 05:01 UTC by WenhanShi
Modified: 2018-04-19 22:54 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-19 22:54:52 UTC
Target Upstream Version:


Attachments

Description WenhanShi 2018-04-06 05:01:19 UTC
Description of problem:
After a new brick was added to a replicated volume, the rebalance process started consuming a large amount of CPU. The new brick's process (glusterfsd) also consumes about as much CPU as the rebalance process.

Version-Release number of selected component (if applicable):
gluster-nagios-addons-0.2.7-1.el7rhgs.x86_64                Wed Oct  5 20:24:07 2016
gluster-nagios-common-0.2.4-1.el7rhgs.noarch                Wed Oct  5 20:23:36 2016
glusterfs-3.7.9-12.el7rhgs.x86_64                           Wed Oct  5 20:23:06 2016
glusterfs-api-3.7.9-12.el7rhgs.x86_64                       Wed Oct  5 20:23:10 2016
glusterfs-cli-3.7.9-12.el7rhgs.x86_64                       Wed Oct  5 20:23:10 2016
glusterfs-client-xlators-3.7.9-12.el7rhgs.x86_64            Wed Oct  5 20:23:10 2016
glusterfs-fuse-3.7.9-12.el7rhgs.x86_64                      Wed Oct  5 20:23:10 2016
glusterfs-ganesha-3.7.9-12.el7rhgs.x86_64                   Wed Oct  5 20:24:05 2016
glusterfs-geo-replication-3.7.9-12.el7rhgs.x86_64           Wed Oct  5 20:24:02 2016
glusterfs-libs-3.7.9-12.el7rhgs.x86_64                      Wed Oct  5 20:23:05 2016
glusterfs-rdma-3.7.9-12.el7rhgs.x86_64                      Wed Oct  5 20:25:35 2016
glusterfs-server-3.7.9-12.el7rhgs.x86_64                    Wed Oct  5 20:24:02 2016
nfs-ganesha-gluster-2.3.1-8.el7rhgs.x86_64                  Wed Oct  5 20:24:05 2016
python-gluster-3.7.9-12.el7rhgs.noarch                      Wed Oct  5 20:23:31 2016
samba-vfs-glusterfs-4.4.5-3.el7rhgs.x86_64                  Wed Oct  5 20:23:50 2016
vdsm-gluster-4.17.33-1.el7rhgs.noarch                       Wed Oct  5 20:24:06 2016


How reproducible:
N/A

Steps to Reproduce:
1. N/A

Actual results:
CPU utilization is high

Expected results:
CPU utilization is normal

Additional info:
The rebalance status reports a large number of failures, and the rebalance log contains many "Migrate file failed ... file has hardlinks" errors.
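As a rough illustration of how the failure count was gathered (the log path and exact message format are assumptions based on the error text quoted above; a sample log is generated here so the snippet is self-contained), the hardlink failures can be counted like this:

```shell
# Count "file has hardlinks" migration failures in a rebalance log.
# On a real system the log is typically /var/log/glusterfs/<volname>-rebalance.log;
# here a temporary sample file stands in for it.
log=$(mktemp)
cat > "$log" <<'EOF'
E [dht-rebalance.c] 0-vol-dht: Migrate file failed: /data/a.bin: file has hardlinks
I [dht-rebalance.c] 0-vol-dht: completed migration of /data/b.bin
E [dht-rebalance.c] 0-vol-dht: Migrate file failed: /data/c.bin: file has hardlinks
EOF
count=$(grep -c 'file has hardlinks' "$log")
echo "$count"
rm -f "$log"
```

Comparing this count against the failure column of `gluster volume rebalance <volname> status` shows how much of the reported failure total is attributable to hardlinked files.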

Comment 8 Amar Tumballi 2018-04-06 10:05:30 UTC
> For the migrate failures: if a file with hardlinks will not be migrated, is it OK to ignore the error entry in the rebalance log and the failure count in the rebalance status?
> 
> In my view, this should not be logged as an " E " if it is expected behavior.

Yes, it is OK. I agree it shouldn't be logged at Error level. The reason it was left as-is is that the same code path is hit in the 'remove-brick' case, and in that case the hardlinks do need to be migrated.

It is good to tell the customer that this log message can be ignored.
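To see why these files are skipped: rebalance declines to migrate a regular file whose hard-link count is greater than 1, and that is what produces the logged failures. A minimal self-contained sketch (the file names are made up for illustration):

```shell
# A file reached through more than one hard link has stat link count > 1;
# rebalance skips migrating such files, logging "file has hardlinks".
tmpdir=$(mktemp -d)
touch "$tmpdir/original"
ln "$tmpdir/original" "$tmpdir/hardlink"   # create a second hard link
links=$(stat -c '%h' "$tmpdir/original")   # GNU stat: %h = number of hard links
echo "$links"
rm -rf "$tmpdir"
```

Running `stat -c '%h'` against the paths named in the "Migrate file failed" entries lets the customer confirm that each failed file really is hardlinked and that the errors are expected.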

