Bug 855765 - glusterfs 3.3.0 2 replica high cpu load [NEEDINFO]
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Assigned To: Pranith Kumar K
QA Contact: storage-qa-internal@redhat.com
Duplicates: 855767
Reported: 2012-09-10 04:28 EDT by cartment
Modified: 2016-09-19 18:06 EDT
CC List: 5 users

Doc Type: Bug Fix
Last Closed: 2016-01-03 22:49:21 EST
Type: Bug
jdarcy: needinfo? (283167932)


Attachments: None
Description cartment 2012-09-10 04:28:57 EDT
I have two Gluster servers (gfs0, 192.168.190.56, and gfs1, 192.168.190.57) that I want to set up for replication.
gfs0 holds about 1.3T of data; gfs1 is empty.

[root@gfs0 tmp]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            4.4G  3.5G  662M  85% /
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/xvdb             2.2T  1.3T  802G  62% /d/xvdb

[root@gfs2 tmp]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda3            7.8G  1.8G  5.6G  25% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/xvda1            194M   28M  157M  15% /boot
/dev/xvdb             2.2T  124M  2.1T   1% /d/xvdb

I created a volume named ot-gfs on gfs0 and then needed to add another brick:

[root@gfs0 tmp]# gluster volume add-brick ot-gfs replica 2 192.168.203.57:/d/xvdb/export/

[root@gfs0 tmp]# gluster volume info all

Volume Name: ot-gfs
Type: Replicate
Volume ID: 3268628f-4954-4e35-9bdc-28345655b643
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.190.56:/d/xvdb/export
Brick2: 192.168.190.57:/d/xvdb/export
Options Reconfigured:
cluster.self-heal-daemon: off
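
With cluster.self-heal-daemon off, the 1.3T already on gfs0 is only copied to the new brick when files are accessed through a client or when a heal is started by hand. A minimal sketch of starting and watching that heal with the 3.3-era CLI (assuming the self-heal daemon is re-enabled first; the volume name and prompt are the ones from the output above):

Re-enable the self-heal daemon so the heal commands have a daemon to drive:
[root@gfs0 tmp]# gluster volume set ot-gfs cluster.self-heal-daemon on
Start a full crawl that copies the existing data to the new brick:
[root@gfs0 tmp]# gluster volume heal ot-gfs full
List the entries still pending heal, to watch progress:
[root@gfs0 tmp]# gluster volume heal ot-gfs info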


After adding the brick, the CPU load on gfs0 is very high.
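
For reference, a rough way to see which Gluster process is responsible for the load, assuming the 3.3 CLI and standard procps; <brick-pid> below is a placeholder for the PID that volume status prints:

Show the PIDs of the brick, NFS and self-heal processes for the volume:
[root@gfs0 tmp]# gluster volume status ot-gfs
Watch per-thread CPU usage of the suspect process (substitute the PID reported above):
[root@gfs0 tmp]# top -H -p <brick-pid>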
Comment 2 Amar Tumballi 2012-09-10 05:01:43 EDT
*** Bug 855767 has been marked as a duplicate of this bug. ***
Comment 3 Jeff Darcy 2012-10-15 15:56:29 EDT
How high is "very high"?  Is the high CPU load interfering with anything else?  I'd love to make self-heal faster, but without specifics it's not clear how to prioritize that vs. other potential improvements.
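
One possible way to put numbers on "very high", assuming sysstat is installed and the profile feature of the 3.3 CLI is usable here (<brick-pid> is again a placeholder for the PID from volume status):

Sample CPU usage of the brick process every 5 seconds for one minute:
[root@gfs0 tmp]# pidstat -p <brick-pid> 5 12
Turn on per-operation counters for the volume, let it run for a while, then dump them:
[root@gfs0 tmp]# gluster volume profile ot-gfs start
[root@gfs0 tmp]# gluster volume profile ot-gfs info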
Comment 4 Pranith Kumar K 2016-01-03 22:49:21 EST
Closing this bug as not enough information is available and version 3.3.0 is no longer supported. Please feel free to reopen it if you observe the same problem on any Gluster version >= 3.5.x.

Pranith
