Bug 855767 - glusterfs 3.3.0 2 replica high cpu load
Status: CLOSED DUPLICATE of bug 855765
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Hardware: x86_64 Linux
Severity: high
Assigned To: Amar Tumballi
QA Contact: Sudhir D
Depends On:
Reported: 2012-09-10 04:31 EDT by cartment
Modified: 2013-12-18 19:08 EST (History)
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2012-09-10 05:01:43 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description cartment 2012-09-10 04:31:40 EDT
I have two Gluster servers (gfs0 and gfs1) set up for replication. gfs0 holds 1.3 TB of data; gfs1 is empty.

[root@gfs0 tmp]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            4.4G  3.5G  662M  85% /
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/xvdb             2.2T  1.3T  802G  62% /d/xvdb

[root@gfs2 tmp]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda3            7.8G  1.8G  5.6G  25% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/xvda1            194M   28M  157M  15% /boot
/dev/xvdb             2.2T  124M  2.1T   1% /d/xvdb

I created a volume named ot-gfs on gfs0, then needed to add another brick.

GlusterFS servers: CentOS 6.0 x86_64 (replica 2), kernel 2.6.32-220.el6.x86_64

[root@gfs0 tmp]#gluster volume  add-brick ot-gfs  replica 2

[root@gfs0 tmp]# gluster volume info all

Volume Name: ot-gfs
Type: Replicate
Volume ID: 3268628f-4954-4e35-9bdc-28345655b643
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Options Reconfigured:
cluster.self-heal-daemon: off

After the add-brick, the CPU load on gfs0 becomes very high.
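A sketch of commands that could help narrow down where the CPU time is going after the add-brick (volume name taken from the report; the process-inspection commands are illustrative and assume a single brick process on the box, which may not match your setup):

```shell
# Check whether self-heal is actively crawling entries onto the new brick
# (heal traffic after add-brick is a common source of high CPU on the
# data-holding replica).
gluster volume heal ot-gfs info

# Confirm which brick/NFS/self-heal processes are online and their PIDs.
gluster volume status ot-gfs

# Inspect per-thread CPU usage of the brick process; assumes exactly one
# glusterfsd is running on this server.
top -H -p "$(pgrep -f glusterfsd | head -n1)"
```

Note that the volume in the report already has `cluster.self-heal-daemon: off` set, so any healing here would be client-side; these commands only observe state and are safe to run on a live volume.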
Comment 2 Amar Tumballi 2012-09-10 05:01:43 EDT

*** This bug has been marked as a duplicate of bug 855765 ***
