I have two Gluster servers (gfs0 at 192.168.190.56 and gfs1 at 192.168.190.57) that I want to set up as a replica pair. gfs0 holds 1.3T of data; gfs1 is empty.

[root@gfs0 tmp]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            4.4G  3.5G  662M  85% /
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/xvdb             2.2T  1.3T  802G  62% /d/xvdb

[root@gfs1 tmp]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda3            7.8G  1.8G  5.6G  25% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/xvda1            194M   28M  157M  15% /boot
/dev/xvdb             2.2T  124M  2.1T   1% /d/xvdb

I created a volume named ot-gfs on gfs0 and then added a second brick:

[root@gfs0 tmp]# gluster volume add-brick ot-gfs replica 2 192.168.190.57:/d/xvdb/export/

[root@gfs0 tmp]# gluster volume info all

Volume Name: ot-gfs
Type: Replicate
Volume ID: 3268628f-4954-4e35-9bdc-28345655b643
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.190.56:/d/xvdb/export
Brick2: 192.168.190.57:/d/xvdb/export
Options Reconfigured:
cluster.self-heal-daemon: off

After adding the brick, the CPU load on gfs0 is very high.
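For anyone who lands here with the same symptom: adding an empty replica brick to a volume that already holds 1.3T of data triggers a self-heal of all of that data onto the new brick, which is the usual cause of the sustained CPU load. A rough sketch of commands for watching and throttling the heal follows; the gluster volume options shown are standard, but the specific values are illustrative assumptions, not settings confirmed to resolve this report:

# List files still pending heal onto the new brick
gluster volume heal ot-gfs info

# Throttle how many files are healed in parallel (the gluster
# default is 16; a lower value trades heal speed for CPU headroom)
gluster volume set ot-gfs cluster.background-self-heal-count 4

# With an empty target brick, the "full" algorithm copies whole
# files and skips block checksumming, one common source of heal
# CPU load on the source brick
gluster volume set ot-gfs cluster.data-self-heal-algorithm full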
*** Bug 855767 has been marked as a duplicate of this bug. ***
How high is "very high"? Is the high CPU load interfering with anything else? I'd love to make self-heal faster, but without specifics it's not clear how to prioritize that vs. other potential improvements.
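(As a concrete way to supply those specifics: `top` and `gluster volume profile` are standard tools for this; the 60-second sampling window below is an arbitrary choice:)

# Sample CPU usage of the gluster processes every 5 seconds
top -b -d 5 -n 12 | grep -E 'gluster(fsd|fs|d)'

# Collect per-brick operation latencies while the load is high
gluster volume profile ot-gfs start
sleep 60
gluster volume profile ot-gfs info
gluster volume profile ot-gfs stop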
Closing this bug, as not enough information is available and version 3.3.0 is no longer supported. Please feel free to reopen it if you observe the same problem on any gluster version >= 3.5.x.

Pranith