I'm using a system with no peers and single-brick volumes, so we'd expect no communication overhead.

gluster volume geo-replication status -- 3.2 sec: tolerable.
gluster volume geo-replication start -- 9.5 sec: hmmmm.
gluster volume geo-replication stop -- 26.6 sec: eeeeeeeeeeeeeeeeeeeeeeeek.

Note that this happens only on my system; on AWS we get ~3 sec runtimes for each operation. Since my system otherwise performs well, it's still an interesting question why this is so.
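For context, a minimal way to reproduce these numbers is to wrap each CLI call in the shell's time builtin. The sketch below mirrors the argument order of the AWS transcript quoted later in this report; <MASTER-VOL> and <SLAVE-URL> are placeholders, not the reporter's actual values.

# time each geo-replication operation; substitute a real volume and slave URL
time gluster volume geo-replication status <MASTER-VOL> <SLAVE-URL>
time gluster volume geo-replication start <MASTER-VOL> <SLAVE-URL>
time gluster volume geo-replication stop <MASTER-VOL> <SLAVE-URL>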
PATCH: http://patches.gluster.com/patch/6922 in master (syncdaemon: load xattrs from libc on-demand)
PATCH: http://patches.gluster.com/patch/6923 in master (glusterd: refactor gsync_status() so that we can get at the pidfile)
PATCH: http://patches.gluster.com/patch/6924 in master (glusterd: some cleanups needed for 70adbe7b [refactor gsync_status() ...])
Tested with 3.2.0qa14 on AWS.

---
# time gluster volume geo-replication stop beta1 root.compute.amazonaws.com::slave
geo-replication session stopped successfully

real    0m1.626s
user    0m0.007s
sys     0m0.010s

[root@ip-10-170-205-102 mntpt]# time gluster volume geo-replication start beta1 root.compute.amazonaws.com::slave
geo-replication session started Successfully

real    0m1.061s
user    0m0.005s
sys     0m0.011s
-----
> geo-replication session stopped successfully
...
> geo-replication session started Successfully

Can someone fix the cases? This looks very unpolished.

Avati
(In reply to comment #6)
> > geo-replication session stopped successfully
> ...
> > geo-replication session started Successfully
>
> Can someone fix the cases? This looks very unpolished.
>
> Avati

Kaushik has already taken that up.