Description of problem:
When the slave volume has less available space than the master, geo-rep create should warn the user. This used to work and appears to be broken now.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.39rhs-1.el6rhs.x86_64

How reproducible:
Consistently

Steps to Reproduce:
1. Create master and slave volumes; make sure the slave volume has less space than the master volume.
2. Run geo-rep create.

Actual results:
The create succeeds without throwing any error/warning message.

Expected results:
It should fail with an error/warning message.

Additional info:
This used to warn before, but it no longer does.
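For reference, the missing pre-create check amounts to comparing the available space of the two volumes and refusing to create the session when the master has more than the slave. A minimal sketch of that comparison, written as a standalone shell function; the function name `check_space` and the sample sizes are illustrative, not the actual gluster internals (gluster gathers these figures itself via gsyncd/statfs):

```shell
#!/bin/sh
# Hypothetical sketch of the space check geo-rep create is expected to perform.
# check_space MASTER_AVAIL_KB SLAVE_AVAIL_KB
# Returns non-zero (failure) when the master has more available space
# than the slave, mirroring the error message seen in the fixed builds.
check_space() {
    master_avail="$1"
    slave_avail="$2"
    if [ "$master_avail" -gt "$slave_avail" ]; then
        echo "Total available size of master is greater than available size of slave" >&2
        return 1
    fi
    return 0
}

# Example figures (in KB) matching the df output above:
# master 164G available vs slave 82G available -> check must fail
check_space 171966464 85983232 || echo "geo-replication command failed"
```

With those inputs the function rejects the session, which is the behavior this bug reports as missing.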
This is not working with 57geo either.

[root@gauss ~]# df -h
Filesystem                                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_georep-lv_root                43G  2.2G   39G   6% /
tmpfs                                      1004M     0 1004M   0% /dev/shm
/dev/vda1                                   485M   33M  427M   8% /boot
rhsqe-repo.lab.eng.blr.redhat.com:/opt/qa/  1.9T  148G  1.6T   9% /opt/qa
/dev/mapper/RHS_vg1-RHS_lv1                 500G   33M  500G   1% /rhs/bricks
mustang:master                              164G  131M  164G   1% /mnt/master
interceptor:slave                            82G   66M   82G   1% /mnt/slave

[root@spitfire glusterfs-deploy-scripts]# gluster v geo master falcon::slave create
Creating geo-replication session between master & falcon::slave has been successful

create should fail when the slave has less available space than the master. It used to work, but it does not any more. Marking this urgent and a regression.

Tested in version: glusterfs-3.4.0.57geo-1.el6rhs.x86_64
https://code.engineering.redhat.com/gerrit/#/c/18503/
This issue is not completely resolved.

Tested in version: glusterfs-3.4.0.58rhs-1.el6rhs.x86_64

This is my df -h:

[root@gauss ~]# df -h
Filesystem                                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_georep-lv_root                43G  2.2G   39G   6% /
tmpfs                                      1004M     0 1004M   0% /dev/shm
/dev/vda1                                   485M   33M  427M   8% /boot
rhsqe-repo.lab.eng.blr.redhat.com:/opt/qa/  1.9T  149G  1.6T   9% /opt/qa
/dev/mapper/RHS_vg1-RHS_lv1                 500G   33M  500G   1% /rhs/bricks
pythagoras:master                          1000G   11G  989G   2% /mnt/master
euler:slave                                1000G   66M 1000G   1% /mnt/slave

Please note that the volume named 'master' is actually the slave in the geo-replication setup, and the volume named 'slave' is actually the geo-replication master. You can see that the 'master' volume (the actual geo-rep slave) has less available space than the 'slave' volume (the actual geo-rep master).

[root@euclid ~]# gluster v geo slave archimedes::master create push-pem
Total size of master is greater than available size of slave.
geo-replication command failed

I believe this patch is required for this to work properly: https://code.engineering.redhat.com/gerrit/#/c/19042/1

Moving back to ASSIGNED. Please move it to ON_QA when that patch is pushed in.
Per 01/30 Corbett tiger team read-out
Setting flags required to add BZs to RHS 3.0 Errata
This issue is resolved as part of the upstream patches:
http://review.gluster.org/6746
http://review.gluster.org/6844
The fix is available in the RHS 3.0 build.
Verified on the build glusterfs-3.6.0.13-1.el6rhs.

# df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhsauto006-lv_root     14G  2.6G   11G  20% /
tmpfs                                2.0G     0  2.0G   0% /dev/shm
/dev/vda1                            485M   34M  426M   8% /boot
/dev/mapper/snap_vg0-snap_thin_vol0   15G   33M   15G   1% /bricks/brick0
/dev/mapper/snap_vg1-snap_thin_vol1   15G   60M   15G   1% /bricks/brick1
/dev/mapper/snap_vg0-snap_thin_vol2   15G   60M   15G   1% /bricks/brick2
/dev/mapper/snap_vg1-snap_thin_vol3   15G   60M   15G   1% /bricks/brick3
10.70.43.170:slave                    87G  197M   87G   1% /mnt/slave
10.70.43.100:master                  102G  390M  102G   1% /mnt/master

[root@redlake ~]# gluster v geo master 10.70.43.170::slave create push-pem
Total disk size of master is greater than disk size of slave.
Total available size of master is greater than available size of slave
geo-replication command failed
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-1278.html