Description of problem:
When two volumes are mounted to two clients from the same config file, the correct volume is not mounted on the respective client as specified; only one of the volumes is mounted on both clients, on all the mount points given in the config file.

Version-Release number of selected component (if applicable):
# rpm -qa | grep gdeploy
gdeploy-2.0.1-8.el7rhgs.noarch

How reproducible:
Different behaviour on different setups

Steps to Reproduce:
1. Install gdeploy
2. Create and start 2 volumes
3. Mount them on the clients via gdeploy

Config:
[hosts]
10.70.37.131
10.70.37.95
10.70.37.85

[clients-1]
action=mount
volname=disrt-replica1
hosts=10.70.37.192,10.70.37.142
fstype=glusterfs
client_mount_points=/mnt/fuse1,/mnt/fuse2

[clients-2]
action=mount
volname=disrt-replica2
hosts=10.70.37.192,10.70.37.142
fstype=glusterfs
client_mount_points=/mnt/repli1,/mnt/repli2

----------- output -----------
# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/rhel_dhcp37--192-root   17G  1.6G   16G  10% /
devtmpfs                           1.9G     0  1.9G   0% /dev
tmpfs                              1.9G     0  1.9G   0% /dev/shm
tmpfs                              1.9G  8.5M  1.9G   1% /run
tmpfs                              1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  184M  831M  19% /boot
tmpfs                              380M     0  380M   0% /run/user/0
10.70.37.131:disrt-replica2         90G  100M   90G   1% /mnt/fuse1
10.70.37.131:disrt-replica2         90G  100M   90G   1% /mnt/repli1

# df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/rhel_dhcp37--142-root   17G  1.6G   16G  10% /
devtmpfs                           1.9G     0  1.9G   0% /dev
tmpfs                              1.9G     0  1.9G   0% /dev/shm
tmpfs                              1.9G  8.5M  1.9G   1% /run
tmpfs                              1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  184M  831M  19% /boot
tmpfs                              380M     0  380M   0% /run/user/0
10.70.37.131:disrt-replica2         90G  100M   90G   1% /mnt/fuse2
10.70.37.131:disrt-replica2         90G  100M   90G   1% /mnt/repli2

Actual results:
The disrt-replica2 volume is mounted on both clients, on all the mount points given in the config file.

Expected results:
Each client should mount the volume specified for it in the config file.

Additional info:
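For clarity, the mapping the config above is expected to produce can be sketched with a small parser check. This is an illustration of the intended behaviour only, not gdeploy's internal implementation; the `expected_mounts` helper is hypothetical.

```python
# Sketch: parse [clients-N] sections like the ones in the config above and
# list the intended volume -> (host, mount point) assignments. This mirrors
# the expected behaviour described in this report, not gdeploy's own code.
import configparser

CONF = """
[clients-1]
action=mount
volname=disrt-replica1
hosts=10.70.37.192,10.70.37.142
fstype=glusterfs
client_mount_points=/mnt/fuse1,/mnt/fuse2

[clients-2]
action=mount
volname=disrt-replica2
hosts=10.70.37.192,10.70.37.142
fstype=glusterfs
client_mount_points=/mnt/repli1,/mnt/repli2
"""

def expected_mounts(conf_text):
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    mounts = []
    for section in cp.sections():
        if not section.startswith("clients"):
            continue
        vol = cp[section]["volname"]
        hosts = cp[section]["hosts"].split(",")
        points = cp[section]["client_mount_points"].split(",")
        # The N-th client host gets the N-th mount point for THIS volume.
        for host, point in zip(hosts, points):
            mounts.append((host, vol, point))
    return mounts

for host, vol, point in expected_mounts(CONF):
    print(f"{host}: mount {vol} at {point}")
```

With the bug present, the actual mounts show disrt-replica2 for all four entries instead of the per-section volume names.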
Commit https://github.com/gluster/gdeploy/commit/57cce965ac fixes the issue, which was caused by stale entries in the host_vars files.
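The failure mode behind the fix can be sketched as follows. The directory layout and file names here are placeholder assumptions for illustration, not gdeploy's actual paths; the point is that variables written for one [clients-N] section, if not cleared, get reused by the next section.

```python
# Sketch of the stale host_vars problem (placeholder paths, not gdeploy's
# actual working directory): a per-host vars file written for one section
# lingers and the next section reads the old volname unless it is cleared.
import os
import tempfile

def write_section_vars(vars_dir, host, volname):
    # Hypothetical stand-in for gdeploy writing host_vars for a section.
    with open(os.path.join(vars_dir, host), "w") as f:
        f.write(f"volname: {volname}\n")

def cleanup_host_vars(vars_dir):
    # The fix amounts to removing generated host_vars between sections.
    for name in os.listdir(vars_dir):
        os.remove(os.path.join(vars_dir, name))

vars_dir = tempfile.mkdtemp()

# Section [clients-2] runs and leaves its volume name behind...
write_section_vars(vars_dir, "10.70.37.192", "disrt-replica2")
print(os.listdir(vars_dir))          # stale file is still present

# ...so a later run would pick up "disrt-replica2" unless cleaned up:
cleanup_host_vars(vars_dir)
print(os.listdir(vars_dir))          # now empty
```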
Verified this bug on:

# rpm -qa | grep gdeploy
gdeploy-2.0.2-2.el7rhgs.noarch

# cat multiple_mount.conf
[hosts]
dhcp47-147.lab.eng.blr.redhat.com
dhcp47-141.lab.eng.blr.redhat.com
dhcp47-144.lab.eng.blr.redhat.com
dhcp47-139.lab.eng.blr.redhat.com
dhcp47-132.lab.eng.blr.redhat.com

[clients-1]
action=mount
volname=ganesha
hosts=10.70.37.192,10.70.37.142
fstype=glusterfs
client_mount_points=/mnt/ms_ganesha1,/mnt/ms_ganesha2

[clients-2]
action=mount
volname=ganesha1
hosts=10.70.37.192,10.70.37.142
client_mount_points=/mnt/nfs1,/mnt/nfs2
fstype=glusterfs

------
[root@dhcp37-192 mnt]# df -hT
Filesystem                                  Type            Size  Used Avail Use% Mounted on
/dev/mapper/rhel_dhcp37--192-root           xfs              17G  2.7G   15G  16% /
devtmpfs                                    devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs                                       tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs                                       tmpfs           1.9G  8.5M  1.9G   1% /run
tmpfs                                       tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                                   xfs            1014M  184M  831M  19% /boot
rhsqe-repo.lab.eng.blr.redhat.com:/opt      nfs4            1.9T  366G  1.4T  21% /opt
tmpfs                                       tmpfs           380M     0  380M   0% /run/user/0
dhcp47-147.lab.eng.blr.redhat.com:ganesha   fuse.glusterfs  120G  211M  120G   1% /mnt/ms_ganesha1
dhcp47-147.lab.eng.blr.redhat.com:ganesha1  fuse.glusterfs  120G  211M  120G   1% /mnt/nfs1

-------
[root@dhcp37-142 mnt]# df -hT
Filesystem                                  Type            Size  Used Avail Use% Mounted on
/dev/mapper/rhel_dhcp37--142-root           xfs              17G  1.6G   16G  10% /
devtmpfs                                    devtmpfs        1.9G     0  1.9G   0% /dev
tmpfs                                       tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs                                       tmpfs           1.9G  8.5M  1.9G   1% /run
tmpfs                                       tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                                   xfs            1014M  184M  831M  19% /boot
tmpfs                                       tmpfs           380M     0  380M   0% /run/user/0
dhcp47-147.lab.eng.blr.redhat.com:ganesha   fuse.glusterfs  120G  211M  120G   1% /mnt/ms_ganesha2
dhcp47-147.lab.eng.blr.redhat.com:ganesha1  fuse.glusterfs  120G  211M  120G   1% /mnt/nfs2

Each client now mounts the volumes on the mount points specified for it in the config file, so the fix is verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2777