Description of problem:
When the shared storage volume is not mounted on one of the storage nodes, running snap_scheduler.py status on that node shows "snap_scheduler: Snapshot scheduling status: Disabled" even though scheduling is enabled on the other storage nodes.

Version-Release number of selected component (if applicable):
[root@localhost upsteam_build]# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Enabled
[root@localhost upsteam_build]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-rdma-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-libs-3.7dev-0.910.git17827de.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-api-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-geo-replication-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-cli-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-server-3.7dev-0.910.git17827de.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume.
2. Create the shared storage volume and mount it on each storage node.
3. Run snap_scheduler.py init on each storage node.
4. Run snap_scheduler.py enable on a single storage node.
5. Unmount the shared storage volume on one of the storage nodes.
6. Check snap_scheduler.py status on each storage node.

Actual results:
snap_scheduler.py status reports the scheduler as Disabled on the storage node where the shared storage is not mounted.

Expected results:
The scheduler should display a message that the shared storage is not mounted.

Additional info:
[root@localhost upsteam_build]# gluster v info

Volume Name: meta
Type: Distributed-Replicate
Volume ID: bb7646be-8096-4cb1-a836-6d8c7ef7a075
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick2/mb1
Brick2: 10.70.47.145:/rhs/brick2/mb2
Brick3: 10.70.47.150:/rhs/brick2/mb3
Brick4: 10.70.47.151:/rhs/brick2/mb4
Options Reconfigured:
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: 176c3b9a-6f08-434e-bc35-c4ef20dd0bcf
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Options Reconfigured:
features.barrier: disable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
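For reference, the check the expected behaviour calls for can be sketched in a few lines of Python. This is only an illustrative sketch, not actual snap_scheduler.py code; the mount point path is assumed to be the one used by the eventual fix, and the function name is made up:

import os
import sys

# Assumed shared storage mount point (see the fix referenced in the comments below).
SHARED_STORAGE_MOUNT = "/var/run/gluster/snaps/shared_storage"

def shared_storage_mounted(path=SHARED_STORAGE_MOUNT):
    # os.path.ismount() is True only when something is actually mounted at the
    # directory, so a leftover empty directory is not mistaken for the
    # shared storage volume.
    return os.path.isdir(path) and os.path.ismount(path)

if not shared_storage_mounted():
    print("snap_scheduler: Shared storage is not mounted. "
          "Please mount it before running scheduler commands.")
    sys.exit(1)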
REVIEW: http://review.gluster.org/10135 (snapshot/scheduler: Check if shared storage is mounted.) posted (#1) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/10135 (snapshot/scheduler: Check if shared storage is mounted.) posted (#2) for review on master by Niels de Vos (ndevos)
REVIEW: http://review.gluster.org/10135 (snapshot/scheduler: Only run if shared storage is mounted) posted (#3) for review on master by Rajesh Joseph (rjoseph)
REVIEW: http://review.gluster.org/10135 (snapshot/scheduler: Only run if shared storage is mounted) posted (#4) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/10135 (snapshot/scheduler: Only run if shared storage is mounted) posted (#5) for review on master by Avra Sengupta (asengupt)
COMMIT: http://review.gluster.org/10135 committed in master by Krishnan Parthasarathi (kparthas)
------
commit 3f21a347932d741de24bccffb761689c5b368e7e
Author: Avra Sengupta <asengupt>
Date: Mon Apr 6 14:34:45 2015 +0530

    snapshot/scheduler: Only run if shared storage is mounted

    Before running any snapshot scheduler op command, verify if
    /var/run/gluster/snaps/shared_storage/ exists and if the shared
    storage is mounted at it.

    Change-Id: Ibb6ba6c01c227cacf9a19d1bf9264500373a4ed6
    BUG: 1209112
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/10135
    Reviewed-by: Aravinda VK <avishwan>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
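As a rough illustration of the guard described in the commit message (verify that the shared storage is actually mounted at /var/run/gluster/snaps/shared_storage/ before any op command runs), one could scan /proc/mounts. The helper below is a hedged sketch under that assumption, not the code that was merged; see the review link above for the actual change:

def is_shared_storage_mounted(mount_point="/var/run/gluster/snaps/shared_storage"):
    # Look for a /proc/mounts entry whose mount point is the shared
    # storage directory; True only if a filesystem is mounted there.
    try:
        with open("/proc/mounts") as mounts:
            for line in mounts:
                fields = line.split()
                if len(fields) >= 2 and fields[1] == mount_point:
                    return True
    except IOError:
        pass
    return False

With such a check in place, each scheduler op (init/enable/disable/status) can refuse to proceed and print an explicit "shared storage is not mounted" error instead of silently reporting the scheduler as Disabled.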
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user