Bug 1245924 - [Snapshot] Scheduler should check vol-name exists or not before adding scheduled jobs
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: 3.1
Hardware/OS: x86_64 Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.1
Assigned To: Avra Sengupta
QA Contact: Anoop
Whiteboard: SNAPSHOT
Keywords: Triaged, ZStream
Depends On: 1213349
Blocks: qe_tracker_everglades 1245923 1251815

Reported: 2015-07-23 02:49 EDT by Avra Sengupta
Modified: 2016-09-17 08:57 EDT
CC List: 7 users

See Also:
Fixed In Version: glusterfs-3.7.1-13
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1213349
Environment:
Last Closed: 2015-10-05 03:21:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
  Tracker ID: Red Hat Product Errata RHSA-2015:1845
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Moderate: Red Hat Gluster Storage 3.1 update
  Last Updated: 2015-10-05 07:06:22 EDT

Description Avra Sengupta 2015-07-23 02:49:24 EDT
+++ This bug was initially created as a clone of Bug #1213349 +++

Description of problem:

When adding jobs to the scheduler, it should check whether a volume with the given name exists. If the volume does not exist, the scheduler should report that the volume doesn't exist and reject the job.
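
A minimal sketch of the kind of check being asked for, assuming the scheduler can simply query the gluster CLI before accepting a job (illustrative only, not the actual fix; the volume name is hypothetical):

# Illustrative pre-check: refuse the job if gluster does not know the volume.
VOLNAME="vol0"
if ! gluster volume info "$VOLNAME" > /dev/null 2>&1; then
    echo "snap_scheduler: Volume $VOLNAME does not exist. Create $VOLNAME and retry."
    exit 1
fi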

Version-Release number of selected component (if applicable):

[root@localhost core]# rpm -qa | grep glusterfs
glusterfs-api-3.8dev-0.12.gitaa87c31.el6.x86_64
glusterfs-geo-replication-3.8dev-0.12.gitaa87c31.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-cli-3.8dev-0.12.gitaa87c31.el6.x86_64
glusterfs-fuse-3.8dev-0.12.gitaa87c31.el6.x86_64
glusterfs-server-3.8dev-0.12.gitaa87c31.el6.x86_64
glusterfs-rdma-3.8dev-0.12.gitaa87c31.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.8dev-0.12.gitaa87c31.el6.x86_64
glusterfs-3.8dev-0.12.gitaa87c31.el6.x86_64


How reproducible:

100%

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume
2. Create the shared storage volume and FUSE-mount it on each storage node at /var/run/gluster/shared_storage, e.g.
mount -t glusterfs 10.70.47.143:meta /var/run/gluster/shared_storage/
3. Initialize the scheduler on each storage node, e.g. run the snap_scheduler.py init command
4. Enable the scheduler on the storage nodes, e.g. run snap_scheduler.py enable
5. Add a job to the scheduler, providing a volume name that doesn't exist (see the consolidated example below)
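
For reference, a consolidated sequence on one storage node might look like this (the job name, schedule string, and the non-existent volume name "no_such_vol" are illustrative):

snap_scheduler.py init
snap_scheduler.py enable
snap_scheduler.py add "job1" "* * * * *" no_such_vol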


Actual results:

The scheduler accepts the job even though the volume does not exist.

Expected results:

The scheduler should check whether the volume exists and reject the job if it does not.

Additional info:
Comment 2 Avra Sengupta 2015-08-05 04:30:16 EDT
Patch sent upstream at http://review.gluster.org/#/c/11830/
Comment 5 Avra Sengupta 2015-08-20 02:02:18 EDT
Upstream patch at http://review.gluster.org/#/c/11830/
Comment 7 Anil Shah 2015-08-27 05:26:29 EDT
[root@darkknight yum.repos.d]# gluster v info | grep Name
Volume Name: ecvol
Volume Name: gluster_shared_storage
Volume Name: testvol

[root@darkknight yum.repos.d]# snap_scheduler.py add job2  " *  * * * * " vol0
snap_scheduler: Volume vol0 does not exist. Create vol0 and retry.
 
[root@darkknightrises yum.repos.d]# snap_scheduler.py edit job1 " *  * * * * " vol0
snap_scheduler: Volume vol0 does not exist. Create vol0 and retry.

Bug verified on build glusterfs-3.7.1-13.el7rhgs.x86_64
Comment 10 errata-xmlrpc 2015-10-05 03:21:06 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html
