Bug 1218573
| Summary: | [Snapshot] Scheduled job is not processed when one of the nodes of the shared storage volume is down | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | Anil Shah <ashah> |
| Component: | snapshot | Assignee: | Avra Sengupta <asengupt> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | asengupt, bugs, gluster-bugs, rjoseph |
| Target Milestone: | --- | Keywords: | Reopened, Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | Scheduler | | |
| Fixed In Version: | glusterfs-3.8rc2 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| Clones: | 1223205 1230399 (view as bug list) | Environment: | |
| Last Closed: | 2016-06-16 12:58:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1223205, 1230399 | | |
Description
Anil Shah 2015-05-05 09:42:46 UTC
Version: glusterfs 3.7.0beta1 built on May 7 2015

Another scenario where jobs are not picked up:

1) Create a dist-rep volume and mount it.

2) Create a shared storage volume and mount it. Enable the scheduler and schedule jobs on the volumes:

   snap_scheduler.py add "A1" "*/5 * * * *" "vol1"
   snap_scheduler: Successfully added snapshot schedule

   snap_scheduler.py add "A2" "*/10 * * * *" "vol2"
   snap_scheduler: Successfully added snapshot schedule

3) Take a snapshot of the shared storage volume:

   gluster snapshot create MV_Snap gluster_shared_storage
   snapshot create: success: Snap MV_Snap_GMT-2015.05.08-09.20.26 created successfully

4) Add some more jobs, A3 and A4.

5) Stop the shared storage volume and observe that at the next scheduled time no job is picked up.

6) Restore the shared storage volume to the snapshot taken in step 3 and start the volume again.

7) After the restore, the scheduler lists jobs A1 and A2, but none of them are picked up.

REVIEW: http://review.gluster.org/11139 (snapshot/scheduler: Reload /etc/cron.d/glusterfs_snap_cron_tasks when shared storage is available) posted (#1) for review on master by Avra Sengupta (asengupt)

REVIEW: http://review.gluster.org/11139 (snapshot/scheduler: Reload /etc/cron.d/glusterfs_snap_cron_tasks when shared storage is available) posted (#2) for review on master by Avra Sengupta (asengupt)

Moving it to assigned, as the shared storage brick is wiped clean on node reboot. This happens because the shared storage brick is now placed at /var/run/gluster/ss_brick, which resides on a tmpfs.

REVIEW: http://review.gluster.org/11533 (glusterd/shared_storage: Use /var/lib/glusterd/ss_brick as shared stroage's brick) posted (#1) for review on master by Avra Sengupta (asengupt)

REVIEW: http://review.gluster.org/11533 (glusterd/shared_storage: Use /var/lib/glusterd/ss_brick as shared storage's brick) posted (#2) for review on master by Avra Sengupta (asengupt)

The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in a GlusterFS release, has already been closed; hence this mainline BZ is being closed as well.

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
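A note on the scheduler fix referenced above (review 11139): on typical RHEL/CentOS systems, crond re-reads a file under /etc/cron.d/ when its modification time changes, so once the shared storage mount is back, refreshing the task file's timestamp is enough for the pending schedules to be picked up again. A minimal sketch of that idea follows; the mount point is the usual shared storage location and is an assumption here, and the actual patch logic may differ:

   #!/bin/sh
   # Illustrative sketch, not the actual patch: once the shared storage
   # mount is available again, bump the cron task file's mtime so crond
   # re-reads the snapshot schedules.
   SHARED_MNT=/var/run/gluster/shared_storage     # assumed mount point
   TASK_FILE=/etc/cron.d/glusterfs_snap_cron_tasks

   if mountpoint -q "$SHARED_MNT"; then
       touch "$TASK_FILE"    # mtime change prompts crond to reload the file
   fi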
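Likewise, the root cause behind the brick relocation fix (review 11533) can be checked directly: /var/run is normally a tmpfs mount, so a brick directory created under it vanishes on reboot, whereas /var/lib/glusterd lives on a persistent filesystem. A quick check (output and filesystem types will vary per system):

   # Show which filesystem backs each candidate brick location:
   df -T /var/run/gluster       # typically tmpfs   -> contents wiped on reboot
   df -T /var/lib/glusterd      # typically root fs -> persists across reboots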