Bug 1103680 - [SNAPSHOT]: Snapshot create deletes all the snapshots until it reaches the soft limit after the system config is changed
Summary: [SNAPSHOT]: Snapshot create deletes all the snapshots until it reaches the soft limit after the system config is changed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Raghavendra Bhat
QA Contact: Rahul Hinduja
URL:
Whiteboard: SNAPSHOT
Depends On: 1103665
Blocks: 1083917
 
Reported: 2014-06-02 11:04 UTC by Rahul Hinduja
Modified: 2016-09-17 12:52 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.6.0.17-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-22 19:40:09 UTC
Embargoed:




Links
System ID: Red Hat Product Errata RHEA-2014:1278
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage Server 3.0 bug fix and enhancement update
Last Updated: 2014-09-22 23:26:55 UTC

Description Rahul Hinduja 2014-06-02 11:04:08 UTC
Description of problem:
=======================

In a scenario where snapshot creation is in progress and the system-wide limit is changed to a value much lower than the number of snapshots already created, the snapshot create operation starts deleting all the snapshots between the snap-max-soft-limit and the current snapshot count.

Ideally, a snapshot create should delete at most one snapshot when the snap-max-soft-limit is reached.
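
For reference, a minimal sketch of how the limits interact, assuming the standard gluster snapshot CLI syntax; the volume name vol0 is hypothetical:

# The soft limit is a percentage of the hard limit (90% by default), so
# lowering the system-wide hard limit to 100 makes the effective soft
# limit 90 for every volume. The CLI may ask for confirmation.
gluster snapshot config snap-max-hard-limit 100

# Inspect the effective system-wide and per-volume limits.
gluster snapshot config
gluster snapshot config vol0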


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.6.0.11-1.el6rhs.x86_64

How reproducible:
=================

1/1


Steps to Reproduce:
===================
1. Create 4 volumes on a 4-node cluster.
2. Mount the volumes (FUSE and NFS) and start heavy I/O.
3. Start creating snapshots (256 per volume) in a loop (a shell sketch of this flow follows these steps).
4. Once roughly 200 snapshots have been created, execute step 5.
5. While snapshot creation is in progress, set the system-wide snap-max-hard-limit to 100. The change succeeds and is reflected for all the volumes.
6. Keep monitoring the snapshot creates; they may fail once they cross the 2-minute CLI timeout window, because deletes are triggered as soon as the config value is set.
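
A rough shell sketch of the reproduction flow, assuming the standard gluster CLI; the volume name vol0 and the snapshot name prefix are hypothetical:

# Terminal 1: create snapshots in a loop while heavy I/O runs on the mounts.
for i in $(seq 1 256); do
    gluster snapshot create snap-vol0-$i vol0
done

# Terminal 2: once ~200 snapshots exist, lower the system-wide hard limit.
# With the default 90% soft limit, the effective soft limit becomes 90.
# The CLI may ask for confirmation.
gluster snapshot config snap-max-hard-limit 100

# Watch how many snapshots survive.
gluster snapshot list vol0 | wc -l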

Actual results:
===============

1. Snapshot create failed because it timed out.
2. It deleted all the snapshots until the count reached the snap-max-soft-limit, i.e. 90.


Expected results:
=================

1. Snapshot creation should fail because the system has reached the effective maximum hard limit for the volume.
2. Only one snapshot should be deleted as part of the create operation; the number of snapshots should still be shown as 200+ (see the check below).
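
A quick way to check these expectations from the CLI, assuming the same hypothetical volume name vol0:

# Further creates should now fail with a hard-limit error (snap name is hypothetical).
gluster snapshot create snap-extra vol0

# The in-flight create should have removed at most one snapshot,
# so the count should still be above 200.
gluster snapshot list vol0 | wc -l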

Comment 3 Raghavendra Bhat 2014-06-10 06:43:44 UTC
https://code.engineering.redhat.com/gerrit/#/c/26590/ has been sent for review.

Comment 4 Rahul Hinduja 2014-06-12 11:05:19 UTC
With the auto-delete enable/disable option, this bug becomes a valid, common case instead of a corner case. Editing the title.
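
For reference, the toggle referred to here, assuming the standard snapshot config syntax (auto-delete is a system-wide setting, disabled by default):

gluster snapshot config auto-delete enable
gluster snapshot config auto-delete disable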

Comment 7 Raghavendra Bhat 2014-06-13 11:23:17 UTC
https://code.engineering.redhat.com/gerrit/#/c/26590/ has been merged.

Comment 8 Rahul Hinduja 2014-06-17 13:03:19 UTC
Verified with build: glusterfs-3.6.0.18-1.el6rhs.x86_64

Following are the observations:

1. While snapshot creation is in progress, auto-delete is at its default (disabled), and the snap-max-soft-limit is brought down below the number of already created snapshots, snapshot creation succeeds with a warning message.

2. While snapshot creation is in progress, auto-delete is enabled, and the snap-max-soft-limit is brought down below the number of already created snapshots, snapshot creation succeeds without a warning message but deletes the single oldest snapshot (see the CLI sketch below).

3. While snapshot creation is in progress, auto-delete is at its default (disabled), and the snap-max-hard-limit is brought down below the number of already created snapshots, snapshot creation fails with a proper error message.
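
A rough CLI sketch for exercising scenario 2 above, assuming the standard gluster CLI; the volume name vol0 is hypothetical:

# Enable auto-delete system-wide, then lower the effective soft limit
# below the current snapshot count while creates are still running.
gluster snapshot config auto-delete enable
gluster snapshot config snap-max-hard-limit 100

# Each subsequent create should remove only the single oldest snapshot;
# the total count should drop by at most one per create, not collapse
# down to the soft limit.
gluster snapshot list vol0 | wc -l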

Moving the bug to verified.

Comment 10 errata-xmlrpc 2014-09-22 19:40:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

