Bug 1223677

Summary: [RHEV-RHGS] After self-heal operation, VM Image file loses the sparseness property
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: replicate
Assignee: Anuradha <atalur>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: urgent
Priority: urgent
Version: rhgs-3.1
CC: nsathyan, ravishankar, rcyriac, rhs-bugs, smohan, storage-qa-internal, vagarwal
Keywords: TestBlocker
Target Milestone: ---
Target Release: RHGS 3.1.0
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.7.1-6
Doc Type: Bug Fix
Clones: 1232238
Environment: RHEV 3.5.3, RHGS 3.1, RHEL 6.6 as hypervisor
Last Closed: 2015-07-29 04:44:16 UTC
Type: Bug
Bug Blocks: 1202842, 1223636, 1232238, 1235966    

Description SATHEESARAN 2015-05-21 08:20:31 UTC
Description of problem:
-----------------------
The RHEV data domain was backed by a replica 3 gluster volume, and one of the nodes was down while an image file was being created.

After self-heal, the image file on the healed node was observed to have lost its sparseness.
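
On each brick, sparseness can be checked by comparing the file's apparent size with the space actually allocated on disk. A minimal check along these lines (the brick path and UUIDs below are placeholders, not the actual paths from this setup):

  # apparent size vs. space allocated on disk for the image file on a brick
  du -h --apparent-size /rhgs/brick1/vmstore/<domain-uuid>/images/<image-uuid>/<volume-uuid>
  du -h                 /rhgs/brick1/vmstore/<domain-uuid>/images/<image-uuid>/<volume-uuid>
  # or via stat: size in bytes and allocated 512-byte blocks
  stat -c 'size=%s blocks=%b' /rhgs/brick1/vmstore/<domain-uuid>/images/<image-uuid>/<volume-uuid>
  # for a sparse file, (blocks * 512) is much smaller than size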

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 3.1 Nightly build (glusterfs-3.7.0-2.el6rhs)

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
0. Create a 3-node trusted storage pool (gluster cluster)
1. Create a replica 3 volume
2. Optimize the volume for the virt-store use case
3. Start the volume
4. Use this volume as a RHEV data domain
5. Interrupt the traffic between the hypervisor and one of the nodes in the trusted storage pool (iptables was used for this step)
6. Create a new VM from RHEV and install RHEL 6.7 on that application VM
7. Restore the network between the hypervisor and the node in the gluster cluster
8. Initiate self-heal
9. Check the actual (on-disk) size of the image file on all the nodes (see the command sketch after this list)
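
A rough command sketch of the gluster-side steps above (volume name, host names, brick paths, and the hypervisor IP are placeholders; steps 4 and 6 are performed from the RHEV UI):

  # Steps 0-3: build the pool, create and tune the replica 3 volume (run on node1)
  gluster peer probe node2
  gluster peer probe node3
  gluster volume create vmstore replica 3 node1:/rhgs/brick1/vmstore \
      node2:/rhgs/brick1/vmstore node3:/rhgs/brick1/vmstore
  gluster volume set vmstore group virt     # virt-store optimization
  gluster volume start vmstore

  # Step 5: on one storage node, drop traffic from the hypervisor before the VM image is created
  iptables -I INPUT -s <hypervisor-ip> -j DROP

  # Steps 7-8: restore the network and trigger self-heal
  iptables -D INPUT -s <hypervisor-ip> -j DROP
  gluster volume heal vmstore
  gluster volume heal vmstore info

  # Step 9: on each brick, compare apparent size with allocated size (see the check in the Description)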

Actual results:
---------------
The VM image file on the node where the heal operation completed has blown up to its full size (losing its sparseness).

Expected results:
-----------------
The VM image file should remain sparse even after self-heal.

Comment 5 SATHEESARAN 2015-06-12 01:46:16 UTC
This is a serious issue for the VM use case, where the expectation is to create a sparse image file, but self-heal breaks that, causing the image to occupy its full size. This would lead to admins complaining about wasted disk space.

I consider this issue as a blocker for RHGS-3.1

Comment 7 Anuradha 2015-06-26 09:19:34 UTC
Patch posted for review at:
https://code.engineering.redhat.com/gerrit/51673

Upstream URLs:
1) master : http://review.gluster.org/11252/
2) 3.7    : http://review.gluster.org/11423/

Comment 11 SATHEESARAN 2015-07-05 07:25:31 UTC
Verified with RHGS 3.1 Nightly build (glusterfs-3.7.1-7.el6rhs) using the test steps mentioned in comment 0.

The size of the image file on all the nodes (bricks) has not increased, and the sparseness property of the image file is retained.

Marking this bug as VERIFIED

Comment 12 errata-xmlrpc 2015-07-29 04:44:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html