Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1096526

Summary: preallocated disk reported by vdsm according to actual amount of data on disk and not full preallocated size after moving it from storage type iscsi to nfs
Product: Red Hat Enterprise Virtualization Manager
Reporter: Raz Tamir <ratamir>
Component: vdsm
Assignee: Tal Nisan <tnisan>
Status: CLOSED CANTFIX
QA Contact: Aharon Canan <acanan>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.4.0
CC: amureini, bazulay, fsimonce, gklein, iheim, lpeer, yeylon
Target Milestone: ---
Target Release: 3.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-05-16 07:56:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments: vdsm and engine logs

Description Raz Tamir 2014-05-11 16:11:15 UTC
Created attachment 894449 [details]
vdsm and engine logs

Description of problem:
When moving a preallocated disk from an iscsi storage domain to an nfs storage domain, the disk's actual size is reported as 0.
VDSM report:

vdsClient -s 0 getVolumeInfo e43d8d16-09ec-46e8-a41d-158aa218c91b `vdsClient -s 0 getConnectedStoragePoolsList` c32c12f8-efa5-4f81-b09c-9f1e-c30f0e5dc030
        status = OK
        domain = e43d8d16-09ec-46e8-a41d-158aa218c91b
        capacity = 5368709120
        voltype = LEAF
        description =
        parent = 00000000-0000-0000-0000-000000000000
        format = RAW
        image = c32c12f8-efa5-4f81-b09c-3d2d8aa4a1db
        uuid = 0ad3c94e-7c75-4545-9f1e-c30f0e5dc030
        disktype = 2
        legality = LEGAL
        mtime = 1399817735
        apparentsize = 5368709120
        truesize = 0  <----------???
        type = PREALLOCATED
        children = []
        pool =
        ctime = 1399817735

Even though the size of the file that represents the disk after the move (the destination is a file-type storage domain) is correct (5 GB in this example):

[root@green-vdsc c32c12f8-efa5-4f81-b09c-3d2d8aa4a1db]# ls -ltrh
total 1.1M
-rw-rw----. 1 vdsm kvm 1.0M May 11  2014 0ad3c94e-7c75-4545-9f1e-c30f0e5dc030.lease
-rw-r--r--. 1 vdsm kvm  274 May 11  2014 0ad3c94e-7c75-4545-9f1e-c30f0e5dc030.meta
-rw-rw----. 1 vdsm kvm 5.0G May 11  2014 0ad3c94e-7c75-4545-9f1e-c30f0e5dc030
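
The ls output above illustrates standard sparse-file behavior: the file's logical length (what vdsm reports as apparentsize) is 5 GiB, while the allocated size (truesize) is derived from the block count and can be near zero. A minimal sketch in Python, independent of vdsm (the path is hypothetical):

```python
import os

# Create a 5 GiB sparse file: truncate extends the logical length
# without allocating data blocks on most filesystems.
path = "/tmp/sparse-demo.img"  # hypothetical path, for illustration only
with open(path, "wb") as f:
    f.truncate(5 * 1024**3)

st = os.stat(path)
apparent = st.st_size        # logical length  -> vdsm "apparentsize"
true = st.st_blocks * 512    # allocated bytes -> vdsm "truesize"
print(apparent)              # 5368709120
print(true)                  # near zero, since no blocks were written

os.remove(path)
```

This is why truesize drops after the move: if the destination storage discards zero writes, the blocks are never allocated even though the logical size is preserved.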



Version-Release number of selected component (if applicable):
vdsm-4.14.7-0.2.rc.el6ev.x86_64
rhevm-3.4.0-0.16.rc.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Move a preallocated disk from an iscsi domain to an nfs domain
2. Check the volume info via vdsClient, and the disk's actual size in webadmin (it will change to > 1 GB)

Actual results:
The disk's actual size is reported as 0.

Expected results:
Moving a disk should not affect its reported size.

Additional info:

Comment 1 Raz Tamir 2014-05-12 08:14:23 UTC
After a few tests I realized that the size reported by vdsm is the actual amount of data on disk, not the preallocated size as it should be.

Comment 2 Federico Simoncelli 2014-05-14 22:12:36 UTC
This issue is irrelevant if you consider the description of bug 1097843.

Especially the part:

- some appliances discard zero writes (which means that even if we spend time and bandwidth with dd trying to write zeroes, they get discarded and the file will remain sparse. This happens on netapp and nexenta, not sure what specific models/versions, but it's irrelevant at this point)

Comment 3 Allon Mureinik 2014-05-16 07:56:02 UTC
Note that the apparent size is correct. As bug 1097843 explains, we have no real way of controlling the actual size on file domains.
Closing.