Bug 1124469 - Faulty File storage allocation when creating vm from snapshot
Summary: Faulty File storage allocation when creating vm from snapshot
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-core
Version: 3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Vered Volansky
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks: 960934
 
Reported: 2014-07-29 14:39 UTC by Ori Gofen
Modified: 2016-02-10 17:32 UTC
9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-08-04 12:55:28 UTC
oVirt Team: Storage
Embargoed:


Attachments
vdsm+engine logs and image (429.15 KB, application/gzip) - 2014-08-03 09:10 UTC, Ori Gofen
vdsm+engine logs (705.30 KB, application/gzip) - 2014-08-03 09:57 UTC, Ori Gofen

Description Ori Gofen 2014-07-29 14:39:37 UTC
Description of problem:
After cloning a VM from a snapshot, the resulting PREALLOCATED volume is not actually allocated on the file domain:

root@camel-vdsb /rhev/data-center
 # vdsClient -s 0 getVolumeInfo $sduuid $spuuid $imuuid $voluuid
        status = OK
        domain = 7b7b2e0b-3814-4203-8bf3-bfacda5c604e
        capacity = 3221225472
        voltype = LEAF
        description = 
        parent = 00000000-0000-0000-0000-000000000000
        format = RAW
        image = b47145d5-d7c2-460f-be52-2b4ca3263ae7
        uuid = a08edb24-9502-4a87-8fd1-dd63db824d47
        disktype = 2
        legality = LEGAL
        mtime = 0
        apparentsize = 3221225472
        truesize = 24576
        type = PREALLOCATED
        children = []
        pool = 
        ctime = 1406642576

 # du -ch a08edb24-9502-4a87-8fd1-dd63db824d47
24K     a08edb24-9502-4a87-8fd1-dd63db824d47
24K     total
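
For reference, the apparentsize/truesize gap above is just the difference between the file's reported size and its actually allocated blocks. A minimal stat-based sketch (illustrative only, not vdsm code; the path argument is whichever volume file you want to inspect):

import os
import sys

def report(path):
    """Compare a volume file's apparent size with its on-disk allocation."""
    st = os.stat(path)
    apparent = st.st_size            # what getVolumeInfo reports as apparentsize
    allocated = st.st_blocks * 512   # what du (and, presumably, truesize) reflects
    print("apparentsize:", apparent)
    print("allocated   :", allocated)
    if allocated < apparent:
        print("the file is sparse; preallocation did not happen on disk")

if __name__ == "__main__":
    report(sys.argv[1])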

Version-Release number of selected component (if applicable):
beta.2

How reproducible:
100%

Steps to Reproduce:
1. Clone a VM from a snapshot (RAW preallocated disk on a file domain)

Actual results:
The cloned volume is reported as PREALLOCATED with capacity 3221225472, but its truesize is only 24576 (the file is sparse on disk).

Expected results:
truesize equals the capacity, i.e. the volume is fully allocated on the file domain.

Additional info:

Comment 1 Allon Mureinik 2014-07-29 17:56:17 UTC
Ori, I have no idea what this bug is about. All I can see here is a getVolumeInfo call that seems to correspond to the output of du.

Can you please clarify?

Comment 2 Ori Gofen 2014-07-30 08:12:28 UTC
Sure Allon. As described in BZ #960934 and several other bugs (BZ #1053742, BZ #1053750, BZ #1054175), storage space verification of cloned disks should be done as follows:

For cloned disks:
      | File Domain                         | Block Domain
 -----|-------------------------------------|---------------------------
 qcow | preallocated: 1.1 * disk capacity   | 1.1 * min(used, capacity)
      | sparse: 1.1 * min(used, capacity)   |
 -----|-------------------------------------|---------------------------
 raw  | preallocated: disk capacity         | disk capacity
      | sparse: min(used, capacity)         |
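
To make the table concrete, here is a minimal sketch of the required-space calculation it describes (Python, illustrative only; the function and parameter names are made up and this is not the engine's actual code):

QCOW_OVERHEAD = 1.1

def required_space_for_clone(fmt, allocation, domain_type, capacity, used):
    """Space (bytes) a cloned disk should be checked against, per the table above."""
    if fmt == "qcow":
        if domain_type == "file" and allocation == "preallocated":
            return QCOW_OVERHEAD * capacity
        return QCOW_OVERHEAD * min(used, capacity)
    # raw
    if allocation == "preallocated":
        return capacity
    return min(used, capacity)  # raw sparse on a file domain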

In this example we can clearly see a RAW preallocated disk whose true size isn't equal to its capacity, as it should be.

I expect to see

root@camel-vdsb /rhev/data-center
 # vdsClient -s 0 getVolumeInfo $sduuid $spuuid $imuuid $voluuid
        status = OK
        domain = sb8b250b-3814-4203-8bf3-bfacda5c604e
        capacity = 3221225472
        voltype = LEAF
        description = 
        parent = 00000000-0000-0000-0000-000000000000
        format = RAW
        image = b47145d5-d7c2-460f-be52-2b4ca3263ae7
        uuid = a08edb24-9502-4a87-8fd1-dd63db824d47
        disktype = 2
        legality = LEGAL
        mtime = 0
        apparentsize = 3221225472
        truesize = 3221225472  <----------- truesize=virtualsize
        type = PREALLOCATED
        children = []
        pool = 
        ctime = 1406642576

and du should return 3.1 G

Comment 3 Allon Mureinik 2014-07-30 11:27:27 UTC
This is the "worst case scenario" verification. There's no way to force a file-based volume to actually take up all this space.

Please attach engine and vdsm logs just so we can make sure nothing funky went on in the process, but this seems like a "notabug" candidate.

Comment 4 Vered Volansky 2014-07-31 08:33:59 UTC
Ori, the verification of these bugs should be that if there's enough space according to the table, the operation succeeds, and if there isn't, it fails with a CDA (CanDoAction) error.
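
In other words (illustrative only; required_space_for_clone is the hypothetical helper sketched in comment 2, not engine code), the expected behaviour is roughly:

def clone_should_succeed(domain_free_bytes, fmt, allocation, domain_type,
                         capacity, used):
    """Expected engine behaviour: proceed only if the domain has enough free space."""
    required = required_space_for_clone(fmt, allocation, domain_type,
                                        capacity, used)
    # Not enough space -> the engine should block the clone with a
    # CanDoAction (CDA) low-space failure rather than let it run.
    return domain_free_bytes >= required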

Comment 5 Ori Gofen 2014-08-03 09:10:31 UTC
Created attachment 923562 [details]
vdsm+engine logs and image

Vered, thanks for the clarification.
My line of thought was that part of the problem is that vdsm can't predict the true size of the newly created disk, and thus has trouble verifying how much space is left.

Anyway, this bug deals with the inconsistency when creating RAW preallocated disks.
Just to make sure we are all on the same page: in this example I created two NFS preallocated disks.
The first disk is reported as expected (the storage's filesystem is ext4) by both vdsClient and the UI (see image):

root@camel-vdsc /rhev/data-center> vdsClient -s 0 getVolumeInfo $sduuid $spuuid $imuuid $voluuid
        status = OK
        domain = c267f94f-5538-493b-8103-6c04db40e035
        capacity = 4294967296
        voltype = LEAF
        description = 
        parent = 00000000-0000-0000-0000-000000000000
        format = RAW
        image = c973cf12-e7dc-49cc-82f4-404d2ba4fa53
        uuid = 9fdb4b30-c446-4952-8ad0-567022a2e8f7
        disktype = 2
        legality = LEGAL
        mtime = 0
        apparentsize = 4294967296
        truesize = 4297072640
        type = PREALLOCATED
        children = []
        pool = 
        ctime = 1407054705

root@camel-vdsc> du -ch mnt/10.35.160.108:_RHEV_ogofen_1/c267f94f-5538-493b-8103-6c04db40e035/images/c973cf12-e7dc-49cc-82f4-404d2ba4fa53/9fdb4b30-c446-4952-8ad0-567022a2e8f7
4.1G    mnt/10.35.160.108:_RHEV_ogofen_1/c267f94f-5538-493b-8103-6c04db40e035/images/c973cf12-e7dc-49cc-82f4-404d2ba4fa53/9fdb4b30-c446-4952-8ad0-567022a2e8f7
4.1G    total

The second disk, which was created from a snapshot, is not reported correctly:

root@camel-vdsc /rhev/data-center> vdsClient -s 0 getVolumeInfo $sduuid $spuuid $imuuid $voluuid
        status = OK
        domain = c267f94f-5538-493b-8103-6c04db40e035
        capacity = 4294967296
        voltype = LEAF
        description = 
        parent = 00000000-0000-0000-0000-000000000000
        format = RAW
        image = 86c2d0cb-c19d-4692-8a5c-3b4b93a940e5
        uuid = 76db8980-1950-4c08-a2e9-68c0c0deb991
        disktype = 2
        legality = LEGAL
        mtime = 0
        apparentsize = 4294967296
        truesize = 24576
        type = PREALLOCATED
        children = []
        pool = 
        ctime = 1407055358

root@camel-vdsc> du -ch mnt/10.35.160.108:_RHEV_ogofen_1/c267f94f-5538-493b-8103-6c04db40e035/images/86c2d0cb-c19d-4692-8a5c-3b4b93a940e5/76db8980-1950-4c08-a2e9-68c0c0deb991
24K     mnt/10.35.160.108:_RHEV_ogofen_1/c267f94f-5538-493b-8103-6c04db40e035/images/86c2d0cb-c19d-4692-8a5c-3b4b93a940e5/76db8980-1950-4c08-a2e9-68c0c0deb991
24K     total

and it is not reported correctly by the UI either (see image).

Comment 6 Ori Gofen 2014-08-03 09:57:07 UTC
Created attachment 923576 [details]
vdsm+engine logs

Sorry, these are the correct logs (vdsc is the correct host).

Comment 7 Allon Mureinik 2014-08-04 12:55:28 UTC
The dd we use for allocation is performed as follows:

9e9a9461-4613-46b8-baf3-4cadc821d54a::DEBUG::2014-08-03 11:42:38,573::utils::778::Storage.Misc.excCmd::(watchCmd) /bin/nice -n 19 /usr/bin/ionice -c 3 /bin/dd if=/dev/zero of=/rhev/data-center/00000002-0002-0002-0002-0000000002ca/c267f94f-5538-493b-8103-6c04db40e035/images/86c2d0cb-c19d-4692-8a5c-3b4b93a940e5/76db8980-1950-4c08-a2e9-68c0c0deb991 bs=1048576 seek=0 skip=0 conv=notrunc count=10 oflag=direct (cwd None)
9e9a9461-4613-46b8-baf3-4cadc821d54a::DEBUG::2014-08-03 11:42:38,709::utils::790::Storage.Misc.excCmd::(watchCmd) SUCCESS: <err> = ['10+0 records in', '10+0 records out', '10485760 bytes (10 MB) copied, 0.127961 s, 81.9 MB/s']; <rc> = 0
9e9a9461-4613-46b8-baf3-4cadc821d54a::DEBUG::2014-08-03 11:42:38,710::misc::262::Storage.Misc::(validateDDBytes) err: ['10+0 records in', '10+0 records out', '10485760 bytes (10 MB) copied, 0.127961 s, 81.9 MB/s'], size: 10485760
9e9a9461-4613-46b8-baf3-4cadc821d54a::INFO::2014-08-03 11:42:38,710::fileVolume::133::Storage.Volume::(_create) Request to create RAW volume /rhev/data-center/00000002-0002-0002-0002-0000000002ca/c267f94f-5538-493b-8103-6c04db40e035/images/86c2d0cb-c19d-4692-8a5c-3b4b93a940e5/76db8980-1950-4c08-a2e9-68c0c0deb991 with size = 20480 sectors

If the underlying FS discards the zeroes, there's nothing we can do.
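
To illustrate the point (this is not vdsm code; PATH, CAPACITY and HEAD are made-up values mirroring the disk above): writing only the first few MB of a file that has been sized to full capacity leaves the rest as a hole, so the apparent size shows the full capacity while the allocated size stays small, and on filesystems or NFS servers that also detect and discard zero writes, even the written head may not be allocated.

import os

PATH = "/tmp/prealloc-demo.raw"   # hypothetical test path
CAPACITY = 4 * 1024 ** 3          # 4 GiB, like the disk in comment 5
HEAD = 10 * 1024 ** 2             # 10 MiB, like the dd with bs=1M count=10

with open(PATH, "wb") as f:
    f.truncate(CAPACITY)          # apparent size becomes the full capacity
    f.write(b"\0" * HEAD)         # only the head is actually written

st = os.stat(PATH)
print("apparentsize:", st.st_size)          # 4294967296
print("truesize    :", st.st_blocks * 512)  # ~10 MiB on ext4, possibly less elsewhere
os.remove(PATH)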

