Bug 1704782 - [webadmin] oVirt 4.3.3 doesn't allow creation of VM with "Thin Provision"-ed disk (always preallocated)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Frontend.WebAdmin
Version: 4.3.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high (1 vote)
Target Milestone: ovirt-4.3.4
Target Release: 4.3.4.1
Assignee: Eyal Shenitzky
QA Contact: Avihai
URL:
Whiteboard:
Duplicates: 1728019
Depends On:
Blocks:
 
Reported: 2019-04-30 13:48 UTC by Strahil Nikolov
Modified: 2019-07-08 20:15 UTC (History)
CC: 9 users

Fixed In Version: ovirt-engine-4.3.4.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-11 06:25:43 UTC
oVirt Team: Storage
pm-rhel: ovirt-4.3+
pm-rhel: blocker?


Attachments
video_capture_engine_vdsm logs_4.3.3.3-0.1.el7 (3.31 MB, application/gzip)
2019-05-06 05:55 UTC, Avihai


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 100063 master MERGED webadmin: set Gluster disk default volume type to preallocated 2020-02-27 08:55:02 UTC
oVirt gerrit 100180 ovirt-engine-4.3 MERGED webadmin: set Gluster disk default volume type to preallocated 2020-02-27 08:55:02 UTC

Description Strahil Nikolov 2019-04-30 13:48:23 UTC
Description of problem:
When creating a new VM, the disk allocation policy defaults to "Preallocated". Changing it to "Thin Provision" doesn't work (the selection appears to be accepted, but the resulting disk is fully preallocated).


Version-Release number of selected component (if applicable):
ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.12-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-ansible-vm-infra-1.1.14-1.el7.noarch
ovirt-cockpit-sso-0.1.1-1.el7.noarch
ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-api-explorer-0.0.4-1.el7.noarch
ovirt-engine-backend-4.3.3.6-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dbscripts-4.3.3.6-1.el7.noarch
ovirt-engine-dwh-4.3.0-1.el7.noarch
ovirt-engine-dwh-setup-4.3.0-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.9-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.9-1.el7.noarch
ovirt-engine-extensions-api-impl-4.3.3.6-1.el7.noarch
ovirt-engine-metrics-1.3.0.2-1.el7.noarch
ovirt-engine-restapi-4.3.3.6-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-engine-setup-4.3.3.6-1.el7.noarch
ovirt-engine-setup-base-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-tools-4.3.3.6-1.el7.noarch
ovirt-engine-tools-backup-4.3.3.6-1.el7.noarch
ovirt-engine-ui-extensions-1.0.4-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-webadmin-portal-4.3.3.6-1.el7.noarch
ovirt-engine-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-wildfly-15.0.1-1.el7.x86_64
ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-guest-tools-iso-4.3-2.el7.noarch
ovirt-host-deploy-common-1.8.0-1.el7.noarch
ovirt-host-deploy-java-1.8.0-1.el7.noarch
ovirt-imageio-common-1.5.1-0.el7.x86_64
ovirt-imageio-proxy-1.5.1-0.el7.noarch
ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch
ovirt-iso-uploader-4.3.1-1.el7.noarch
ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
ovirt-provider-ovn-1.2.20-1.el7.noarch
ovirt-release43-4.3.3.1-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
ovirt-web-ui-1.5.2-1.el7.noarch
python2-ovirt-engine-lib-4.3.3.6-1.el7.noarch
python2-ovirt-host-deploy-1.8.0-1.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64


How reproducible:
Always; tried twice and it happened both times.

Steps to Reproduce:
1. Create a Gluster storage domain
2. Create a new VM
3. During creation of the new VM, select "General" -> "Instance Images" -> "Create"
4. Select "Wipe after delete", set a size of 20 GB, and set "Allocation Policy" -> "Thin Provision" -> "OK"
5. Select a cluster (Default in my case) and click "OK" to complete VM creation

Actual results:
Disk is fully allocated (both in UI and on disk):

[root@ovirt1 images]# qemu-img info /rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/__DIRECT_IO_TEST__
[root@ovirt1 images]# qemu-img info /rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
image: /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G

[root@ovirt1 images]# qemu-img info /rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
image: /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
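
For reference, sparseness can also be double-checked directly on the gluster mount by comparing the apparent file size with the blocks actually allocated; a minimal sketch using the image path from the output above (a fully preallocated image shows roughly 20G in both outputs, a sparse one much less in the second):

# apparent (virtual) size of the image file
du -h --apparent-size /rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
# blocks actually allocated on disk
du -h /rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9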


Expected results:
Disk to be thinly provisioned and not take all the space .

Additional info:
I have tried both ovirt 4.3.3.5-1.el7 and 4.3.3.6-1.el7

Comment 1 Strahil Nikolov 2019-04-30 15:40:34 UTC
Just some additional info.
While the disk has been created but the new VM wizard is still open (so there is no new VM yet), clicking Edit on the disk shows it marked as 'Preallocated' and the drop-down is grayed out.

Comment 2 Strahil Nikolov 2019-04-30 21:00:59 UTC
It seems my previous comment is not completely accurate: when I press Edit, the disk shows as Preallocated again, but the drop-down does allow changing it. Clicking "OK" and then "Edit" again shows that the disk is once more Preallocated.

Comment 3 Avihai 2019-05-06 05:46:03 UTC
Some more info on this issue:

1) The issue has a simple reproduction: creating a thin-provisioned floating disk on a glusterfs SD via webadmin results in a preallocated disk.
2) This does not occur in ovirt-engine 4.2.8.6-0.1.el7ev, so this is a regression.
3) The issue occurs only on glusterfs; NFS works fine.
4) The default allocation policy for glusterfs is 'Preallocated' (it was Thin in 4.2, and it is Thin for an NFS SD).
5) Attached a video capture showing the issue.
6) Workaround: creating a glusterfs disk via the REST API works fine (request used below, followed by a curl sketch).

REST API request used (gluster storage domain id = "db1acaed-4380-4501-a951-537bf6827e51"):

method:
POST

Body:
<disk>
  <storage_domains>
    <storage_domain id="db1acaed-4380-4501-a951-537bf6827e51"/>
  </storage_domains>
  <name>restGlusterthin</name>
  <provisioned_size>1073741824</provisioned_size>
  <format>cow</format>
</disk>
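
A minimal sketch of submitting this request with curl (the engine FQDN, credentials and the disk.xml file name are placeholders, not taken from this setup):

# save the XML body above as disk.xml, then POST it to the floating-disks collection
curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -X POST --data @disk.xml \
     https://engine.example.com/ovirt-engine/api/disks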

Comment 4 Avihai 2019-05-06 05:50:33 UTC
From the engine log we can see the disk is created with volumeFormat='RAW' and imageType='Preallocated':

2019-05-06 08:44:06,449+03 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (default task-155) [794f5027-8382-48f4-ab47-ec3493df6df2] START, CreateImageVDSCommand( CreateImageVDSCommandParameters:{storagePoolId='126908b3-4afb-4492-acdf-5dad110cae40', ignoreFailoverLimit='false', storageDomainId='db1acaed-4380-4501-a951-537bf6827e51', imageGroupId='38a919dc-7422-4954-9897-ec1bc59955d5', imageSizeInBytes='1073741824', volumeFormat='RAW', newImageId='36badbbf-5b55-49d7-8d89-89c99561c45f', imageType='Preallocated', newImageDescription='{"DiskAlias":"gluster_thin_disk3","DiskDescription":""}', imageInitialSizeInBytes='0'}), log id: a49748b
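
For anyone checking their own environment, the requested allocation policy shows up in these CreateImageVDSCommand lines; a minimal sketch for pulling them out of the engine log (assuming the default log path):

# list recent disk-creation requests and the imageType/volumeFormat they carry
grep CreateImageVDSCommand /var/log/ovirt-engine/engine.log | grep imageType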

Comment 5 Avihai 2019-05-06 05:55:06 UTC
Created attachment 1564224 [details]
video_capture_engine_vdsm logs_4.3.3.3-0.1.el7

Comment 6 Strahil Nikolov 2019-05-13 12:42:33 UTC
Should I expect a fix in 4.3.4?

Comment 7 RHEL Program Management 2019-05-13 14:15:03 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 9 Avihai 2019-05-26 10:30:28 UTC
Verified at ovirt-engine 4.3.4.1-0.1.el7.

Details:
Creating a thin-provisioned disk via webadmin -> disk is created as thin-provisioned as expected.
Tested gluster/iscsi/NFS storage flavors.
Tested also that preallocated allocation policy disks are created as expected.

Engine log output shows imageType='Sparse' as expected; this was also verified via UI/REST GET commands for the created disk.

2019-05-26 13:12:36,474+03 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (default task-4) [4709636f-fb8d-4fb6-b986-fe55ebe6182b] START, CreateImageVDSCommand( CreateImageVDSCommandParameters:{storagePoolId='6138aa72-4b02-4706-ae6a-30209d52402e', ignoreFailoverLimit='false', storageDomainId='97e00877-9224-4a21-be41-64b9856a2a14', imageGroupId='9c01bb5f-bec3-416c-acf1-8edf5394311e', imageSizeInBytes='1073741824', volumeFormat='RAW', newImageId='32f7e210-f002-4deb-a2ed-e96f7c69910d', imageType='Sparse', newImageDescription='{"DiskAlias":"vm1_Disk1","DiskDescription":""}', imageInitialSizeInBytes='0'}), log id: 201d6ffe

Comment 10 Strahil Nikolov 2019-05-27 07:52:37 UTC
Just tested with oVirt 4.3.4 RC2 and the default policy is still Preallocated.
Yet, when selecting "Thin Provision", it now works:

[root@ovirt1 2fc0f871-34c4-44ff-821f-ca7a93f17d22]# qemu-img info 1df3ab4b-b58b-4de3-8b4e-813084002c55
image: 1df3ab4b-b58b-4de3-8b4e-813084002c55
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 779M

Comment 11 Strahil Nikolov 2019-05-28 14:12:20 UTC
Should we expect the default policy to be changed back to "Thin Provision"?

Comment 12 Strahil Nikolov 2019-05-30 13:24:39 UTC
Just installed oVirt 4.3.4 RC3 on the engine and the default policy is still "Preallocated".

Comment 13 Avihai 2019-05-30 14:16:18 UTC
(In reply to Strahil Nikolov from comment #11)
> Should we expect that the default policy is returned to "Thin Provision" ?

AFAIK, the fix only handled the "doesn't allow creation of VM with Thin Provision" part and left the default policy for gluster disks as "Preallocated".

See https://gerrit.ovirt.org/#/c/100180 commit message :

"

The fix caused setting the volume format of each Gluster based disk
to preallocated even if the user requested for sparse volume.

This patch reverts this behavior and set only the default value
of a Gluster based storage domain disk to be preallocated and allow to
the user to select different volume format.
"

Eyal, as the one who fixed this issue, what is your take on this?

Comment 14 Strahil Nikolov 2019-05-30 15:20:08 UTC
As the bug is a regression, shouldn't all issues be fixed in order to claim it fixed?
If not, please let me know so I can open a new bug for restoring the default policy (so it is consistent with the other file-based storage domains' policies).

Comment 15 Alex McWhirter 2019-06-02 04:25:41 UTC
The default was changed due to bug 1644159


However, I can't reproduce this performance disparity between thin and preallocated disks. I get the same speeds (roughly 900 MB/s on 10GB networking) on both disk types. Should bug 1644159 be re-evaluated?

Comment 16 Strahil Nikolov 2019-06-02 06:07:56 UTC
Hm... despite the fact that my setup is 'exotic' (using gluster v6.1 from the CentOS repos), I have also noticed that "Preallocated" performs better (but I never measured the performance difference).

What are your test setup's gluster options? The only difference I have noticed recently is that read.local is explicitly set to 'no' (virt gluster group), and I had issues when direct I/O was not enabled.

Maybe bug 1644159 should be re-evaluated, but the gluster options should be checked first. Maybe something else in the defaults has been changed.

Comment 17 Alex McWhirter 2019-06-02 06:28:13 UTC
I'm using gluster 5 from the ovirt repo.

I have changed some options, as many of the oVirt defaults are based on old gluster bugs that don't exist anymore; they probably should be re-evaluated. My current volume options are listed below (a short sketch of how to inspect and apply such options follows the list).

features.barrier: disable
cluster.choose-local: off
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
user.cifs: off
auth.allow: *
performance.quick-read: off
performance.read-ahead: on
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
network.ping-timeout: 30
performance.io-thread-count: 32
client.event-threads: 4
server.event-threads: 8
performance.stat-prefetch: on
performance.flush-behind: on
performance.write-behind-window-size: 64MB
auto-delete: enable
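
For reference, a minimal sketch of inspecting and changing such options with the gluster CLI (the volume name 'data' below is a placeholder, not necessarily this setup's volume):

# show the options currently set on the volume
gluster volume info data
# change a single option, e.g. turn choose-local back on
gluster volume set data cluster.choose-local on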

Comment 18 Eyal Shenitzky 2019-06-02 06:39:25 UTC
(In reply to Alex McWhirter from comment #17)
> I'm using gluster 5 from ovirt repo.

I think that the discussion should continue under bug 1644159.
It is not relevant for this bug.

Comment 19 Strahil Nikolov 2019-06-02 07:41:29 UTC
I agree. This bug is fine as it is, while bug 1644159 should be re-evaluated.

Comment 20 Sandro Bonazzola 2019-06-11 06:25:43 UTC
This bug is included in the oVirt 4.3.4 release, published on June 11th 2019.

Since the problem described in this bug report should be resolved in the oVirt 4.3.4 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

Comment 21 Darrell 2019-07-08 20:15:33 UTC
*** Bug 1728019 has been marked as a duplicate of this bug. ***

