Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1056169

Summary: [engine-backend] [RO-disks] snapshot creation includes RO disks
Product: [Retired] oVirt
Component: ovirt-engine-core
Version: 3.4
Hardware: x86_64
OS: Unspecified
Severity: high
Priority: unspecified
Status: CLOSED CURRENTRELEASE
Reporter: Elad <ebenahar>
Assignee: Sergey Gotliv <sgotliv>
QA Contact: Nikolai Sednev <nsednev>
CC: acanan, acathrow, amureini, gklein, iheim, scohen, sgotliv, yeylon
Target Milestone: ---
Target Release: 3.4.0
Whiteboard: storage
oVirt Team: Storage
Fixed In Version: ovirt-3.4.0-beta2
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-03-31 12:24:23 UTC
Attachments: engine, vdsm and libvirt logs

Description Elad 2014-01-21 15:56:11 UTC
Created attachment 853340 [details]
engine, vdsm and libvirt logs

Description of problem:
A snapshot creation request from the engine includes the creation of volumes for read-only disks, which should not be requested.

Version-Release number of selected component (if applicable):
ovirt-engine-3.4.0-0.2.master.20140106180914.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a VM, install OS
2. Attach a second disk to the VM as RO and activate it
3. Create a live snapshot of the VM

Actual results:
The engine requests vdsm to create volumes for all of the VM's disks, including the RO disks.


=================

The CreateSnapshot command is passed to vdsm twice, once for each disk:


2014-01-21 14:50:43,827 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-44) [b8e22d8] START, CreateSnapshotVDSCommand( storagePoolId = ed3f6f90-39ec-4c86-8c99-13774e215a2f, ignoreFailoverLimit = false, storageDomainId = 0707a68b-5a82-44bb-8fb7-411e5190fb3f, imageGroupId = 3947b243-4d3b-46d9-a2fd-27a3dae33b3c, imageSizeInBytes = 7516192768, volumeFormat = COW, newImageId = 4139ffca-aac8-4f2a-8ea3-b75b101bcfe0, newImageDescription = , imageId = aee6bdf7-6045-43ec-809b-1aca74f3901e, sourceImageGroupId = 3947b243-4d3b-46d9-a2fd-27a3dae33b3c), log id: 56046054

2014-01-21 14:50:43,948 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] (org.ovirt.thread.pool-6-thread-44) [4d5a5208] START, CreateSnapshotVDSCommand( storagePoolId = ed3f6f90-39ec-4c86-8c99-13774e215a2f, ignoreFailoverLimit = false, storageDomainId = 0707a68b-5a82-44bb-8fb7-411e5190fb3f, imageGroupId = 7ada442b-3aca-4c2f-b913-47530f94c6e8, imageSizeInBytes = 1073741824, volumeFormat = COW, newImageId = 637af297-853c-4925-8f99-25293c553f29, newImageDescription = , imageId = 13d3b932-5155-4f24-aa63-1f9c1798acd4, sourceImageGroupId = 7ada442b-3aca-4c2f-b913-47530f94c6e8), log id: 1a6cfb1b


=================
VM nfs1-2 has 2 disks. One of them is RO.

Image 7ada442b-3aca-4c2f-b913-47530f94c6e8 is RO for the qemu process.

In vmCreate XML request from vdsm.log:

Thread-500650::DEBUG::2014-01-21 17:25:10,958::BindingXMLRPC::969::vds::(wrapper) return vmCreate


Image 3947b243-4d3b-46d9-a2fd-27a3dae33b3c is RW:

'deviceId': '3054233a-da55-4f19-b980-fd121ff65666', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'cow', 'bootOrder': '1', 'poolID': 'ed3f6f90-39ec-4c86-8c99-13774e215a2f', 'volumeID': '3f95a120-0d2e-4ca4-b5ad-8f7ae1d81b18', 'imageID': '3947b243-4d3b-46d9-a2fd-27a3dae33b3c', 'specParams': {}, 'readonly': 'false',

Image 7ada442b-3aca-4c2f-b913-47530f94c6e8 is RO:

 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}, {'iface': 'virtio', 'format': 'cow', 'type': 'disk', 'poolID': 'ed3f6f90-39ec-4c86-8c99-13774e215a2f', 'volumeID': '9ea7997e-69d8-48eb-8a92-0e5dc232e21b', 'imageID': '7ada442b-3aca-4c2f-b913-47530f94c6e8', 'specParams': {}, 'readonly': 'true', 'domainID': '0707a68b-5a82-44bb-8fb7-411e5190fb3f', 'deviceId': '7ada442b-3aca-4c2f-b913-47530f94c6e8',

Expected results:
There should not be a CreateSnapshot request from the engine to vdsm for RO disks; RO disks should not be included in a snapshot.
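The expectation as filed (note that comment 5 later clarifies RO disks are in fact snapshotted, but remain read-only) can be sketched as a filter over the VM's device list, keyed on the 'readonly' flag shown in the vmCreate entries above. This is a hypothetical Python illustration, not the actual engine code, which is Java:

```python
def disks_needing_new_volume(disks):
    """Return the disks for which the engine would issue a
    CreateSnapshot (new leaf volume) request, excluding disks
    whose 'readonly' flag is set, per the expectation above."""
    return [d for d in disks if d.get("readonly") != "true"]

# The two images from the VM nfs1-2 example above:
vm_disks = [
    {"imageID": "3947b243-4d3b-46d9-a2fd-27a3dae33b3c", "readonly": "false"},
    {"imageID": "7ada442b-3aca-4c2f-b913-47530f94c6e8", "readonly": "true"},
]
# Only the RW image would get a CreateSnapshot request under this expectation.
print([d["imageID"] for d in disks_needing_new_volume(vm_disks)])
```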


Additional info: engine, vdsm and libvirt logs

Comment 1 Itamar Heim 2014-01-26 08:10:50 UTC
Setting target release to current version for consideration and review. please
do not push non-RFE bugs to an undefined target release to make sure bugs are
reviewed for relevancy, fix, closure, etc.

Comment 2 Sergey Gotliv 2014-02-04 10:33:22 UTC
*** Bug 1050838 has been marked as a duplicate of this bug. ***

Comment 3 Sandro Bonazzola 2014-02-07 11:20:06 UTC
Fixes should be in ovirt-3.4.0-beta2. Assignee please check.

Comment 4 Elad 2014-02-17 13:23:53 UTC
Sean,

I need a clarification here.
What is the expected behaviour of snapshot creation for a VM which has an RO disk attached? Should the RO disk be included in the snapshot?
This bug fix will prevent the creation of a leaf volume for the RO disk. Is that the expected behaviour?

Comment 5 Allon Mureinik 2014-03-05 10:00:50 UTC
(In reply to Elad from comment #4)
> Sean,
> 
> I need a clarification here.
> What is the expected behaviour of snapshot creation for a VM which has an RO
> disk attached? Should the RO disk be included in the snapshot?
> This bug fix will prevent the creation of a leaf volume for the RO disk. Is
> that the expected behaviour?
ALL the disks should be snapshotted, including the R/O disk. It should, however, remain read-only for that VM.
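A minimal sketch of the behaviour clarified here: every disk, RO included, gets a new leaf volume, and the readonly flag is carried over unchanged so the disk stays RO for the VM. The names below are hypothetical; the engine itself is Java:

```python
import uuid

def build_snapshot_requests(disks):
    """One CreateSnapshot-style request per disk, RO disks included;
    the readonly flag is preserved so the disk remains RO for the VM."""
    return [
        {
            "imageGroupId": d["imageID"],
            "newImageId": str(uuid.uuid4()),  # new leaf volume id
            "readonly": d["readonly"],        # carried over unchanged
        }
        for d in disks
    ]
```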

Comment 6 Nikolai Sednev 2014-03-05 13:55:50 UTC
Verified on two VMs, XP and RHEL; no problems found, closing.

Comment 7 Nikolai Sednev 2014-03-05 14:00:21 UTC
vdsm-4.14.3-0.el6.x86_64
ovirt-engine-3.4.0-0.7.beta2.el6.noarch

I did as follows:
1. Created a VM, installed an OS
2. Attached a second disk to the VM as RO and activated it
3. Created a live snapshot of the VM
4. Previewed and committed the snapshot.
5. Tried to write to the RO disk with dd and failed.
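Step 5 can be sketched as a small helper (hypothetical; the device path for the RO disk inside the guest, e.g. /dev/vdb, is an assumption):

```python
import subprocess

def can_write_with_dd(path):
    """Try a 1 MiB dd write to `path`; return True only if dd succeeds.

    Against the RO disk inside the guest (e.g. /dev/vdb, an assumed
    device name) this is expected to return False."""
    result = subprocess.run(
        ["dd", "if=/dev/zero", f"of={path}", "bs=1M", "count=1"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0
```

Inside the guest, `can_write_with_dd("/dev/vdb")` returning False would confirm the disk stayed read-only after the preview/commit cycle.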

Comment 8 Sandro Bonazzola 2014-03-31 12:24:23 UTC
This is an automated message: moving to CLOSED CURRENTRELEASE since oVirt 3.4.0 has been released.