Bug 1609011

Summary: Creating transient disk during backup operation is failing with error "No such file or directory"
Product: Red Hat Enterprise Virtualization Manager Reporter: nijin ashok <nashok>
Component: ovirt-engine    Assignee: Eyal Shenitzky <eshenitz>
Status: CLOSED ERRATA QA Contact: Evelina Shames <eshames>
Severity: medium Docs Contact:
Priority: high    
Version: 4.2.4    CC: ebenahar, eshenitz, lsurette, michal.skrivanek, mtessun, Rhev-m-bugs, srevivo, tburke, tnisan
Target Milestone: ovirt-4.3.0    Keywords: ZStream
Target Release: 4.3.0   
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version: ovirt-engine-4.3.0_alpha Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1625150 (view as bug list) Environment:
Last Closed: 2019-05-08 12:38:05 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Storage RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1625150    

Description nijin ashok 2018-07-26 18:01:33 UTC
Description of problem:

While taking a backup using Commvault (which uses the RHV backup API) of a VM that was down, it was observed that creation of the transient disk failed with the error "No such file or directory\nCould not open backing image to determine size."

Currently, attaching the snapshot disk works even if creation of the same snapshot is still in progress. So if the "hotplugDisk" call arrives while the "teardownImage" step is running, the VM's LV will be deactivated as part of the teardown process. By the time the SPM creates the transient disk using "qemu-img create", the parent volume has already been deactivated by the snapshot teardown, so "qemu-img create" fails with the error "No such file or directory\nCould not open backing image to determine size."
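The race described above can be sketched with a minimal Python model. All names here are hypothetical stand-ins: `teardown_image` models VDSM's teardownImage deactivating the LV, and `create_transient_disk` models the SPM's "qemu-img create", which must open the backing image to determine its size.

```python
# Minimal model of the race: snapshot teardown deactivates the parent LV
# while the SPM tries to create a transient disk on top of it.
class StorageError(Exception):
    pass

# Set of currently activated logical volumes (stand-in for LVM state)
active_lvs = {"parent_volume"}

def teardown_image(lv):
    # Part of finishing the snapshot: deactivate the VM's LVs
    active_lvs.discard(lv)

def create_transient_disk(backing_lv):
    # Stand-in for "qemu-img create -b <backing>": the backing image
    # must be readable to determine its size
    if backing_lv not in active_lvs:
        raise StorageError(
            "No such file or directory\n"
            "Could not open backing image to determine size."
        )
    return f"transient disk backed by {backing_lv}"

# Happy path: the parent volume is still active
disk = create_transient_disk("parent_volume")

# Race: teardown runs first, then transient-disk creation fails
teardown_image("parent_volume")
try:
    create_transient_disk("parent_volume")
except StorageError as e:
    print(e)
```

The ordering of the last two calls is exactly the window the customer hit: hotplugDisk landing between teardownImage and the transient-disk creation.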

Version-Release number of selected component (if applicable):

rhvm-4.2.4.5-0.1.el7_3.noarch
vdsm-4.20.32-1.el7ev.x86_64

How reproducible:

Observed in the customer environment

Steps to Reproduce:

The customer uses Commvault backup to back up the VMs, and the issue was observed in the customer environment. However, when I tried it manually using API scripts, I was not able to reproduce this race condition.


Actual results:

The hotplugDisk operation of the snapshot fails under a rare race condition.

Expected results:



Additional info:

Comment 3 Michal Skrivanek 2018-07-27 05:38:59 UTC
And what is the expectation here?

Comment 4 nijin ashok 2018-07-27 09:13:01 UTC
(In reply to Michal Skrivanek from comment #3)
> And what is the expectation here?

Currently, the API allows attaching a snapshot image to another VM when the creation of the same snapshot is in progress. Do we have to prevent this?

Also, it looks like the disk attachment to the VM is initiated by Commvault before confirming that the snapshot creation has completed. I think this has to be changed on the Commvault side as well. I will ask the customer to raise a request with Commvault.
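On the client side, the mitigation suggested above amounts to polling the snapshot status before attaching. A minimal sketch, assuming a generic `get_status` callable and an "ok" status string (stand-ins for a real API call such as reading the snapshot's status via the RHV REST API):

```python
import time

def wait_for_snapshot_ok(get_status, timeout=300.0, interval=2.0):
    """Poll get_status() until it returns 'ok', or raise TimeoutError.

    get_status is any zero-argument callable returning the current
    snapshot status string (e.g. 'locked' while creation is in progress).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "ok":
            return
        time.sleep(interval)
    raise TimeoutError("snapshot did not reach status 'ok' in time")
```

Only after this returns should the backup client issue the disk-attachment request; attaching while the snapshot is still locked is what opens the race window.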

Comment 6 RHV bug bot 2018-08-06 12:02:30 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.2.z': '?'}', ]

For more info please contact: rhv-devops

Comment 7 Evelina Shames 2018-08-21 15:00:19 UTC
Eyal, can you please provide a scenario to test this?

Comment 8 Eyal Shenitzky 2018-08-22 04:40:05 UTC
Steps to reproduce:

1) Create a VM with a 10 GiB disk [VM1]
2) Create another VM without disks [BACKUP_VM]
3) Create a snapshot of VM1
4) While the snapshot is being created (and the VM is still locked), try to attach the
   snapshot to BACKUP_VM using the REST API (backup API)

The expected result after the fix:
The engine shouldn't allow attaching the snapshot to another VM unless the snapshot status is 'OK'
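The expected behavior above is a validation step on the engine side. A hypothetical sketch in Python (the actual ovirt-engine code is Java; the names here are illustrative, not the real implementation):

```python
SNAPSHOT_OK = "ok"

def validate_attach_disk_snapshot(snapshot_status):
    """Engine-side guard: refuse to attach a disk snapshot to another VM
    unless the snapshot has finished creating (status 'ok')."""
    if snapshot_status != SNAPSHOT_OK:
        raise ValueError(
            "Cannot attach disk snapshot: snapshot status is "
            f"'{snapshot_status}', expected '{SNAPSHOT_OK}'"
        )
```

With a check like this in place, the attach request in step 4 fails fast with a clear validation error instead of racing the snapshot teardown on the SPM.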

Comment 9 Evelina Shames 2018-08-22 11:56:34 UTC
Verified.

Versions:
Engine - 4.2.6.3-1
vdsm - 4.20.37-3

Comment 10 RHV bug bot 2018-08-28 18:32:56 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{'rhevm-4.2.z': '?'}', ]

For more info please contact: rhv-devops

Comment 12 RHV bug bot 2018-12-10 15:13:09 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 13 RHV bug bot 2019-01-15 23:35:38 UTC
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 15 errata-xmlrpc 2019-05-08 12:38:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085