Bug 1625150 - [downstream clone - 4.2.6] Creating transient disk during backup operation is failing with error "No such file or directory"
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.4
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.6
Target Release: ---
Assigned To: Eyal Shenitzky
QA Contact: Evelina Shames
Keywords: ZStream
Depends On: 1609011
Blocks:
Reported: 2018-09-04 05:04 EDT by RHV Bugzilla Automation and Verification Bot
Modified: 2018-09-04 09:42 EDT
CC List: 11 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1609011
Environment:
Last Closed: 2018-09-04 09:41:47 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments:


External Trackers:
Tracker ID                              Priority          Status  Summary                                                               Last Updated
oVirt gerrit 93362                      master            MERGED  core: prevent from attaching a snapshot with invalid status to a VM  2018-09-04 05:05 EDT
oVirt gerrit 93407                      ovirt-engine-4.2  MERGED  core: prevent from attaching a snapshot with invalid status to a VM  2018-09-04 05:05 EDT
Red Hat Product Errata RHBA-2018:2623   None              None    None                                                                  2018-09-04 09:42 EDT

Description RHV Bugzilla Automation and Verification Bot 2018-09-04 05:04:43 EDT
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1609011 +++
======================================================================

Description of problem:

While taking a backup with Commvault (which uses the RHV backup API) of a VM that was down, the creation of the transient disk failed with the error "No such file or directory\nCould not open backing image to determine size."

Currently, attaching a snapshot disk succeeds even while the creation of that same snapshot is still in progress. If the "hotplugDisk" call arrives in the middle of the "teardownImage" process, the VM's LV is deactivated as part of the teardown. By the time the SPM creates the transient disk with "qemu-img create", the parent volume has therefore already been deactivated by the snapshot teardown, so "qemu-img create" fails with "No such file or directory\nCould not open backing image to determine size."
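
To make the race concrete, below is a minimal sketch of the call the SPM effectively makes when it creates the transient disk. The volume paths are hypothetical placeholders, not values from this environment; once "teardownImage" has deactivated the backing LV, the device path is gone and qemu-img fails with exactly the error quoted above.

    import subprocess

    # Hypothetical paths; on a real host vdsm manages LVs under
    # /dev/<vg-uuid>/<lv-uuid>, and the device node disappears when
    # the LV is deactivated.
    backing = "/dev/vg-example/parent-volume"
    transient = "/tmp/transient-backup-disk.qcow2"

    # The transient disk is a qcow2 overlay on top of the snapshot volume.
    # qemu-img must open the backing image to determine its size, so a
    # missing backing path fails the whole create. (Newer qemu versions
    # also require '-F <backing format>' together with '-b'.)
    result = subprocess.run(
        ["qemu-img", "create", "-f", "qcow2", "-b", backing, transient],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # In the race this prints:
        #   ... No such file or directory
        #   Could not open backing image to determine size.
        print(result.stderr)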

Version-Release number of selected component (if applicable):

rhvm-4.2.4.5-0.1.el7_3.noarch
vdsm-4.20.32-1.el7ev.x86_64

How reproducible:

Observed in the customer environment

Steps to Reproduce:

The customer uses Commvault to back up the VMs, and the issue was observed in the customer environment. However, I tried to reproduce it manually using API scripts and was not able to trigger this race condition.


Actual results:

In a rare race condition, the hotplugDisk operation for the snapshot disk fails.

Expected results:



Additional info:

(Originally by Nijin Ashok)
Comment 5 RHV Bugzilla Automation and Verification Bot 2018-09-04 05:05:08 EDT
And what is the expectation here?

(Originally by michal.skrivanek)
Comment 6 RHV Bugzilla Automation and Verification Bot 2018-09-04 05:05:12 EDT
(In reply to Michal Skrivanek from comment #3)
> And what is the expectation here?

Currently, the API allows attaching a snapshot image to another VM when the creation of the same snapshot is in progress. Do we have to prevent this?

Also, it looks like the "disk attachment" to the VM is initiated by Commvault before confirming that the snapshot creation has completed. I think this has to be changed on the Commvault side as well; I will ask the customer to raise a request with Commvault. (A sketch of such a client-side check follows this comment.)

(Originally by Nijin Ashok)
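
For illustration, the client-side check suggested above could look roughly like the sketch below, written with the oVirt Python SDK (ovirtsdk4). The connection details and VM name are placeholders, and the polling interval is arbitrary:

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder connection details for a lab engine.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,  # lab setup only
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=VM1')[0]

    # Create the snapshot, then wait for it to leave the LOCKED state
    # before touching its disks.
    snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
    snapshot = snapshots_service.add(types.Snapshot(description='backup'))
    snapshot_service = snapshots_service.snapshot_service(snapshot.id)
    while snapshot_service.get().snapshot_status != types.SnapshotStatus.OK:
        time.sleep(5)
    # Only now is it safe to attach the snapshot disks to the backup VM.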
Comment 8 RHV Bugzilla Automation and Verification Bot 2018-09-04 05:05:21 EDT
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.2.z': '?'}', ]

For more info please contact: rhv-devops@redhat.com

(Originally by rhv-bugzilla-bot)
Comment 9 RHV Bugzilla Automation and Verification Bot 2018-09-04 05:05:25 EDT
Eyal, can you please provide a scenario to test this?

(Originally by Evelina Shames)
Comment 10 RHV Bugzilla Automation and Verification Bot 2018-09-04 05:05:29 EDT
Steps to reproduce:

1) Create a VM with a 10 GiB disk [VM1]
2) Create another VM without disks [BACKUP_VM]
3) Create a snapshot of VM1
4) While the snapshot is being created (and the VM is still locked), try to
   attach the snapshot to BACKUP_VM using the REST API (backup API)

The expected result after the fix:
The engine should not allow attaching the snapshot to another VM unless the snapshot status is 'OK' (see the sketch after this comment).

(Originally by Eyal Shenitzky)
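
For illustration, steps 3 and 4 above could be driven with the oVirt Python SDK (ovirtsdk4) roughly as sketched below; the credentials, VM names, and disk interface are placeholders. Before the fix, the attach was accepted and could hit the teardown race; with the fix, the engine rejects it while the snapshot is not in the OK state.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal', password='password', insecure=True,
    )
    vms_service = connection.system_service().vms_service()
    vm1 = vms_service.list(search='name=VM1')[0]
    backup_vm = vms_service.list(search='name=BACKUP_VM')[0]

    # Step 3: start snapshot creation but do NOT wait for it to finish.
    snapshots_service = vms_service.vm_service(vm1.id).snapshots_service()
    snapshot = snapshots_service.add(types.Snapshot(description='backup'))

    # Step 4: while the snapshot is still LOCKED, try to attach its disk
    # to BACKUP_VM. With the fix the engine fails this request because
    # the snapshot status is not OK.
    snap_disks = snapshots_service.snapshot_service(snapshot.id).disks_service()
    disk = snap_disks.list()[0]
    attachments = vms_service.vm_service(backup_vm.id).disk_attachments_service()
    try:
        attachments.add(
            types.DiskAttachment(
                disk=types.Disk(id=disk.id, snapshot=types.Snapshot(id=snapshot.id)),
                interface=types.DiskInterface.VIRTIO,
                active=True,
            )
        )
    except sdk.Error as error:
        print('Engine rejected the attach as expected:', error)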
Comment 11 RHV Bugzilla Automation and Verification Bot 2018-09-04 05:05:33 EDT
Verified.

Versions:
Engine - 4.2.6.3-1
vdsm - 4.20.37-3

(Originally by Evelina Shames)
Comment 12 RHV Bugzilla Automation and Verification Bot 2018-09-04 05:05:38 EDT
WARN: Bug status (VERIFIED) wasn't changed but the following should be fixed:

[Found non-acked flags: '{'rhevm-4.2.z': '?'}', ]

For more info please contact: rhv-devops@redhat.com

(Originally by rhv-bugzilla-bot)
Comment 13 Dusan Fodor 2018-09-04 06:28:30 EDT
Moving to VERIFIED as per the original clone.
Comment 15 errata-xmlrpc 2018-09-04 09:41:47 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2623
