Description of problem:
In our backup workload we read template disks concurrently, because we take concurrent backups of multiple VMs that share the same template. When the VMs are dependent clones and a template disk is read through the imageio API, one VM locks the disk, and until that read is over another process backing up a second VM cannot access it.

Version-Release number of selected component (if applicable):

How reproducible:
Always, with the steps below.

Steps to Reproduce:
1. Create two VMs (VM1, VM2) from a template as dependent clones.
2. Download all virtual disks of VM1 and simultaneously start the download for VM2.

Actual results:
Downloading the template disks for VM2 fails: the initiate transfer API call fails because the disk is already locked.

Expected results:
We want initiating a transfer for template disks to succeed, as these are read-only transfers.

Additional info:
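The failure mode above can be sketched as a toy model. All names here are hypothetical and for illustration only; the real locking happens inside the engine, not in client code:

```python
import threading

class DiskLockedError(Exception):
    """Raised when a transfer is initiated on an already-locked disk."""

class Disk:
    """Toy model of an exclusive per-disk lock that serialises
    transfers on a shared template disk (hypothetical sketch)."""
    def __init__(self, disk_id):
        self.disk_id = disk_id
        self._lock = threading.Lock()

    def initiate_transfer(self):
        # Exclusive lock: only one transfer at a time, even read-only ones.
        if not self._lock.acquire(blocking=False):
            raise DiskLockedError(f"disk {self.disk_id} is locked")
        return self

    def finalize_transfer(self):
        self._lock.release()

template_disk = Disk("template-1")

# VM1's backup starts reading the shared template disk...
template_disk.initiate_transfer()

# ...so VM2's backup fails to initiate a transfer for the same disk.
try:
    template_disk.initiate_transfer()
except DiskLockedError as e:
    print(f"second transfer failed: {e}")

# Only once VM1's download finishes can VM2 proceed.
template_disk.finalize_transfer()
template_disk.initiate_transfer()
print("second transfer started after first finished")
```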
@Tal - As the request is to read the same image concurrently, allowing this behaviour would require introducing some form of a global manager for prepare image and teardown image. Since this is a major change, I think it should be considered for a future release. As an alternative, the workaround for this behaviour is to download all template and image layers separately. This can be done using the 'templates' entity in the API, i.e. /ovirt-engine/api/templates/<id>/diskattachments
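The global prepare/teardown manager mentioned above could, in principle, be a reference-counted map: the image is prepared for the first reader and torn down only when the last concurrent reader finishes. This is a minimal sketch under that assumption; the class and method names are hypothetical, not the actual engine API:

```python
import threading
from collections import defaultdict

class PrepareManager:
    """Sketch of a global prepare/teardown manager: prepare an image
    on the first concurrent reader, tear it down on the last."""
    def __init__(self):
        self._lock = threading.Lock()
        self._refcounts = defaultdict(int)

    def prepare_image(self, image_id):
        with self._lock:
            self._refcounts[image_id] += 1
            if self._refcounts[image_id] == 1:
                # First reader: the real prepare would happen here.
                pass
            return self._refcounts[image_id]

    def teardown_image(self, image_id):
        with self._lock:
            self._refcounts[image_id] -= 1
            if self._refcounts[image_id] == 0:
                # Last reader gone: the real teardown would happen here.
                del self._refcounts[image_id]
                return True   # torn down
            return False      # still in use by other readers

mgr = PrepareManager()
mgr.prepare_image("template-disk")          # VM1's backup starts reading
mgr.prepare_image("template-disk")          # VM2 can read concurrently
print(mgr.teardown_image("template-disk"))  # VM1 still reading
print(mgr.teardown_image("template-disk"))  # last reader, image torn down
```

With this scheme a second read-only transfer on the same template disk would increment the count instead of failing on a lock, which is the behaviour requested in the bug.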
Fair enough, moving to 4.4 and setting as an RFE
*** Bug 1728693 has been marked as a duplicate of this bug. ***
According to Daniel E.'s comment this is a major change. In order to QE_ACK this we need a clear verification scenario. Can you please provide one? Is it only this one, or is more required?
1) Create two VMs (VM1, VM2) from a template as dependent clones.
2) Download all virtual disks of VM1 and simultaneously start the download for VM2.
3) Downloading the template disks for VM2 fails: the initiate transfer API call fails because the disk is already locked.
*** Bug 1428395 has been marked as a duplicate of this bug. ***
This bug/RFE is more than 2 years old, hasn't received enough attention so far, and is now flagged as pending close. Please review whether it is still relevant and provide additional details/justification/patches if you believe it should get more attention in the next oVirt release.
This is the upstream bug for bug 1879391. We should fix it both upstream and downstream.
QE doesn't have the capacity to verify this bug during 4.5.1.
Following the given steps to reproduce the issue, I couldn't reproduce it on a version without the fix.

Versions:
ovirt-engine-4.5.1-0.62.el8ev
vdsm-4.50.1.1-1.el8ev

Given steps to reproduce:
"
1) Create two VMs (VM1, VM2) from template as dependent clones.
2) Download all virtual disks of VM1 and simultaneously start the download for VM2.
3) Downloading the template disks for VM2 will fail: the initiate transfer API call fails because the disk is already locked.
"

I tried these steps on a non-fixed version just to see the different behavior compared with a fixed version. Maybe something in the steps is missing here?
(In reply to sshmulev from comment #15)
> Following the given steps to reproduce the issue, I couldn't reproduce it on
> a version without the fix:
>
> Versions:
> ovirt-engine-4.5.1-0.62.el8ev
> vdsm-4.50.1.1-1.el8ev
>
> Given steps to reproduce:
> "
> 1) Create two VMs (VM1, VM2) from template as dependent clones.
> 2) Download the all virtual disks for VM1 and start download for VM2
> simultaneously.
> 3) Downloading template VM disks for VM2 will fail (initiate transfer API)
> will fail as disk is already locked.
> "
>
> I tried to do these steps on a non-fixed version just to see the different
> behavior on a fixed version.
> Maybe something in the steps is missing here?

As discussed, this will be verified based on regression automation.
Verified based on automation regression tests related to download disks, as well as the rest of the tests of all tiers.

Versions:
ovirt-engine-4.5.2-0.3.el8ev.noarch
vdsm-4.50.2.1-1.el8ev.x86_64
This bugzilla is included in oVirt 4.5.2 release, published on August 10th 2022. Since the problem described in this bug report should be resolved in oVirt 4.5.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.