Bug 1510957

Summary: Avoid preparing and tearing down images during create snapshot while the VM isn't down

Product: [oVirt] ovirt-engine
Component: BLL.Storage
Version: 4.1.2
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Reporter: Ala Hino <ahino>
Assignee: Ala Hino <ahino>
QA Contact: Raz Tamir <ratamir>
CC: amureini, bugs
Target Milestone: ovirt-4.2.0
Flags: rule-engine: ovirt-4.2+
oVirt Team: Storage
Type: Bug
Last Closed: 2017-12-20 11:10:11 UTC

Description Ala Hino 2017-11-08 13:29:03 UTC
Description of problem:
If the VM is paused while a snapshot is being created, the engine attempts to tear down the image. Although the resulting exception is ignored and the operation does not fail, the engine should not attempt to tear down images unless the VM is down.
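
The fix implied by the summary is to make the prepare/teardown pair around the QEMU image-info query conditional on the VM being down. Below is a minimal sketch of that guard; the class and helper names are assumptions for illustration, not the actual oVirt patch:

// Hypothetical sketch, not the actual oVirt patch: guard the image
// prepare/teardown pair on VM state, so a live snapshot never tries to
// deactivate logical volumes that the running VM still holds open.
public final class QcowCompatUpdater {

    // Stand-ins for the VDS broker calls used by ImagesHandler.
    interface ImageOps {
        void prepareImage(String imageId);
        void teardownImage(String imageId);
        String readQemuImageInfo(String imageId);
    }

    private final ImageOps ops;

    public QcowCompatUpdater(ImageOps ops) {
        this.ops = ops;
    }

    public String qemuImageInfo(String imageId, boolean vmIsDown) {
        boolean prepared = false;
        try {
            if (vmIsDown) {
                // Only a down VM needs the image activated here; a running
                // VM already has the image's logical volumes active.
                ops.prepareImage(imageId);
                prepared = true;
            }
            return ops.readQemuImageInfo(imageId);
        } finally {
            if (prepared) {
                // Tear down only what this flow prepared.
                ops.teardownImage(imageId);
            }
        }
    }
}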

How reproducible:
Occasionally if the VM is paused during a create snapshot operation

Steps to Reproduce:
1. Create a snapshot while the VM is up

Actual results:
The snapshot is created successfully; however, a teardown of the volume is attempted (and fails, since the running VM still holds the logical volumes open). The following exception can be seen in the logs:

Engine:

2017-11-05 00:05:54,749+02 WARN  [org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler] (DefaultQuartzScheduler10) [snapshots_create_348e3da8-9f44-482f] Unable to tear down image: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/8a5b1082-5c18-4685-9671-0a233e22af1e in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/23f403d2-3122-48c9-b024-983fef2a2d1d in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/96b8ff51-6f78-4d45-8ec5-2c34869ea3b0 in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/c704290f-0be7-45e2-a223-a9da81ffd605 in use.\']\\n5ca8f689-f221-4bb9-a40d-427e0842b5e0/[\'c704290f-0be7-45e2-a223-a9da81ffd605\', \'23f403d2-3122-48c9-b024-983fef2a2d1d\', \'8a5b1082-5c18-4685-9671-0a233e22af1e\', \'96b8ff51-6f78-4d45-8ec5-2c34869ea3b0\']",)',) (Failed with error CannotDeactivateLogicalVolume and code 552)
        at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118) [bll.jar:]
        at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.teardownImage(ImagesHandler.java:1019) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getQemuImageInfoFromVdsm(ImagesHandler.java:860) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.setQcowCompat(BaseImagesCommand.java:432) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.setQcowCompatByQemuImageInfo(BaseImagesCommand.java:416) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.endSuccessfully(BaseImagesCommand.java:369) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.internalEndSuccessfully(CommandBase.java:736) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:694) [bll.jar:]


Vdsm:
2017-11-05 00:05:53,729+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH teardownImage error=Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/8a5b1082-5c18-4685-9671-0a233e22af1e in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/23f403d2-3122-48c9-b024-983fef2a2d1d in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/96b8ff51-6f78-4d45-8ec5-2c34869ea3b0 in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/c704290f-0be7-45e2-a223-a9da81ffd605 in use.\']\\n5ca8f689-f221-4bb9-a40d-427e0842b5e0/[\'c704290f-0be7-45e2-a223-a9da81ffd605\', \'23f403d2-3122-48c9-b024-983fef2a2d1d\', \'8a5b1082-5c18-4685-9671-0a233e22af1e\', \'96b8ff51-6f78-4d45-8ec5-2c34869ea3b0\']",)',) from=::ffff:10.35.69.94,46012, flow_id=snapshots_create_348e3da8-9f44-482f, task_id=ff0b6860-c43f-4423-8d6e-a898def7a001 (api:50)
2017-11-05 00:05:53,730+0200 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='ff0b6860-c43f-4423-8d6e-a898def7a001') Unexpected error (task:872)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 879, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in teardownImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3189, in teardownImage
    dom.deactivateImage(imgUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1289, in deactivateImage
    lvm.deactivateLVs(self.sdUUID, volUUIDs)
  File "/usr/share/vdsm/storage/lvm.py", line 1304, in deactivateLVs
    _setLVAvailability(vgName, toDeactivate, "n")
  File "/usr/share/vdsm/storage/lvm.py", line 843, in _setLVAvailability
    raise error(str(e))
CannotDeactivateLogicalVolume: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/8a5b1082-5c18-4685-9671-0a233e22af1e in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/23f403d2-3122-48c9-b024-983fef2a2d1d in use.\', \'  Logical volume 5ca8f689-f221-4bb9-a40d-427e0842b5e0/96b8ff51-6f78-


Expected results:
No attempt is made to tear down the image.

Comment 1 Raz Tamir 2017-11-15 12:03:22 UTC
Verified on ovirt-engine-4.2.0-0.0.master.20171113223918.git25568c3.el7.centos

Ran a VM with 2 disks, used dd to copy from the OS disk to the second disk, and created a snapshot during the dd.
The VM paused, but the issue described above was not observed.

Comment 2 Sandro Bonazzola 2017-12-20 11:10:11 UTC
This bug is included in the oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be resolved in that release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.