Bug 1416386 - hotplug_disk test is failing due to deactivate Logical Volume failure
Summary: hotplug_disk test is failing due to deactivate Logical Volume failure
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: future
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Allon Mureinik
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-25 12:10 UTC by rhev-integ
Modified: 2017-03-27 11:23 UTC
CC List: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-02-13 11:43:07 UTC
oVirt Team: Storage
Embargoed:
eedri: planning_ack?
eedri: devel_ack?
eedri: testing_ack?


Attachments
Logs of engine and vdsm for that suite of tests (1.58 MB, application/x-gzip)
2017-01-25 12:10 UTC, rhev-integ

Description rhev-integ 2017-01-25 12:10:52 UTC
Created attachment 1244224
Logs of engine and vdsm for that suite of tests

Description of problem:
hotplug_disk test is failing due to deactivate Logical Volume failure

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Exception in engine logs is:
{"jsonrpc": "2.0", "id": "4b8eb810-c52d-4c92-a792-e554f87c9493", "error": {"message": "Cannot deactivate Logical Volume: ('General Storage Exception: (\"5 [] [\\'  WARNING: Not using lvmetad because config setting use_lvmetad=0.\\', \\'  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\\', \\'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5 in use.\\', \\'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb in use.\\']\\\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\\', \\'ab4e8962-6196-485e-be2a-d5791a38eaeb\\']\",)',)", "code": 552}}�
2017-01-25 05:05:49,037-05 DEBUG [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] Message received: {"jsonrpc": "2.0", "id": "4b8eb810-c52d-4c92-a792-e554f87c9493", "error": {"message": "Cannot deactivate Logical Volume: ('General Storage Exception: (\"5 [] [\\'  WARNING: Not using lvmetad because config setting use_lvmetad=0.\\', \\'  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\\', \\'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5 in use.\\', \\'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb in use.\\']\\\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\\', \\'ab4e8962-6196-485e-be2a-d5791a38eaeb\\']\",)',)", "code": 552}}
2017-01-25 05:05:49,047-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [59ab00f1] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  WARNING: Not using lvmetad because config setting use_lvmetad=0.\', \'  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\', \'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5 in use.\', \'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\', \'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
2017-01-25 05:05:49,047-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler5) [59ab00f1] Command 'TeardownImageVDSCommand(HostName = lago-basic-suite-master-host0, ImageActionsVDSCommandParameters:{runAsync='true', hostId='60e52527-b637-445c-b408-0275d347e76a'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  WARNING: Not using lvmetad because config setting use_lvmetad=0.\', \'  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\', \'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5 in use.\', \'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\', \'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
2017-01-25 05:05:49,047-05 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler5) [59ab00f1] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  WARNING: Not using lvmetad because config setting use_lvmetad=0.\', \'  WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\', \'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5 in use.\', \'  Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\', \'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
	at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:182) [vdsbroker.jar:]
	at org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:20) [vdsbroker.jar:]
	at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
	at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73) [vdsbroker.jar:]
	at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
	at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
	at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:407) [vdsbroker.jar:]
	at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.teardownImage(ImagesHandler.java:1007) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getQemuImageInfoFromVdsm(ImagesHandler.java:856) [bll.jar:]
	at org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.endSuccessfully(BaseImagesCommand.java:367) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.internalEndSuccessfully(CommandBase.java:736) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:694) [bll.jar:]
	at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2057) [bll.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
	at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
	at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:559) [bll.jar:]
	at org.ovirt.engine.core.bll.Backend.endAction(Backend.java:536) [bll.jar:]
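
For illustration, here is a minimal Python sketch of the step that fails here: deactivating the image's logical volumes and, when that fails, checking the device-mapper open count. This is not VDSM's actual implementation; the VG and LV names are copied from the log above, and 'lvchange -an' stands in for whatever deactivation call VDSM issues. An open count above zero means some process (e.g. qemu) or another dm device still holds the LV, which is exactly the "in use" condition reported.

import subprocess

# VG and LV names copied from the failure in the log above.
VG = "f9dce023-0282-4185-9ad9-fe71c3975106"
LVS = [
    "778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5",
    "ab4e8962-6196-485e-be2a-d5791a38eaeb",
]

def run(cmd):
    # Run an external command, capturing stdout/stderr as text.
    return subprocess.run(cmd, capture_output=True, text=True)

for lv in LVS:
    # 'lvchange -an VG/LV' deactivates the LV; a non-zero exit with
    # "Logical volume ... in use" on stderr is the failure quoted above.
    result = run(["lvchange", "-an", f"{VG}/{lv}"])
    if result.returncode != 0:
        print(f"deactivation failed (rc={result.returncode}): "
              f"{result.stderr.strip()}")
        # LVM's device-mapper names double any dash inside the VG/LV
        # names; an open count above 0 means something still holds the LV.
        dm_name = f"{VG.replace('-', '--')}-{lv.replace('-', '--')}"
        info = run(["dmsetup", "info", "-c", "--noheadings", "-o", "open",
                    dm_name])
        print(f"dm open count for {lv}: {info.stdout.strip()}")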

Comment 1 rhev-integ 2017-01-25 12:34:09 UTC
Link to Jenkins build:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4951/

Comment 2 Tal Nisan 2017-01-25 13:58:38 UTC
Eyal, please tell your team not to open bugs under rhev-integ, only under their own users; otherwise it leaves us without a point of contact when needed.

Comment 3 Eyal Edri 2017-01-25 14:05:16 UTC
Fair enough, I'll make sure to communicate it.
Thanks for raising it; it was probably just saved in the Bugzilla cache from a
previous login.

Comment 4 gshinar 2017-01-25 14:22:04 UTC
Sorry for using rhev-integ; I'm the one who opened the bug.

Comment 5 Tal Nisan 2017-01-25 16:52:27 UTC
This seems related to the lvmetad bugs, as the LV is in use:

Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5 in use

Can you please try running the tests again with the lvmetad service disabled? (See the sketch below for what that involves.)

Also, regardless, please make sure to fill in the "How reproducible" field; it's useful for us to know whether it fails on every run of the tests or only sporadically.
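
For reference, a minimal sketch of what disabling lvmetad involves, assuming a systemd-based host with /etc/lvm/lvm.conf and the lvm2-lvmetad.service/lvm2-lvmetad.socket units; this is not part of the test suite and would need to run as root before re-running the tests:

import re
import subprocess

LVM_CONF = "/etc/lvm/lvm.conf"

# 1. Force use_lvmetad = 0 so the LVM tools stop consulting the daemon
#    (the WARNINGs in the log show the tools already see use_lvmetad=0,
#    while the daemon itself may still be running).
with open(LVM_CONF) as f:
    conf = f.read()
conf = re.sub(r"^(\s*)use_lvmetad\s*=\s*\d", r"\g<1>use_lvmetad = 0",
              conf, flags=re.MULTILINE)
with open(LVM_CONF, "w") as f:
    f.write(conf)

# 2. Stop and mask the daemon and its socket so it cannot be
#    socket-activated again during the test run.
for unit in ("lvm2-lvmetad.service", "lvm2-lvmetad.socket"):
    subprocess.run(["systemctl", "stop", unit], check=False)
    subprocess.run(["systemctl", "mask", unit], check=False)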

Comment 6 Tal Nisan 2017-02-13 11:43:07 UTC
Gil, lvmetad will be disabled automatically in the new VDSM build, which will most likely resolve this bug. If you see it happen again in the system tests, please reopen this bug.
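
For anyone verifying a host afterwards, a quick sketch (an assumed helper, not anything shipped with VDSM) that checks the state the new build is expected to enforce, i.e. use_lvmetad = 0 and the lvmetad units inactive:

import re
import subprocess

# Check the config: use_lvmetad must be 0.
with open("/etc/lvm/lvm.conf") as f:
    match = re.search(r"^\s*use_lvmetad\s*=\s*(\d)", f.read(), re.MULTILINE)
assert match and match.group(1) == "0", "use_lvmetad is not 0"

# Check the units: 'systemctl is-active' exits non-zero when inactive.
for unit in ("lvm2-lvmetad.service", "lvm2-lvmetad.socket"):
    rc = subprocess.run(["systemctl", "is-active", "--quiet", unit]).returncode
    assert rc != 0, f"{unit} is still active"

print("lvmetad is disabled on this host")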

Comment 7 gshinar 2017-03-27 11:23:33 UTC
Great, thanks!

