Bug 1657762
Summary: | Extend volume failed with error "VM <vm_id> has been paused due to no Storage space error." and "VM <vm_id> is down with error. Exit message: Lost connection with qemu process."
---|---
Product: | [oVirt] ovirt-engine
Component: | BLL.Storage
Status: | CLOSED DEFERRED
Severity: | medium
Priority: | medium
Version: | 4.2.8
Hardware: | x86_64
OS: | Unspecified
Reporter: | Yosi Ben Shimon <ybenshim>
Assignee: | Nobody <nobody>
QA Contact: | Avihai <aefrat>
CC: | aefrat, bugs, eshenitz, tnisan
Keywords: | Automation
Target Milestone: | ---
Target Release: | ---
Doc Type: | If docs needed, set a value
Story Points: | ---
Type: | Bug
Last Closed: | 2021-09-29 11:33:06 UTC
Regression: | ---
Mount Type: | ---
Documentation: | ---
Category: | ---
oVirt Team: | Storage
Cloudforms Team: | ---
Description
Yosi Ben Shimon
2018-12-10 11:52:16 UTC
The same thing happened again in another test case when extending the disk. The errors are a bit different, but this looks related to the same issue.

From the engine log:

```
2018-12-10 13:45:40,367+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] VM '460e606c-fb09-4a21-aab8-6165490ebeb9'(vm_TestCase5061_1013432638) moved from 'Up' --> 'Paused'
2018-12-10 13:45:40,386+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] EVENT_ID: VM_PAUSED(1,025), VM vm_TestCase5061_1013432638 has been paused.
```

From the VDSM log:

```
2018-12-10 13:45:55,425+0200 ERROR (jsonrpc/2) [jsonrpc.JsonRpcServer] Internal server error (__init__:611)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
    result = fn(*methodArgs)
  File "<string>", line 2, in dumpxmls
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1430, in dumpxmls
    for vmId in vmList}
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1430, in <dictcomp>
    for vmId in vmList}
KeyError: u'460e606c-fb09-4a21-aab8-6165490ebeb9'
```

Attaching logs for this TC execution (logs_2.zip).

Created attachment 1513679 [details]
logs_2
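
The `KeyError` in the `dumpxmls` dict comprehension suggests a race: the VM id was in the requested list, but the VM had already vanished from VDSM's VM container (here, because qemu died) by the time the comprehension looked it up. A minimal sketch of that failure pattern and a defensive variant, using illustrative names (`vm_container`, `dump_xmls_*` are not VDSM's actual API):

```python
# Hypothetical stand-in for VDSM's in-memory map of running VMs
# (id -> domain XML). In the real race, the VM entry disappears
# between the caller building vm_list and the lookup below.
vm_container = {"460e606c": "<domain .../>"}

def dump_xmls_unsafe(vm_list):
    # Mirrors the failing pattern from the traceback: a plain dict
    # comprehension raises KeyError for a VM that is no longer present.
    return {vm_id: vm_container[vm_id] for vm_id in vm_list}

def dump_xmls_safe(vm_list):
    # Defensive variant: silently skip VMs that have gone away,
    # returning results only for VMs still known to the host.
    return {vm_id: vm_container[vm_id]
            for vm_id in vm_list
            if vm_id in vm_container}

# "deadbeef" plays the role of the VM that was destroyed mid-call.
try:
    dump_xmls_unsafe(["460e606c", "deadbeef"])
except KeyError as err:
    print("unsafe lookup raised KeyError:", err)

print(dump_xmls_safe(["460e606c", "deadbeef"]))
```

Whether skipping or reporting the missing VM is the right fix is a design decision for the API; the sketch only shows why the request blows up with an internal server error instead of a per-VM failure.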
Steps to reproduce according to the test cases:

TestCase5063 (the 1st test case - bug Description):
1. Create a VM and start it
2. Add a thin-provisioned qcow disk of size 1G and attach it to the VM
3. Extend the disk by 1G
4. Start writing to the disk using dd
5. Check that the disk is actually growing

TestCase5061 (the 2nd test case - comment #1):
1. Create a VM and start it
2. Add multiple disk permutations of size 1G, attach them to the VM, and activate them
3. Create a snapshot (with memory)
4. Extend each disk by 1G (synchronized)

This bug has not been marked as a blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Another reproduction of this issue on rhv-4.2.8-7 (ovirt-engine-4.2.8.3-0.1.el7ev.noarch), automation tier1, TestCase5063. Logs are attached.

Engine log:

```
2019-02-10 06:57:34,272+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-7) [] EVENT_ID: VM_PAUSED_ENOSPC(138), VM vm_TestCase5063_1006540117 has been paused due to no Storage space error.
2019-02-10 06:57:50,637+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] VM '66596aa9-4775-4416-94ed-abf01f7dd178' was reported as Down on VDS '950c516f-2c3e-4943-8525-b0a519997293'(host_mixed_2)
2019-02-10 06:57:50,638+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] START, DestroyVDSCommand(HostName = host_mixed_2, DestroyVmVDSCommandParameters:{hostId='950c516f-2c3e-4943-8525-b0a519997293', vmId='66596aa9-4775-4416-94ed-abf01f7dd178', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 69d79ebd
2019-02-10 06:57:50,644+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] FINISH, DestroyVDSCommand, log id: 69d79ebd
2019-02-10 06:57:50,644+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] VM '66596aa9-4775-4416-94ed-abf01f7dd178'(vm_TestCase5063_1006540117) moved from 'Paused' --> 'Down'
2019-02-10 06:57:50,661+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm_TestCase5063_1006540117 is down with error. Exit message: Lost connection with qemu process.
```

Created attachment 1535639 [details]
rhv-4.2.8.7_TestCase5063_logs
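
Steps 4-5 of TestCase5063 rely on thin-provisioning semantics: the disk has a large virtual size, but real allocation only grows as dd writes data. A small local illustration of that behavior using a sparse file (this is only a sketch of the semantics being tested, not how oVirt/VDSM measure qcow2 allocation on a storage domain; sizes are scaled down from the 1G disk in the test case):

```python
import os
import tempfile

size = 1 << 20  # 1 MiB here; the test case uses a 1G disk

# "Extend" step: truncate makes a sparse file, so the apparent size
# grows without allocating data blocks yet.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(size)

allocated_before = os.stat(path).st_blocks * 512  # real allocation, bytes

# "dd write" step: writing real data forces the filesystem to allocate
# blocks, which is what step 5 of the test case checks for.
with open(path, "r+b") as f:
    f.write(b"\xff" * size)
    f.flush()
    os.fsync(f.fileno())  # make sure allocation is reflected in st_blocks

allocated_after = os.stat(path).st_blocks * 512
print("before:", allocated_before, "after:", allocated_after)
os.remove(path)
```

The bug manifests exactly at this point: when the underlying thin volume cannot be extended fast enough (or at all), qemu gets ENOSPC, the VM pauses with `VM_PAUSED_ENOSPC`, and in the worse case the qemu process is lost.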
This bug/RFE is more than 2 years old, has not received enough attention so far, and is now flagged as pending close. Please review whether it is still relevant and provide additional details/justification/patches if you believe it should get more attention for the next oVirt release.

This bug didn't get any attention in a long time, and it's not planned for the foreseeable future. The oVirt development team has no plans to work on it. Please feel free to reopen if you have a plan for how to contribute this feature/bug fix.