Bug 1162756
| Field | Value |
| --- | --- |
| Summary | [engine-backend] [automation] AddDiskCommand throws a NullPointerException |
| Product | Red Hat Enterprise Virtualization Manager |
| Component | ovirt-engine |
| Version | 3.5.0 |
| Hardware | x86_64 |
| OS | Unspecified |
| Status | CLOSED DUPLICATE |
| Severity | urgent |
| Priority | unspecified |
| Reporter | Elad <ebenahar> |
| Assignee | Nobody <nobody> |
| CC | amureini, dfediuck, ecohen, gchaplik, gklein, iheim, istein, lpeer, lsurette, mavital, rbalakri, Rhev-m-bugs, tnisan, vered, yeylon |
| Target Milestone | --- |
| Target Release | 3.5.0 |
| Keywords | Triaged |
| Whiteboard | sla |
| oVirt Team | SLA |
| Doc Type | Bug Fix |
| Story Points | --- |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| Cloudforms Team | --- |
| Last Closed | 2014-12-09 12:33:44 UTC |
Description

Elad 2014-11-11 16:06:26 UTC

I had this same error on another automatic test, run on 3.4 (av13): http://jenkins.qa.lab.tlv.redhat.com:8080/view/Compute/view/3.4-git/view/Virt/job/3.4-git-compute-virt-reg_vms-nfs/60/consoleFull. It failed VM removal.

I also had this same problem on another 3.4 test (av13): http://jenkins.qa.lab.tlv.redhat.com:8080/view/Compute/view/3.4-git/view/Virt/job/3.4-git-compute-virt-templates-nfs/71/consoleFull (see 18:16:35 Detail: [Cannot remove VM: Storage Domain cannot be accessed.]), which also failed VM removal. On the next run, however, the test passed: http://jenkins.qa.lab.tlv.redhat.com:8080/view/Compute/view/3.4-git/view/Virt/job/3.4-git-compute-virt-templates-nfs/72/

setAndValidateDiskProfiles is an SLA flow; moving to SLA so the subject matter experts can investigate.

Vered, the NPE in disk profiles was caused by no storage domain (SD) being provided; this was solved in bug 1168525 (a minimal illustrative guard is sketched after this thread). Ilanit mentioned in comment 1 and comment 2 that the same issue also occurs in 3.4. I can close the bug as a duplicate, but once you clear the NPE, I sense there's a storage issue or a problem with the test. What do you say?

Hi Gilad, you can go ahead and close. As it stands, this bug doesn't have enough info about the flow / reproduction that preceded the NPE. So even though you're probably right, we'll just have to get to it when we get a clear bug or stumble on it ourselves. Thanks for the heads up.

These are the steps that were executed as part of this automation job (the kill-qemu step is sketched in code after this thread):

- Create a VM and install an OS
- Attach a second read-only (RO) disk to the VM and hotplug it
- Kill the qemu process of the VM
- Start the VM again

Link to the test case in TCMS: https://tcms.engineering.redhat.com/case/334921/

The disk creation operation failed, as we can learn from the job console log:

04:49:01 DiskNotFound: Disk virtio_cow_True_disk was not found in vm's Global_vm_1 disk collection

Console log: http://jenkins.qa.lab.tlv.redhat.com:8080/view/Storage/view/3.5/job/3.5-storage_read_only_disks-nfs/9/consoleFull

According to Elad's comment, moving back to storage.

The automation tries to deactivate an already-inactive disk (see the last sketch after this thread for one way to avoid that):

04:41:22 2014-11-08 04:41:23,189 - MainThread - api_utils - ERROR - Failed to syncAction element:
04:41:22 Status: 409
04:41:22 Reason: Conflict
04:41:22 Detail: [Disk is already deactivated.]

In any event, this has nothing to do with the NPE that was originally reported. Elad - if there's a real, consistent failure in the automation, please open a new bug and supply the logs. Closing this one as a dup, as Gilad suggested.

*** This bug has been marked as a duplicate of bug 1168525 ***
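For context on the root cause: the NPE fired when disk-profile validation received a disk that carried no storage domain ID. The following is a minimal illustrative sketch, not the actual ovirt-engine code or the bug 1168525 fix; the type and member names (`DiskImage`, `validateDiskProfile`, `profilesByDomain`) are assumptions made up for illustration. It shows the kind of null guard that turns the crash into an ordinary validation failure.

```java
// Hypothetical sketch -- NOT the real ovirt-engine implementation. It only
// illustrates guarding against a missing storage domain ID, the condition
// that produced the NullPointerException reported in this bug.
import java.util.Map;
import java.util.UUID;

final class DiskProfileValidationSketch {

    /** Minimal stand-in for the engine's disk type (assumed shape). */
    interface DiskImage {
        UUID getStorageDomainId();
        UUID getDiskProfileId();
    }

    static boolean validateDiskProfile(DiskImage disk, Map<UUID, UUID> profilesByDomain) {
        UUID domainId = disk.getStorageDomainId();
        if (domainId == null) {
            // Without this check, the lookup below proceeds with a null key and
            // later dereferences a null result, i.e. the reported NPE. Returning
            // false surfaces a validation failure instead of crashing the command.
            return false;
        }
        UUID expectedProfile = profilesByDomain.get(domainId);
        return expectedProfile != null && expectedProfile.equals(disk.getDiskProfileId());
    }
}
```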
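The "kill the qemu process" reproduction step is the unusual one, so here is a hedged sketch of how a test could perform it on the hypervisor. The `pkill` pattern and the argument handling are assumptions for illustration (the VM name `Global_vm_1` is taken from the job log); qemu's command line embeds the VM name, which is what `-f` matches against.

```java
// Hedged illustration of the "kill qemu process" step: forcibly kill the VM's
// qemu process on the host so the engine observes an ungraceful shutdown.
import java.io.IOException;

final class KillQemuStep {
    public static void main(String[] args) throws IOException, InterruptedException {
        // VM name from the job log; passed in for real use (assumed default here).
        String vmName = args.length > 0 ? args[0] : "Global_vm_1";

        // pkill -f matches against the full command line, which includes the
        // VM name that the engine passed to qemu.
        Process p = new ProcessBuilder("pkill", "-9", "-f", "qemu.*" + vmName)
                .inheritIO()
                .start();
        System.out.println("pkill exit code: " + p.waitFor());
    }
}
```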
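As for the 409 Conflict, the automation could avoid it by checking the disk's state before posting the deactivate action. A minimal sketch follows, assuming the oVirt 3.x REST layout as I understand it (GET `/api/vms/{vm}/disks/{disk}` returning an `<active>` element, POST to `.../deactivate` with an `<action/>` body) and an already-encoded basic-auth token; verify both against your engine version.

```java
// Hedged sketch: only POST the deactivate action when the disk is active,
// avoiding the "Disk is already deactivated" 409 seen in the job log.
// URL paths and XML shapes are assumptions to verify per engine version.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

final class DeactivateIfActive {

    static void deactivate(String engineUrl, String basicAuth, String vmId, String diskId)
            throws IOException {
        String diskUrl = engineUrl + "/api/vms/" + vmId + "/disks/" + diskId;

        // 1. Read the disk resource and look for <active>true</active>.
        HttpURLConnection get = (HttpURLConnection) new URL(diskUrl).openConnection();
        get.setRequestProperty("Authorization", "Basic " + basicAuth);
        String body;
        try (InputStream in = get.getInputStream()) {
            body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
        if (!body.contains("<active>true</active>")) {
            // Skipping here is what prevents the 409 Conflict.
            System.out.println("Disk already inactive; skipping deactivate.");
            return;
        }

        // 2. POST the deactivate action only when the disk is actually active.
        HttpURLConnection post =
                (HttpURLConnection) new URL(diskUrl + "/deactivate").openConnection();
        post.setRequestMethod("POST");
        post.setRequestProperty("Authorization", "Basic " + basicAuth);
        post.setRequestProperty("Content-Type", "application/xml");
        post.setDoOutput(true);
        try (OutputStream out = post.getOutputStream()) {
            out.write("<action/>".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Deactivate returned HTTP " + post.getResponseCode());
    }
}
```

The same check-before-act pattern applies regardless of client language; the point is that deactivate is not idempotent at the API level, so the test should treat "already inactive" as success rather than as a failure.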