Description of problem:

Adding a disk to a vm with the required properties fails with a NullPointerException if no disk.storage_domains is provided (it was not required in the previous version; I also checked the rsdl and it is not marked as required).

Version-Release number of selected component (if applicable):
ovirt-engine-restapi-3.6.0-0.0.master.20150412172306.git55ba764.el6.noarch
ovirt-engine-3.6.0-0.0.master.20150412172306.git55ba764.el6.noarch

How reproducible:
100%

Steps to Reproduce:
POST /api/vms/4e0f8579-04a6-407d-acbc-d9ae17ca86d8/disks

<disk>
  <alias>vm_334691_virtio_cow_True_disk</alias>
  <size>10737418240</size>
  <interface>virtio</interface>
  <format>cow</format>
  <sparse>true</sparse>
  <active>true</active>
</disk>

Actual results:
500 - NullPointerException

Expected results:
The operation succeeds, adding the disk to 'some' default storage domain (I think the previous version used the master domain).

If there was some change for 3.6 about this operation that I'm not aware of, then update the rsdl and also fail with a proper message: "property disk.storage_domains is required".

Additional info:
Actually this is a bug, since adding a direct lun disk also fails with the same error (500 - NullPointerException) unless the storage_domains.storage_domain property is provided
I think that this issue is a side effect of the following patch:

restapi: support creation of Cinder disks
https://gerrit.ovirt.org/39328

In particular there is a problem in the new code in BackendVmDisksResource:

https://gerrit.ovirt.org/#/c/39328/9/backend/manager/modules/restapi/jaxrs/src/main/java/org/ovirt/engine/api/restapi/resource/BackendVmDisksResource.java@182

The "id.toString()" call generates the NPE.
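For illustration, a minimal, self-contained Java sketch of the failure mode described above; the names are hypothetical and this is not the actual BackendVmDisksResource source:

// Hypothetical reconstruction of the failing pattern, not the engine code.
public class NpeSketch {

    // Stands in for reading disk.storage_domains from the posted XML;
    // returns null when the element was omitted, as in the POST above.
    static String extractStorageDomainId(String postedId) {
        return postedId;
    }

    public static void main(String[] args) {
        String id = extractStorageDomainId(null);
        // The unconditional dereference is the "id.toString()" call above:
        System.out.println(id.toString()); // throws NullPointerException
    }
}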
(In reply to Juan Hernández from comment #2)
> I think that this issue is a side effect of the following patch:
>
> restapi: support creation of Cinder disks
> https://gerrit.ovirt.org/39328
>
> In particular there is a problem in the new code in BackendVmDisksResource:
>
> https://gerrit.ovirt.org/#/c/39328/9/backend/manager/modules/restapi/jaxrs/src/main/java/org/ovirt/engine/api/restapi/resource/BackendVmDisksResource.java@182
>
> The "id.toString()" call generates the NPE.

Pushed a patch to address the issue. Note that for disk images, storage domain is mandatory (see bug 1120768).
(In reply to Daniel Erez from comment #3)
> Pushed a patch to address the issue. Note that for disk images, storage
> domain is mandatory (see bug 1120768).

Daniel: I have a question regarding bug 1120768. Adding a disk without specifying a storage domain to a vm (without disks) fails with:

[Cannot add Virtual Machine Disk. Storage Domain hasn't been specified.]

but it doesn't fail if the vm already has other disk(s) attached. Should this also be considered a bug, since it is not consistent?
(In reply to Carlos Mestre González from comment #4)
> Daniel: I have a question regarding bug 1120768. Adding a disk without
> specifying a storage domain to a vm (without disks) fails with:
>
> [Cannot add Virtual Machine Disk. Storage Domain hasn't been specified.]
>
> but it doesn't fail if the vm already has other disk(s) attached. Should
> this also be considered a bug, since it is not consistent?

Do you mean that it doesn't fail if the VM already has some disks? In which domain is the disk created, the same as the first disk?
Yes, the disk is created in the same domain as the first disk. I also checked moving/deleting disks around:

1. create vm with disk in domain 1
2. add a new disk -> it is added to domain 1
3. move the first disk to domain 2
4. add a new disk -> it is still added to domain 1
5. remove the first disk, add a new bootable disk to domain 3
6. add a new disk -> it is still added to domain 1

(the master domain is yet another domain)
(In reply to Carlos Mestre González from comment #6)
> Yes, the disk is created in the same domain as the first disk. I also
> checked moving/deleting disks around:
>
> 1. create vm with disk in domain 1
> 2. add a new disk -> it is added to domain 1
> 3. move the first disk to domain 2
> 4. add a new disk -> it is still added to domain 1
> 5. remove the first disk, add a new bootable disk to domain 3
> 6. add a new disk -> it is still added to domain 1
>
> (the master domain is yet another domain)

OK, so yes. This is needed for backwards compatibility with the time when VM disks could reside only on a single storage domain - i.e. pre-MSD (Multiple Storage Domains) feature. So when no storage domain is specified, the first disk's storage domain is used as a fallback.
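To make the described fallback concrete, a hedged Java sketch of the selection logic (assumed shape only; the actual engine implementation differs):

import java.util.List;

// Illustrative sketch of the pre-MSD fallback described above.
public class StorageDomainFallback {

    // requestedDomainId is null when the caller omitted disk.storage_domains;
    // existingDiskDomainIds are the domains of the VM's current disks.
    static String resolveStorageDomain(String requestedDomainId,
                                       List<String> existingDiskDomainIds) {
        if (requestedDomainId != null) {
            return requestedDomainId; // explicit disk.storage_domains wins
        }
        // Fallback: reuse the storage domain of the VM's first disk, which
        // matches the behavior observed in comment 6 (always "domain 1").
        return existingDiskDomainIds.isEmpty()
                ? null // no disks at all -> the canDo error from comment 4
                : existingDiskDomainIds.get(0);
    }
}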
*** Bug 1223482 has been marked as a duplicate of this bug. ***
What happens if we have no storage domain at all, as when we are deploying hosted-engine?
In the past (up to 3.5.z) it was working.
(In reply to Simone Tiraboschi from comment #9)
> What happens if we have no storage domain at all, as when we are deploying
> hosted-engine?
> In the past (up to 3.5.z) it was working.

Hi Simone,

If you have no active storage domain, the response should be an appropriate canDo message. What is the request you made and the response you got?
Hi Daniel,
in hosted-engine-setup, deploying on iSCSI (and now also on FC), we were adding the LUN where we deployed the engine VM as a direct LUN, to prevent any other usage of that LUN which could destroy the engine itself.
Please see:
https://bugzilla.redhat.com/show_bug.cgi?id=1157243
https://bugzilla.redhat.com/show_bug.cgi?id=1157238

The hosted-engine storage domain is not managed by the engine and probably we can't have it for 3.6.
Please see:
https://bugzilla.redhat.com/show_bug.cgi?id=1215158

So, when hosted-engine-setup tries to add the LUN as a direct LUN, no storage domain is known by the engine, but it should still succeed.

Maybe we could identify this LUN with a special flag to indicate that it is the hosted-engine one.
(In reply to Simone Tiraboschi from comment #11)
> So, when hosted-engine-setup tries to add the LUN as a direct LUN, no
> storage domain is known by the engine, but it should still succeed.
>
> Maybe we could identify this LUN with a special flag to indicate that it is
> the hosted-engine one.

Not sure I follow. This bug is merely about creating a disk from the rest-api. If the error happens only in the hosted-engine-setup flow, it should probably be addressed in a new bug. Can you please add the api request and the given response?
(In reply to Daniel Erez from comment #12)
> Not sure I follow. This bug is merely about creating a disk from the
> rest-api.

Yes, we had one but it was closed as a duplicate of this one:
https://bugzilla.redhat.com/show_bug.cgi?id=1223482

> If the error happens only in the hosted-engine-setup flow, it should
> probably be addressed in a new bug. Can you please add the api request and
> the given response?

I'll try to reproduce on fresh master code and, if needed, I'll reopen 1223482.
Ok, I tried with a fresh engine build from yesterday and it still reproduces.
ovirt-engine.noarch 3.6.0-0.0.master.20150528052151.git92dca0e.el7.centos

On the hosted-engine side I'm getting:

2015-05-29 16:06:13 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_disk add_disk._closeup:153 Connecting to the Engine
2015-05-29 16:06:14 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_disk add_disk._closeup:199 Cannot add the Hosted Engine VM Disk to the engine
Traceback (most recent call last):
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/engine/add_disk.py", line 195, in _closeup
    engine_api.disks.add(disk)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 10651, in add
    headers={"Correlation-Id":correlation_id, "Expect":expect}
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 75, in add
    return self.request('POST', url, body, headers)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 115, in request
    persistent_auth=self.__persistent_auth
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
    persistent_auth)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 154, in __do_request
    raise errors.RequestError(response_code, response_reason, response_body)
RequestError:
status: 500
reason: Internal
detail: HTTP Status 500
2015-05-29 16:06:14 ERROR otopi.plugins.ovirt_hosted_engine_setup.engine.add_disk add_disk._closeup:202 Cannot add the Hosted Engine VM Disk to the engine
2015-05-29 16:06:14 DEBUG otopi.context context._executeMethod:155 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 145, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/engine/add_disk.py", line 205, in _closeup
    _('Cannot add the Hosted Engine VM Disk to the engine')
RuntimeError: Cannot add the Hosted Engine VM Disk to the engine

I'm going to reopen 1223482 if it's not a duplicate of this one.
I'm doing:

engine_api = ovirtsdk.api.API(
    url='https://{fqdn}/ovirt-engine/api'.format(fqdn=fqdn),
    username='admin@internal',
    password=password,
    ca_file=cafile,
)
lun = ovirtsdk.xml.params.LogicalUnit(
    id='330000000e5380848',
    address='192.168.1.125',
    port=3260,
    target='iqn.2015-03.com.redhat:simone1',
    username='',
    password='',
)
stype = 'iscsi'
disk = ovirtsdk.xml.params.Disk(
    alias='hosted_engine',
    description='Hosted Engine Image',
    interface='virtio_scsi',
    sgio='unfiltered',
    format='raw',
    lun_storage=ovirtsdk.xml.params.Storage(
        type_=stype,
        logical_unit=[
            lun,
        ],
    ),
)
engine_api.disks.add(disk)
engine_api.disconnect()

and I'm getting:

Traceback (most recent call last):
  File "test_1223482.py", line 41, in <module>
    engine_api.disks.add(disk)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 10651, in add
    headers={"Correlation-Id":correlation_id, "Expect":expect}
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 75, in add
    return self.request('POST', url, body, headers)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 115, in request
    persistent_auth=self.__persistent_auth
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
    persistent_auth)
  File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 154, in __do_request
    raise errors.RequestError(response_code, response_reason, response_body)
ovirtsdk.infrastructure.errors.RequestError:
status: 500
reason: Internal
detail: HTTP Status 500
In server.log I still found:

Caused by: java.lang.NullPointerException
    at org.ovirt.engine.api.restapi.resource.BackendDisksResource.getStorageDomainById(BackendDisksResource.java:66) [restapi-jaxrs.jar:]
    at org.ovirt.engine.api.restapi.resource.BackendDisksResource.add(BackendDisksResource.java:38) [restapi-jaxrs.jar:]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_79]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_79]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_79]
    at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_79]
    at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:155) [resteasy-jaxrs-2.3.2.Final.jar:]
    at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:257) [resteasy-jaxrs-2.3.2.Final.jar:]
    at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:222) [resteasy-jaxrs-2.3.2.Final.jar:]
    at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:211) [resteasy-jaxrs-2.3.2.Final.jar:]
    at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:525) [resteasy-jaxrs-2.3.2.Final.jar:]
    ... 56 more
By the way, I saw that in patch 40826 you fixed:
backend/manager/modules/restapi/jaxrs/src/main/java/org/ovirt/engine/api/restapi/resource/BackendVmDisksResource.java

while my NullPointerException comes from (*):
backend/manager/modules/restapi/jaxrs/src/main/java/org/ovirt/engine/api/restapi/resource/BackendDisksResource.java

* There is a missing 'Vm' in the file/class name.
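A hedged sketch of the kind of null-guard BackendDisksResource would need (hypothetical shape only, not Daniel's actual patch):

import java.util.List;

// Illustrative only; names and structure are assumptions, not engine source.
public class DisksResourceGuard {

    static String firstStorageDomainId(List<String> postedStorageDomainIds) {
        // A direct LUN disk legitimately posts no <storage_domains>, so the
        // list may be null or empty; returning null here instead of
        // dereferencing avoids the NPE seen in getStorageDomainById.
        if (postedStorageDomainIds == null || postedStorageDomainIds.isEmpty()) {
            return null;
        }
        return postedStorageDomainIds.get(0);
    }
}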
(In reply to Simone Tiraboschi from comment #17)
> By the way, I saw that in patch 40826 you fixed:
> backend/manager/modules/restapi/jaxrs/src/main/java/org/ovirt/engine/api/restapi/resource/BackendVmDisksResource.java
>
> while my NullPointerException comes from (*):
> backend/manager/modules/restapi/jaxrs/src/main/java/org/ovirt/engine/api/restapi/resource/BackendDisksResource.java
>
> * There is a missing 'Vm' in the file/class name.

Pushed a new patch for that. Thanks Simone!
*** Bug 1225337 has been marked as a duplicate of this bug. ***
*** Bug 1215623 has been marked as a duplicate of this bug. ***
ovirt-3.6.0-3 release
Verified on oVirt 3.6
Hi Daniel,

I was testing the last build [0] and something changed related to this (maybe).

Adding a disk to a vm with already existing disks, without specifying the sd, fails:

Detail: [Cannot add Virtual Machine Disk. Storage Domain hasn't been specified.]

Does this mean that what you said in your comment #6, regarding adding a disk without the sd for backwards compatibility, is no longer relevant?

If that's the case there's an issue though: the rsdl still shows the parameter is not needed:

<parameter required="false" type="collection"><name>disk.storage_domains.storage_domain</name>

from <link href="/api/vms/{vm:id}/disks" rel="add">

[0] ovirt-engine-3.6.0-0.0.master.20150726172446.git65db93d.el6.noarch
(In reply to Carlos Mestre González from comment #24)
> Adding a disk to a vm with already existing disks, without specifying the
> sd, fails:
>
> Detail: [Cannot add Virtual Machine Disk. Storage Domain hasn't been
> specified.]

Right. Sent a patch to restore the auto-select behavior of a storage domain:
https://gerrit.ovirt.org/#/c/44270/

@Juan - do you think it's worth keeping that backwards compatibility behavior? (note that MSD was introduced back in 3.1..)
(In reply to Daniel Erez from comment #25)
> @Juan - do you think it's worth keeping that backwards compatibility
> behavior? (note that MSD was introduced back in 3.1..)

Yes, backwards compatibility is a must.
oVirt 3.6.0 has been released on November 4th, 2015 and should fix this issue. If problems still persist, please open a new BZ and reference this one.