Description of problem:

When glance is configured to store images in the filesystem backend, nova fails to properly copy the image into vCenter, resulting in:

/vmfs/volumes/5485966a-0fb158a9-daa3-d4ae52c6d8ea # du -h 8f716066-5838-4598-8aed-ace46d9c3259/*
512.0M  8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259-1af5abfa.vswp
8.0K    8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259-delta.vmdk
1.0M    8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259.nvram
0       8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259.vmdk
0       8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259.vmsd
8.0K    8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259.vmx
0       8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259.vmx.lck
0       8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259.vmxf
8.0K    8f716066-5838-4598-8aed-ace46d9c3259/8f716066-5838-4598-8aed-ace46d9c3259.vmx~
1.0M    8f716066-5838-4598-8aed-ace46d9c3259/vmware.log
109.0M  8f716066-5838-4598-8aed-ace46d9c3259/vmx-8f716066-5838-4598-8aed-ace46d9c3259-452307962-1.vswp

Notice the vmdk files have ~0B size.

Version-Release number of selected component (if applicable):
openstack-nova-compute-2015.1.0-16.el7ost.noarch
openstack-glance-2015.1.0-6.el7ost.noarch

How reproducible:
1/1

Steps to Reproduce:
1. ssh a.b.c.d '. keystonerc_admin; glance image-create --copy-from foobar.redhat.com/OpenStack/cirros-0.3.3.customized-x86_64-disk.vmdk --name cirros --container-format bare --disk-format vmdk'
2. nova boot --image cirros --flavor m1.tiny foo

Actual results:
VM fails to boot. No bootable disk.

Expected results:
VM booted.

Additional info:
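A quick way to rule out the glance side before nova ever touches the image is to compare the glance checksum (md5 in this release) against the file on the controller. A hedged sketch; <image-id> is a placeholder and /var/lib/glance/images is only the default filesystem_store_datadir:

$ glance image-show cirros | grep -E 'checksum|size'
$ md5sum /var/lib/glance/images/<image-id>

If those match, the corruption happens during the nova copy to the datastore, not during the glance import.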
Setting high severity, because together with BZ#1133566 this means we are not able to boot any image in VMware. I am 80% positive that this was working before; I will check that.
This happens with RHOS 6 as well (machine won't boot, complaining about no bootable device), though the image was "cached" fine the first time I booted it using openstack:

| checksum         | 216cf467f7a1ddda32d386b184b18eca     |
| container_format | bare                                 |
| disk_format      | vmdk                                 |
| id               | 51bc21ca-055a-4eef-a139-271672b54b0c |

216cf467f7a1ddda32d386b184b18eca  /vmfs/volumes/5485966a-0fb158a9-daa3-d4ae52c6d8ea/10.34.69.130_base/51bc21ca-055a-4eef-a139-271672b54b0c/51bc21ca-055a-4eef-a139-271672b54b0c-flat.vmdk

I find it suspicious that there are files containing ".1" after the second boot attempt. It smells to me like the caching algorithm is broken, as I would expect two attempts to boot the same image to use the same base image.

$ ls /vmfs/volumes/5485966a-0fb158a9-daa3-d4ae52c6d8ea/X.Y.Z.A_base/51bc21ca-055a-4eef-a139-271672b54b0c/
51bc21ca-055a-4eef-a139-271672b54b0c-flat.vmdk
51bc21ca-055a-4eef-a139-271672b54b0c.1.vmdk
51bc21ca-055a-4eef-a139-271672b54b0c.1-flat.vmdk
51bc21ca-055a-4eef-a139-271672b54b0c.vmdk

I have found there is a way to check the vmdk. It looks OK, but I don't know what it is actually checking:

/vmfs/volumes/5485966a-0fb158a9-daa3-d4ae52c6d8ea/cc80b8f2-889e-4f66-b40f-19fb9eeb6721 # vmkfstools -x check cc80b8f2-889e-4f66-b40f-19fb9eeb6721.vmdk
Disk is error free

But it seems that there is some problem with one of the "cached" images:

/vmfs/volumes/5485966a-0fb158a9-daa3-d4ae52c6d8ea/10.34.69.130_base/51bc21ca-055a-4eef-a139-271672b54b0c # ls | xargs -t -n1 vmkfstools -x check
vmkfstools -x check 51bc21ca-055a-4eef-a139-271672b54b0c-flat.vmdk
Disk is error free
vmkfstools -x check 51bc21ca-055a-4eef-a139-271672b54b0c.1-flat.vmdk
Disk needs repair. Disk has scary/unexplainable errors.
vmkfstools -x check 51bc21ca-055a-4eef-a139-271672b54b0c.1.vmdk
Disk is error free
vmkfstools -x check 51bc21ca-055a-4eef-a139-271672b54b0c.vmdk
Disk is error free

Note that I am using the LSI driver.
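For completeness, vmkfstools can also attempt a repair of the broken cached copy. A sketch only; if the flat file content was truncated during the copy, a metadata repair probably won't make it bootable:

/vmfs/volumes/5485966a-0fb158a9-daa3-d4ae52c6d8ea/10.34.69.130_base/51bc21ca-055a-4eef-a139-271672b54b0c # vmkfstools -x repair 51bc21ca-055a-4eef-a139-271672b54b0c.1-flat.vmdk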
Jaroslav - can you try booting the corresponding -flat file instead? The xx.vmdk file is the descriptor file and the xx-flat.vmdk file is the file that contains the actual bootable content.
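For reference, a descriptor is just a small text stub pointing at the flat extent, roughly like the following (illustrative content only; the exact CID, extent size, and ddb fields will differ):

# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 2097152 VMFS "51bc21ca-055a-4eef-a139-271672b54b0c-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "4"

Which also suggests the 0-byte xx.vmdk files from the original report are themselves broken: a valid descriptor cannot be empty.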
First of all, I don't understand what I was writing above. I must have meant that it fails when vmware is used as the store. Second, I am having trouble reproducing because there is a new issue with certificates:

Oct 09 17:06:56 cloudimg.localdomain glance-api[25783]: /usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html
Oct 09 17:06:56 cloudimg.localdomain glance-api[25783]: InsecureRequestWarning)
Oct 09 17:06:56 cloudimg.localdomain glance-api[25783]: /usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html

This is surely caused by some new version of urllib3 or some encryption-related library. I had a similar problem on Fedora just a couple of months ago.
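The warning itself is generic urllib3 behaviour whenever certificate verification is disabled, and is reproducible outside glance. A minimal sketch; the vCenter host is a placeholder:

>>> import requests
>>> requests.get('https://vcenter.example.com', verify=False)   # hypothetical host
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:768: InsecureRequestWarning: Unverified HTTPS request is being made. ...
>>> requests.packages.urllib3.disable_warnings()   # silences the warning, not the underlying insecurity

So the warnings are cosmetic; the real failure is the verification error they accompany.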
I worked around the SSL issue. I am able to start glance with this config:

vmware_api_insecure = True
stores = glance.store.filesystem.Store, glance.store.http.Store, glance.store.vmware_datastore.Store
default_store = vsphere
vmware_datastores = New Datacenter:shared:1
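For anyone else reproducing this: on Kilo these options live in the [glance_store] section of glance-api.conf, roughly as below (a sketch under that assumption; the vmware_server_* values are placeholders for the vCenter connection credentials):

[glance_store]
stores = glance.store.filesystem.Store,glance.store.http.Store,glance.store.vmware_datastore.Store
default_store = vsphere
vmware_server_host = <vcenter-host>
vmware_server_username = <user>
vmware_server_password = <password>
vmware_api_insecure = True
vmware_datastores = New Datacenter:shared:1

Restart glance-api after changing the store configuration.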
Since I think I made a mistake describing this as failing when using file as a backend, I tried vsphere as a backend, but I cannot reproduce with

openstack-glance-2015.1.1-3.el7ost.noarch
python-oslo-vmware.noarch 0.11.1-1.el7ost @rhelosp-7.0-puddle

because of:

Uploading image data for image 6b3ea762-5006-429b-8140-37772b2f36b7 to vsphere store _upload /usr/lib/python2.7/site-packages/glance/api/v1/images.py:629
2015-10-09 18:11:54.658 31564 DEBUG oslo_messaging._drivers.amqp [req-8221d549-9844-4d80-93ca-b417bb9ea2d5 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] UNIQUE_ID is 115607153b96439bb99614fc2d43069f. _add_unique_id /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:258
2015-10-09 18:11:54.662 31564 DEBUG oslo_vmware.api [req-8221d549-9844-4d80-93ca-b417bb9ea2d5 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
2015-10-09 18:11:54.673 31564 DEBUG oslo_vmware.api [req-8221d549-9844-4d80-93ca-b417bb9ea2d5 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] Checking if the current session: 36c6a is active. is_current_session_active /usr/lib/python2.7/site-packages/oslo_vmware/api.py:349
2015-10-09 18:11:54.692 31564 ERROR glance_store._drivers.vmware_datastore [req-8221d549-9844-4d80-93ca-b417bb9ea2d5 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] Communication error sending http PUT request to the url /folder/openstack_glance/6b3ea762-5006-429b-8140-37772b2f36b7%3FdcPath%3DNew%2BDatacenter%26dsName%3Ddatastore1. Got IOError [Errno 32] Broken pipe
2015-10-09 18:11:54.694 31564 ERROR glance.api.v1.upload_utils [req-8221d549-9844-4d80-93ca-b417bb9ea2d5 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] Failed to upload image 6b3ea762-5006-429b-8140-37772b2f36b7
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py", line 113, in upload_data_to_store
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils     context=req.context)
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 339, in store_add_to_backend
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils     context=context)
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/capabilities.py", line 226, in op_checker
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils     return store_op_fun(store, *args, **kwargs)
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/_drivers/vmware_datastore.py", line 534, in add
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils     'body': res.body})
2015-10-09 18:11:54.694 31564 TRACE glance.api.v1.upload_utils AttributeError: HTTPResponse instance has no attribute 'b

when doing:

glance image-create --copy-from http://srh-14/OpenStack/cirros-0.3.3.customized-x86_64-disk.vmdk --name cirros --container-format bare --disk-format vmdk
After removing the formatting of res.body into the log message, the result is:

2015-10-09 18:47:51.227 34893 ERROR glance_store._drivers.vmware_datastore [req-4228c504-ebe8-4005-bbe8-5d0885fcfc01 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] Communication error sending http PUT request to the url /folder/openstack_glance/67e98d24-b5ac-4d4b-bed5-dcc7b8615123%3FdcPath%3DNew%2BDatacenter%26dsName%3Ddatastore1. Got IOError [Errno 32] Broken pipe
2015-10-09 18:47:51.228 34893 ERROR glance_store._drivers.vmware_datastore [req-4228c504-ebe8-4005-bbe8-5d0885fcfc01 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] Failed to upload content of image 67e98d24-b5ac-4d4b-bed5-dcc7b8615123. The request returned an unexpected status: 301.
2015-10-09 18:47:51.228 34893 ERROR glance.api.v1.upload_utils [req-4228c504-ebe8-4005-bbe8-5d0885fcfc01 f64b26732e9e46e384b01b4d8546db46 23c5f8a744114c90b03d70d99703a555 - - -] Failed to upload image 67e98d24-b5ac-4d4b-bed5-dcc7b8615123
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance/api/v1/upload_utils.py", line 113, in upload_data_to_store
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils     context=req.context)
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 339, in store_add_to_backend
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils     context=context)
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/capabilities.py", line 226, in op_checker
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils     return store_op_fun(store, *args, **kwargs)
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.7/site-packages/glance_store/_drivers/vmware_datastore.py", line 534, in add
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils     raise exceptions.BackendException(msg)
2015-10-09 18:47:51.228 34893 TRACE glance.api.v1.upload_utils BackendException: Failed to upload content of image 67e98d24-b5ac-4d4b-bed5-dcc7b8615123. The request returned an unexpected status: 301.

The response headers are:

[('date', 'Fri, 9 Oct 2015 22:54:15 GMT'), ('connection', 'close'), ('content-type', 'text/html'), ('location', 'https://xxx/folder/openstack_glance/2715f05e-96a8-43b9-bec2-03705efd9dcb%3FdcPath%3DNew%2BDatacenter%26dsName%3Ddatastore1'), ('content-length', '56')]

So the problem with reproducing is: if I use api_insecure = False, I hit the cert issue; if I use api_insecure = True, I get the HTTP 301 redirect to HTTPS.
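The redirect can be confirmed without glance at all. A hedged curl sketch; host, image id, and local file are placeholders:

$ curl -i -X PUT --data-binary @cirros-0.3.3.customized-x86_64-disk.vmdk 'http://<datastore-host>/folder/openstack_glance/<image-id>?dcPath=New+Datacenter&dsName=datastore1'
HTTP/1.1 301 Moved Permanently
Location: https://<datastore-host>/folder/openstack_glance/<image-id>?dcPath=New+Datacenter&dsName=datastore1

That would also explain the Broken pipe: the server answers 301 with 'connection: close' while the store is still streaming the request body, and the client (plain httplib, judging by the HTTPResponse instance in the traceback, which never follows redirects) gets EPIPE on the next write.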
Recategorizing to glance after discussion on the nova triage call.
- Has this been reported upstream?
- What backport is being requested here?

I can look into this but I'd appreciate some more info about the above. I'll poke some of the folks upstream.
Hi Flavio, I have no idea whether someone here has reported it upstream; I didn't. The backport we need is for OSP 6: customers using Juno are facing this issue.
Eduardo, you mentioned that c#4 referred to something that needs to be backported, but there's no info on what backport is being requested. What patch needs to be backported? What's the exact issue we're talking about here? Not enough information has been provided to investigate or solve this issue, so I'll proceed to close it. Feel free to re-open it if you manage to reproduce the issue or if you have more info on what exactly is being proposed for backport.