This bug was initially created as a copy of Bug #1838653

Description of problem:

[1] Glance API calls Cinder to create a volume. The volume is created, but since image_size is 0 (presumably), glance_store seems to extend it incrementally, starting at 2 GB. In this case, the qcow2 has a virtual_size of 8 GB.
[2] Cinder fails to resize because that would shrink the image.

This is not reproducible with raw images, only with qcow2.

Version-Release number of selected component (if applicable):
containers_ga-cinder-api 16.0-90
containers_ga-cinder-volume 16.0-90
containers_ga-glance-api 16.0-91
containers_ga-cinder-scheduler 16.0-92

How reproducible:
All the time

Steps to Reproduce:
1. Configure glance_store with cinder
2. Upload a qcow2 image

Actual results:
The upload fails because Glance ends up asking Cinder to shrink the volume.

Expected results:
The upload succeeds.

Additional info:

[1] ~~~ 2020-05-18 13:45:08.589 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.224', 52042) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:08.591 34 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: PUT /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file Accept: */* process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:45 2020-05-18 13:45:08.592 34 DEBUG glance.api.middleware.version_negotiation [-] Using url versioning process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:57 2020-05-18 13:45:08.592 34 DEBUG glance.api.middleware.version_negotiation [-] Matched version: v2 process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:69 2020-05-18 13:45:08.593 34 DEBUG glance.api.middleware.version_negotiation [-] new path /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:70 2020-05-18 13:45:08.685 34 DEBUG glance_store._drivers.cinder 
[req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Cinderclient connection created for user glance using URL: http://192.168.66.207:5000/v3. get_cinderclient /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:375 2020-05-18 13:45:08.686 34 DEBUG glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Creating a new volume: image_size=0 size_gb=1 type=None add /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:695 2020-05-18 13:45:08.686 34 INFO glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Since image size is zero, we will be doing resize-before-write for each GB which will be considerably slower than normal. 2020-05-18 13:45:10.332 25 DEBUG eventlet.wsgi.server [-] (25) accepted ('192.168.66.148', 34534) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.336 29 DEBUG eventlet.wsgi.server [-] (29) accepted ('192.168.66.224', 52126) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.337 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.201', 60758) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.338 25 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004603 2020-05-18 13:45:10.342 29 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004269 2020-05-18 13:45:10.344 34 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004926 2020-05-18 13:45:12.252 34 DEBUG os_brick.utils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - 
default default] ==> get_connector_properties: call "{'root_helper': 'sudo glance-rootwrap /etc/glance/rootwrap.conf', 'my_ip': 'txslst02nce-controller-0', 'multipath': False, 'enforce_multipath': False, 'host': None, 'execute': None}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:12.255 29557 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.270 29557 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.271 29557 DEBUG oslo.privsep.daemon [-] privsep: reply[139807552648912]: (4, ('InitiatorName=iqn.1994-05.com.redhat:913cc589542f\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475 2020-05-18 13:45:12.272 34 DEBUG os_brick.initiator.linuxfc [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:134 2020-05-18 13:45:12.272 34 DEBUG os_brick.initiator.linuxfc [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] No Fibre Channel support detected on system. 
get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:134 2020-05-18 13:45:12.275 29557 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /sys/class/dmi/id/product_uuid execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.286 29557 DEBUG oslo_concurrency.processutils [-] CMD "cat /sys/class/dmi/id/product_uuid" returned: 0 in 0.011s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.286 29557 DEBUG oslo.privsep.daemon [-] privsep: reply[139807552648912]: (4, ('2f772d41-0a46-4678-b173-76ddab9fe358\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475 2020-05-18 13:45:12.288 34 DEBUG os_brick.utils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== get_connector_properties: return (35ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': 'txslst02nce-controller-0', 'host': 'txslst02nce-controller-0', 'multipath': False, 'initiator': 'iqn.1994-05.com.redhat:913cc589542f', 'do_local_attach': False, 'system uuid': '2f772d41-0a46-4678-b173-76ddab9fe358'} trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:12.338 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.148', 34640) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.344 34 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:12] "GET /healthcheck HTTP/1.0" 200 137 0.004465 2020-05-18 13:45:12.344 31 DEBUG eventlet.wsgi.server [-] (31) accepted ('192.168.66.224', 52218) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.345 26 DEBUG eventlet.wsgi.server [-] (26) accepted ('192.168.66.201', 60844) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.350 31 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:12] 
"GET /healthcheck HTTP/1.0" 200 137 0.004284 2020-05-18 13:45:12.351 26 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:12] "GET /healthcheck HTTP/1.0" 200 137 0.004348 2020-05-18 13:45:12.742 34 DEBUG os_brick.initiator.connector [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Factory for nfs on None factory /usr/lib/python3.6/site-packages/os_brick/initiator/connector.py:279 2020-05-18 13:45:12.744 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] ==> connect_volume: call "{'self': <os_brick.initiator.connectors.remotefs.RemoteFsConnector object at 0x7f277b5f2a90>, 'connection_properties': {'export': '192.168.76.99:/stack2_nfs_2', 'name': 'volume-9e718bf8-8fc4-42e6-be6e-7507425937d2', 'options': None, 'format': 'raw', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False}}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:12.744 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): mount execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.764 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "mount" returned: 0 in 0.020s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.766 34 DEBUG os_brick.remotefs.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Already mounted: /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6 mount 
/usr/lib/python3.6/site-packages/os_brick/remotefs/remotefs.py:100 2020-05-18 13:45:12.767 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== connect_volume: return (22ms) {'path': '/var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2'} trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:13.366 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): sudo glance-rootwrap /etc/glance/rootwrap.conf chown 42415 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:13.764 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "sudo glance-rootwrap /etc/glance/rootwrap.conf chown 42415 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.398s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 [...] 
2020-05-18 13:45:30.763 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): sudo glance-rootwrap /etc/glance/rootwrap.conf chown 65534 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:31.212 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "sudo glance-rootwrap /etc/glance/rootwrap.conf chown 65534 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.449s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:31.366 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] ==> disconnect_volume: call "{'self': <os_brick.initiator.connectors.remotefs.RemoteFsConnector object at 0x7f277b5f2a90>, 'connection_properties': {'export': '192.168.76.99:/stack2_nfs_2', 'name': 'volume-9e718bf8-8fc4-42e6-be6e-7507425937d2', 'options': None, 'format': 'raw', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False}, 'device_info': {'path': '/var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2'}, 'force': False, 'ignore_errors': False}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:31.366 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== disconnect_volume: return (0ms) None trace_logging_wrapper 
/usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:31.873 34 DEBUG glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Extending volume 9e718bf8-8fc4-42e6-be6e-7507425937d2 to 2 GB. add /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:733 2020-05-18 13:45:32.416 24 DEBUG eventlet.wsgi.server [-] (24) accepted ('192.168.66.148', 35566) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.417 28 DEBUG eventlet.wsgi.server [-] (28) accepted ('192.168.66.224', 53116) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.421 26 DEBUG eventlet.wsgi.server [-] (26) accepted ('192.168.66.201', 33612) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.422 24 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004758 2020-05-18 13:45:32.423 28 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004617 2020-05-18 13:45:32.427 26 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004280 2020-05-18 13:45:32.947 34 ERROR glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] The status of volume 9e718bf8-8fc4-42e6-be6e-7507425937d2 is unexpected: status = error_extending, expected = available. 2020-05-18 13:45:32.947 34 ERROR glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Failed to write to volume 9e718bf8-8fc4-42e6-be6e-7507425937d2.: glance_store.exceptions.StorageFull: There is not enough disk space on the image storage media. 
2020-05-18 13:45:33.103 34 ERROR glance.api.v2.image_data [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Failed to upload image data due to HTTP error: webob.exc.HTTPRequestEntityTooLarge: Image storage media is full: There is not enough disk space on the image storage media. 2020-05-18 13:45:34.026 34 INFO eventlet.wsgi.server [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] 192.168.66.224 - - [18/May/2020 13:45:34] "PUT /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file HTTP/1.1" 413 444 25.434945 ~~~ [2] ~~~ 2020-05-18 13:45:32.196 81 INFO cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Extending volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2. 2020-05-18 13:45:32.197 81 DEBUG cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Checking file for resize _resize_image_file /usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py:649 2020-05-18 13:45:32.198 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C qemu-img info /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:32.353 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] CMD "/usr/bin/python3 -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C qemu-img info /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.155s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:32.356 81 INFO cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Resizing file to 2G 2020-05-18 13:45:32.357 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Running cmd (subprocess): qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:32.399 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] CMD "qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G" returned: 1 in 0.042s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:32.401 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] 'qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G' failed. Not Retrying. 
execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:457 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Extend volume failed.: cinder.exception.VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Failed to extend volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2, Error msg: Unexpected error while running command. Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G Exit code: 1 Stdout: '' Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n". 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Traceback (most recent call last): 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 801, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager self._resize_image_file(path, new_size) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 727, in trace_method_logging_wrapper 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return f(*args, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 655, in _resize_image_file 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager run_as_root=self._execute_as_root) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 334, in resize_image 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager utils.execute(*cmd, 
run_as_root=run_as_root) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 126, in execute 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return processutils.execute(*cmd, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 424, in execute 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager cmd=sanitized_cmd) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Exit code: 1 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stdout: '' 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. 
Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n" 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager During handling of the above exception, another exception occurred: 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Traceback (most recent call last): 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/manager.py", line 2733, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager self.driver.extend_volume(volume, new_size) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 727, in trace_method_logging_wrapper 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return f(*args, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 807, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager raise exception.VolumeBackendAPIException(data=exception_msg) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager cinder.exception.VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Failed to extend volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2, Error msg: Unexpected error while running command. 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Exit code: 1 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stdout: '' 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. 
Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n". 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager ~~~
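The stderr in log [2] is qemu-img's standard refusal to shrink an image: Cinder asked for 2G, but because no format was forced on the resize, qemu-img probed the file, found a qcow2 header advertising an 8G virtual size, and treated the request as a shrink. A minimal sketch of that behavior, runnable anywhere qemu-img is installed (the file name is illustrative):

```shell
# Illustrative reproduction of the failure mode, outside of OpenStack entirely.
qemu-img create -f qcow2 disk.img 8G   # qcow2 whose header says virtual size = 8G
qemu-img resize disk.img 2G            # format is probed as qcow2: 2G < 8G is a
                                       # shrink, refused without --shrink (exit 1)
qemu-img resize -f raw disk.img 2G     # forced raw: the header is ignored and the
                                       # small backing file is grown to 2G (exit 0)
```

With `-f raw`, qemu-img operates on the file's actual size instead of the embedded qcow2 virtual size, which is why the verification logs in the later comment show `qemu-img resize -f raw` succeeding.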
Verified on: python3-glance-store-1.0.2-1.20220110223456.el8ost.noarch

On a deployment with Glance over Cinder over NetApp NFS. Cinder's default raw volume format was left in place. Using this large image:

(overcloud) [stack@undercloud-0 ~]$ qemu-img info windows_server_2012_r2_standard_eval_kvm_20170321.qcow2
image: windows_server_2012_r2_standard_eval_kvm_20170321.qcow2
file format: qcow2
virtual size: 12.2 GiB (13096714240 bytes)
disk size: 11.2 GiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    compression type: zlib
    refcount bits: 16

First attempt: let's upload it to Glance as a qcow2.

(overcloud) [stack@undercloud-0 ~]$ glance image-create --name Windows.qcow2 --disk-format qcow2 --container-format bare --file windows_server_2012_r2_standard_eval_kvm_20170321.qcow2 --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | a05ead3a04ae663da77eee5d2cb2fa73                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-03-07T14:41:41Z                                                             |
| direct_url       | cinder://6dc44b30-20ce-4c6b-9f94-11edc235a11b                                    |
| disk_format      | qcow2                                                                            |
| id               | 79934905-be78-49f0-8604-fd45dbe87f13                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | Windows.qcow2                                                                    |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 9bd12698b1cb46e09243fd5704e14292e7393c84a4de178f536caaf21b9222c94d5080cbec69eafe |
|                  | 69fd7a7694fe14d792425c5fbb89a89727d2d2615e62890a                                 |
| os_hidden        | False                                                                            |
| owner            | 8cd92774d2a24296b519b8f6382781c6                                                 |
| protected        | False                                                                            |
| size             | 12001017856                                                                      |
| status           | active                                                                           |
| stores           | default_backend                                                                  |
| tags             | []                                                                               |
| updated_at       | 2022-03-07T14:45:16Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list --all 
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 6dc44b30-20ce-4c6b-9f94-11edc235a11b | 2332dec9e9d34e51bc0a9e7e57930439 | available | image-79934905-be78-49f0-8604-fd45dbe87f13 | 12   | tripleo     | false    |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

Great, the image uploaded successfully. Because the image's disk size of 11.2 GiB is far larger than the default 1 GiB initial volume size, we expected (and, as the log below shows, got) repeated extend/resize cycles on the backing Cinder volume. 
Cinder volume log reports several resize cycles:

[root@controller-2 cinder]# grep -irn resize cinder-volume.log
7386:2022-03-07 14:44:07.295 47 DEBUG oslo_concurrency.processutils [req-a695ff01-6346-4821-9ec4-304a48e378a3 074c1b76b6f74ca1b0cfd85192181f00 2332dec9e9d34e51bc0a9e7e57930439 - default default] Running cmd (subprocess): qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-6dc44b30-20ce-4c6b-9f94-11edc235a11b 8G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
7387:2022-03-07 14:44:07.314 47 DEBUG oslo_concurrency.processutils [req-a695ff01-6346-4821-9ec4-304a48e378a3 074c1b76b6f74ca1b0cfd85192181f00 2332dec9e9d34e51bc0a9e7e57930439 - default default] CMD "qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-6dc44b30-20ce-4c6b-9f94-11edc235a11b 8G" returned: 0 in 0.019s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
7408:2022-03-07 14:44:22.585 47 DEBUG cinder.volume.drivers.netapp.dataontap.nfs_base [req-acfd0a0a-fb5a-437c-a523-51b317ecfe8b 074c1b76b6f74ca1b0cfd85192181f00 2332dec9e9d34e51bc0a9e7e57930439 - default default] Checking file for resize _resize_image_file /usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py:651
7412:2022-03-07 14:44:22.666 47 DEBUG oslo_concurrency.processutils [req-acfd0a0a-fb5a-437c-a523-51b317ecfe8b 074c1b76b6f74ca1b0cfd85192181f00 2332dec9e9d34e51bc0a9e7e57930439 - default default] Running cmd (subprocess): qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-6dc44b30-20ce-4c6b-9f94-11edc235a11b 9G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
7413:2022-03-07 14:44:22.685 47 DEBUG oslo_concurrency.processutils [req-acfd0a0a-fb5a-437c-a523-51b317ecfe8b 074c1b76b6f74ca1b0cfd85192181f00 2332dec9e9d34e51bc0a9e7e57930439 - default default] CMD "qemu-img resize -f raw 
/var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-6dc44b30-20ce-4c6b-9f94-11edc235a11b 9G" returned: 0 in 0.019s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409

I know the bug was about uploading a qcow2, but since I already have this large source image, let's retest with a raw source image just for fun.

qemu-img convert -f qcow2 -O raw windows_server_2012_r2_standard_eval_kvm_20170321.qcow2 windows_server_2012_r2_standard_eval_kvm_20170321.raw

(overcloud) [stack@undercloud-0 ~]$ glance image-create --name Windows.raw --disk-format raw --container-format bare --file windows_server_2012_r2_standard_eval_kvm_20170321.raw --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ff159818151720930f5790aa65a36f06                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-03-07T14:47:51Z                                                             |
| direct_url       | cinder://5250e594-0488-46d7-a94c-0a94a6527a0c                                    |
| disk_format      | raw                                                                              |
| id               | 727d9f61-66c7-4cc3-948d-69e6a28876ad                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | Windows.raw                                                                      |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | daff8b086e029322a410345f104e2cbaafd61e248827400d123c3dfe983f5890b5633d9dcc06eba5 |
|                  | 9fc74c3eaacc047148c5c31c594a2db55c8712efae244555                                 |
| os_hidden        | False                                                                            |
| owner            | 8cd92774d2a24296b519b8f6382781c6                                                 |
| protected        | False                                                                            |
| size             | 13096714240                                                                      |
| status           | active                                                                           |
| stores           | default_backend                                                                  |
| tags             | []                                                                               |
| updated_at       | 2022-03-07T14:51:04Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list --all 
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 5250e594-0488-46d7-a94c-0a94a6527a0c | 2332dec9e9d34e51bc0a9e7e57930439 | available | image-727d9f61-66c7-4cc3-948d-69e6a28876ad | 13   | tripleo     | false    |             |
| 6dc44b30-20ce-4c6b-9f94-11edc235a11b | 2332dec9e9d34e51bc0a9e7e57930439 | available | image-79934905-be78-49f0-8604-fd45dbe87f13 | 12   | tripleo     | false    |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

Again the upload to Glance over Cinder over NetApp NFS succeeded. As noted above, extend is no longer broken: it works with both qcow2 and raw source images. Good to verify.
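The repeated 8G and 9G extends in the grep output above come from glance_store's resize-before-write behavior: when the image size is unknown (image_size=0), the volume starts at 1 GiB and is grown one gibibyte at a time as data is written. The loop below is an illustrative model of that behavior, not the actual glance_store code; the names resize_before_write, extend_volume, and write_chunk are made up for the sketch.

```python
def resize_before_write(image_chunks, extend_volume, write_chunk, gb=1024 ** 3):
    """Illustrative model of glance_store's resize-before-write path.

    The volume starts at 1 GiB; whenever the next chunk would overflow the
    current size, the volume is extended by 1 GiB (in the real driver, this
    is where cinder ends up running "qemu-img resize <file> <N>G").
    """
    size_gb = 1
    written = 0
    for chunk in image_chunks:
        while written + len(chunk) > size_gb * gb:
            size_gb += 1
            extend_volume(size_gb)
        write_chunk(chunk)
        written += len(chunk)
    return size_gb
```

In the failing case, the very first of these extend calls (to 2 GB) ran qemu-img resize on the NFS-backed file without forcing a format, and the qcow2 header inside the nominally raw volume made the 2 GB target look like a shrink of an 8 GB image.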
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.8 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:0986