This bug was initially created as a copy of Bug #1838653. I am copying this bug because:

Description of problem:
[1] The Glance API calls Cinder to create a volume. The volume is created, but since image_size is 0 (presumably), glance_store tries to extend it incrementally, starting at 2 GB. In this case, the qcow2 has a virtual_size of 8 GB.
[2] Cinder fails to resize because that would shrink the image.

This is not reproducible with raw images, only with qcow2.

Version-Release number of selected component (if applicable):
containers_ga-cinder-api 16.0-90
containers_ga-cinder-volume 16.0-90
containers_ga-glance-api 16.0-91
containers_ga-cinder-scheduler 16.0-92

How reproducible:
All the time

Steps to Reproduce:
1. Configure glance_store with Cinder
2. Upload a qcow2 image

Actual results:
The upload fails because Cinder is asked to shrink the volume.

Expected results:
The upload should succeed.

Additional info:

[1]
~~~
2020-05-18 13:45:08.589 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.224', 52042) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985
2020-05-18 13:45:08.591 34 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: PUT /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file Accept: */* process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:45
2020-05-18 13:45:08.592 34 DEBUG glance.api.middleware.version_negotiation [-] Using url versioning process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:57
2020-05-18 13:45:08.592 34 DEBUG glance.api.middleware.version_negotiation [-] Matched version: v2 process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:69
2020-05-18 13:45:08.593 34 DEBUG glance.api.middleware.version_negotiation [-] new path /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:70
2020-05-18 13:45:08.685 34 DEBUG
glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Cinderclient connection created for user glance using URL: http://192.168.66.207:5000/v3. get_cinderclient /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:375 2020-05-18 13:45:08.686 34 DEBUG glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Creating a new volume: image_size=0 size_gb=1 type=None add /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:695 2020-05-18 13:45:08.686 34 INFO glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Since image size is zero, we will be doing resize-before-write for each GB which will be considerably slower than normal. 2020-05-18 13:45:10.332 25 DEBUG eventlet.wsgi.server [-] (25) accepted ('192.168.66.148', 34534) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.336 29 DEBUG eventlet.wsgi.server [-] (29) accepted ('192.168.66.224', 52126) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.337 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.201', 60758) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.338 25 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004603 2020-05-18 13:45:10.342 29 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004269 2020-05-18 13:45:10.344 34 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004926 2020-05-18 13:45:12.252 34 DEBUG os_brick.utils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 
32acd4356aee41cf93f7038e588f714f - default default] ==> get_connector_properties: call "{'root_helper': 'sudo glance-rootwrap /etc/glance/rootwrap.conf', 'my_ip': 'txslst02nce-controller-0', 'multipath': False, 'enforce_multipath': False, 'host': None, 'execute': None}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:12.255 29557 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.270 29557 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.271 29557 DEBUG oslo.privsep.daemon [-] privsep: reply[139807552648912]: (4, ('InitiatorName=iqn.1994-05.com.redhat:913cc589542f\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475 2020-05-18 13:45:12.272 34 DEBUG os_brick.initiator.linuxfc [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:134 2020-05-18 13:45:12.272 34 DEBUG os_brick.initiator.linuxfc [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] No Fibre Channel support detected on system. 
get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:134 2020-05-18 13:45:12.275 29557 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /sys/class/dmi/id/product_uuid execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.286 29557 DEBUG oslo_concurrency.processutils [-] CMD "cat /sys/class/dmi/id/product_uuid" returned: 0 in 0.011s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.286 29557 DEBUG oslo.privsep.daemon [-] privsep: reply[139807552648912]: (4, ('2f772d41-0a46-4678-b173-76ddab9fe358\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475 2020-05-18 13:45:12.288 34 DEBUG os_brick.utils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== get_connector_properties: return (35ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': 'txslst02nce-controller-0', 'host': 'txslst02nce-controller-0', 'multipath': False, 'initiator': 'iqn.1994-05.com.redhat:913cc589542f', 'do_local_attach': False, 'system uuid': '2f772d41-0a46-4678-b173-76ddab9fe358'} trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:12.338 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.148', 34640) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.344 34 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:12] "GET /healthcheck HTTP/1.0" 200 137 0.004465 2020-05-18 13:45:12.344 31 DEBUG eventlet.wsgi.server [-] (31) accepted ('192.168.66.224', 52218) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.345 26 DEBUG eventlet.wsgi.server [-] (26) accepted ('192.168.66.201', 60844) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.350 31 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:12] 
"GET /healthcheck HTTP/1.0" 200 137 0.004284 2020-05-18 13:45:12.351 26 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:12] "GET /healthcheck HTTP/1.0" 200 137 0.004348 2020-05-18 13:45:12.742 34 DEBUG os_brick.initiator.connector [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Factory for nfs on None factory /usr/lib/python3.6/site-packages/os_brick/initiator/connector.py:279 2020-05-18 13:45:12.744 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] ==> connect_volume: call "{'self': <os_brick.initiator.connectors.remotefs.RemoteFsConnector object at 0x7f277b5f2a90>, 'connection_properties': {'export': '192.168.76.99:/stack2_nfs_2', 'name': 'volume-9e718bf8-8fc4-42e6-be6e-7507425937d2', 'options': None, 'format': 'raw', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False}}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:12.744 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): mount execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.764 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "mount" returned: 0 in 0.020s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.766 34 DEBUG os_brick.remotefs.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Already mounted: /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6 mount 
/usr/lib/python3.6/site-packages/os_brick/remotefs/remotefs.py:100 2020-05-18 13:45:12.767 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== connect_volume: return (22ms) {'path': '/var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2'} trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:13.366 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): sudo glance-rootwrap /etc/glance/rootwrap.conf chown 42415 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:13.764 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "sudo glance-rootwrap /etc/glance/rootwrap.conf chown 42415 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.398s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 [...] 
2020-05-18 13:45:30.763 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): sudo glance-rootwrap /etc/glance/rootwrap.conf chown 65534 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:31.212 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "sudo glance-rootwrap /etc/glance/rootwrap.conf chown 65534 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.449s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:31.366 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] ==> disconnect_volume: call "{'self': <os_brick.initiator.connectors.remotefs.RemoteFsConnector object at 0x7f277b5f2a90>, 'connection_properties': {'export': '192.168.76.99:/stack2_nfs_2', 'name': 'volume-9e718bf8-8fc4-42e6-be6e-7507425937d2', 'options': None, 'format': 'raw', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False}, 'device_info': {'path': '/var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2'}, 'force': False, 'ignore_errors': False}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:31.366 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== disconnect_volume: return (0ms) None trace_logging_wrapper 
/usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:31.873 34 DEBUG glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Extending volume 9e718bf8-8fc4-42e6-be6e-7507425937d2 to 2 GB. add /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:733 2020-05-18 13:45:32.416 24 DEBUG eventlet.wsgi.server [-] (24) accepted ('192.168.66.148', 35566) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.417 28 DEBUG eventlet.wsgi.server [-] (28) accepted ('192.168.66.224', 53116) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.421 26 DEBUG eventlet.wsgi.server [-] (26) accepted ('192.168.66.201', 33612) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.422 24 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004758 2020-05-18 13:45:32.423 28 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004617 2020-05-18 13:45:32.427 26 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004280 2020-05-18 13:45:32.947 34 ERROR glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] The status of volume 9e718bf8-8fc4-42e6-be6e-7507425937d2 is unexpected: status = error_extending, expected = available. 2020-05-18 13:45:32.947 34 ERROR glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Failed to write to volume 9e718bf8-8fc4-42e6-be6e-7507425937d2.: glance_store.exceptions.StorageFull: There is not enough disk space on the image storage media. 
2020-05-18 13:45:33.103 34 ERROR glance.api.v2.image_data [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Failed to upload image data due to HTTP error: webob.exc.HTTPRequestEntityTooLarge: Image storage media is full: There is not enough disk space on the image storage media. 2020-05-18 13:45:34.026 34 INFO eventlet.wsgi.server [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] 192.168.66.224 - - [18/May/2020 13:45:34] "PUT /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file HTTP/1.1" 413 444 25.434945 ~~~ [2] ~~~ 2020-05-18 13:45:32.196 81 INFO cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Extending volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2. 2020-05-18 13:45:32.197 81 DEBUG cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Checking file for resize _resize_image_file /usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py:649 2020-05-18 13:45:32.198 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C qemu-img info /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:32.353 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] CMD "/usr/bin/python3 -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C qemu-img info /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.155s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:32.356 81 INFO cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Resizing file to 2G 2020-05-18 13:45:32.357 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Running cmd (subprocess): qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:32.399 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] CMD "qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G" returned: 1 in 0.042s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:32.401 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] 'qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G' failed. Not Retrying. 
execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:457 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Extend volume failed.: cinder.exception.VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Failed to extend volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2, Error msg: Unexpected error while running command. Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G Exit code: 1 Stdout: '' Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n". 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Traceback (most recent call last): 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 801, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager self._resize_image_file(path, new_size) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 727, in trace_method_logging_wrapper 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return f(*args, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 655, in _resize_image_file 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager run_as_root=self._execute_as_root) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 334, in resize_image 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager utils.execute(*cmd, 
run_as_root=run_as_root) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 126, in execute 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return processutils.execute(*cmd, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 424, in execute 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager cmd=sanitized_cmd) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Exit code: 1 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stdout: '' 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. 
Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n" 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager During handling of the above exception, another exception occurred: 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Traceback (most recent call last): 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/manager.py", line 2733, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager self.driver.extend_volume(volume, new_size) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 727, in trace_method_logging_wrapper 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return f(*args, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 807, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager raise exception.VolumeBackendAPIException(data=exception_msg) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager cinder.exception.VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Failed to extend volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2, Error msg: Unexpected error while running command. 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Exit code: 1 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stdout: '' 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. 
Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n". 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager ~~~
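The failure reduces to size arithmetic: the extend request carries the volume size Glance asked for (2 GB), while the qcow2 already written onto the backing file reports a larger virtual size (8 GB), so the `qemu-img resize` that Cinder issues is actually a shrink, which qemu-img refuses without `--shrink`. A minimal sketch of a guard against issuing a shrinking resize (hypothetical helper, not the actual glance_store or Cinder code):

```python
GiB = 1024 ** 3

def resize_target(requested_gb, current_virtual_size):
    """Return the size in bytes to pass to `qemu-img resize`, never
    going below the image's current virtual size.

    Hypothetical guard for illustration only; the real glance_store
    and Cinder code paths differ.
    """
    requested = requested_gb * GiB
    if requested < current_virtual_size:
        # Shrinking a qcow2 would require --shrink and destroy data
        # beyond the new end; keep the current size instead.
        return current_virtual_size
    return requested

# The broken case from the logs: an "extend to 2G" request against a
# qcow2 whose virtual size is already 8 GiB.
print(resize_target(2, 8 * GiB) // GiB)  # 8: no shrink is attempted
```

With such a guard the 2 GB extend request would be a no-op for the 8 GB qcow2 instead of an `error_extending` failure.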
*** Bug 2009278 has been marked as a duplicate of this bug. ***
Verified on: python3-glance-store-1.0.2-2.20220111043148.el8ost.noarch

On a deployment with Glance over Cinder over NetApp NFS. I left Cinder's default of raw volumes and used this large image:

(overcloud) [stack@undercloud-0 ~]$ qemu-img info windows_server_2012_r2_standard_eval_kvm_20170321.qcow2
image: windows_server_2012_r2_standard_eval_kvm_20170321.qcow2
file format: qcow2
virtual size: 12.2 GiB (13096714240 bytes)
disk size: 11.2 GiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    compression type: zlib
    refcount bits: 16

First attempt: let's upload it to Glance as a qcow2.

(overcloud) [stack@undercloud-0 ~]$ glance image-create --name Windows.qcow2 --disk-format qcow2 --container-format bare --file windows_server_2012_r2_standard_eval_kvm_20170321.qcow2 --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | a05ead3a04ae663da77eee5d2cb2fa73                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-02-15T09:21:40Z                                                             |
| direct_url       | cinder://f8eee30c-a120-4115-af35-22b984f9fec9                                    |
| disk_format      | qcow2                                                                            |
| id               | 3d8b315d-24b4-4ad9-90e3-b40031a07658                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | Windows.qcow2                                                                    |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 9bd12698b1cb46e09243fd5704e14292e7393c84a4de178f536caaf21b9222c94d5080cbec69eafe |
|                  | 69fd7a7694fe14d792425c5fbb89a89727d2d2615e62890a                                 |
| os_hidden        | False                                                                            |
| owner            | 9f5853b196334a2cba5b83259839ee4e                                                 |
| protected        | False                                                                            |
| size             | 12001017856                                                                      |
| status           | active                                                                           |
| stores           | default_backend                                                                  |
| tags             | []                                                                               |
| updated_at       | 2022-02-15T09:25:14Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+

Great, the image uploaded successfully. Because its 11.2 GiB disk size is far larger than the default 1 GiB initial volume size, we expected the backing Cinder volume to be extended/resized, and the log below confirms that it was.

The Cinder volume log reports several resize cycles:

[root@controller-2 cinder]# grep -irn resize cinder-volume.log
8:2022-02-15 09:22:28.623 49 DEBUG cinder.volume.drivers.netapp.dataontap.nfs_base [req-fb25f512-701c-4c9b-9d86-241ce7f51bc3 b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] Checking file for resize _resize_image_file /usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py:651
72:2022-02-15 09:22:33.737 49 DEBUG oslo_concurrency.processutils [req-fb25f512-701c-4c9b-9d86-241ce7f51bc3 b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] Running cmd (subprocess): qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-f8eee30c-a120-4115-af35-22b984f9fec9 2G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:379
73:2022-02-15 09:22:33.758 49 DEBUG oslo_concurrency.processutils [req-fb25f512-701c-4c9b-9d86-241ce7f51bc3 b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] CMD "qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-f8eee30c-a120-4115-af35-22b984f9fec9 2G" returned: 0 in 0.021s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:416
...
240:2022-02-15 09:24:36.416 49 DEBUG oslo_concurrency.processutils [req-d92ba8b2-45a1-45c7-a5cb-5999138ad0e7 b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] CMD "qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-f8eee30c-a120-4115-af35-22b984f9fec9 10G" returned: 0 in 0.018s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:416

We can see the backing Cinder volume:

(overcloud) [stack@undercloud-0 ~]$ cinder list --all
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Tenant ID                        | Status    | Name                                       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| f8eee30c-a120-4115-af35-22b984f9fec9 | 32b79a86720d4031a4d559e3735f8957 | available | image-3d8b315d-24b4-4ad9-90e3-b40031a07658 | 12   | tripleo     | false    |             |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ cinder show f8eee30c-a120-4115-af35-22b984f9fec9
+--------------------------------+--------------------------------------------------------+
| Property                       | Value                                                  |
+--------------------------------+--------------------------------------------------------+
| attached_servers               | []                                                     |
| attachment_ids                 | []                                                     |
| availability_zone              | nova                                                   |
| bootable                       | false                                                  |
| consistencygroup_id            | None                                                   |
| created_at                     | 2022-02-15T09:21:41.000000                             |
| description                    | None                                                   |
| encrypted                      | False                                                  |
| id                             | f8eee30c-a120-4115-af35-22b984f9fec9                   |
| metadata                       | glance_image_id : 3d8b315d-24b4-4ad9-90e3-b40031a07658 |
|                                | image_owner : 9f5853b196334a2cba5b83259839ee4e         |
|                                | image_size : 12001017856                               |
|                                | readonly : True                                        |
| migration_status               | None                                                   |
| multiattach                    | False                                                  |
| name                           | image-3d8b315d-24b4-4ad9-90e3-b40031a07658             |
| os-vol-host-attr:host          | hostgroup@tripleo_netapp_nfs#10.46.29.88:/cinder_nfs   |
| os-vol-mig-status-attr:migstat | None                                                   |
| os-vol-mig-status-attr:name_id | None                                                   |
| os-vol-tenant-attr:tenant_id   | 32b79a86720d4031a4d559e3735f8957                       |
| readonly                       | True                                                   |
| replication_status             | None                                                   |
| size                           | 12                                                     | -> the volume was extended from its initial 1 GiB to its final 12 GiB size
| snapshot_id                    | None                                                   |
| source_volid                   | None                                                   |
| status                         | available                                              |
| updated_at                     | 2022-02-15T09:25:14.000000                             |
| user_id                        | b51a3e29083d4c2c80ce923e2a4bf928                       |
| volume_type                    | tripleo                                                |
+--------------------------------+--------------------------------------------------------+

I know the bug is about qcow2, but since I already have this large source image, let's retest with a raw source image just for fun.

qemu-img convert -f qcow2 -O raw windows_server_2012_r2_standard_eval_kvm_20170321.qcow2 windows_server_2012_r2_standard_eval_kvm_20170321.raw

Upload to Glance:

(overcloud) [stack@undercloud-0 ~]$ glance image-create --name Windows.raw --disk-format raw --container-format bare --file windows_server_2012_r2_standard_eval_kvm_20170321.raw --progress
[=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ff159818151720930f5790aa65a36f06                                                 |
| container_format | bare                                                                             |
| created_at       | 2022-02-15T09:39:52Z                                                             |
| direct_url       | cinder://cfb45e30-bc87-499b-b71c-0e4a9e7e7cb6                                    |
| disk_format      | raw                                                                              |
| id               | 14db9402-e71f-4cfb-9d9e-f43335552175                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | Windows.raw                                                                      |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | daff8b086e029322a410345f104e2cbaafd61e248827400d123c3dfe983f5890b5633d9dcc06eba5 |
|                  | 9fc74c3eaacc047148c5c31c594a2db55c8712efae244555                                 |
| os_hidden        | False                                                                            |
| owner            | 9f5853b196334a2cba5b83259839ee4e                                                 |
| protected        | False                                                                            |
| size             | 13096714240                                                                      |
| status           | active                                                                           |
| stores           | default_backend                                                                  |
| tags             | []                                                                               |
| updated_at       | 2022-02-15T09:43:40Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+

Again, in the Cinder volume log we see the extend/resize commands as expected.

[root@controller-2 cinder]# grep -irn resize cinder-volume.log
53:2022-02-15 09:40:39.237 49 DEBUG cinder.volume.drivers.netapp.dataontap.nfs_base [req-2ce656bc-edbf-47c5-978c-dbe080980cdf b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] Checking file for resize _resize_image_file /usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py:651
57:2022-02-15 09:40:44.372 49 DEBUG oslo_concurrency.processutils [req-2ce656bc-edbf-47c5-978c-dbe080980cdf b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] Running cmd (subprocess): qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-cfb45e30-bc87-499b-b71c-0e4a9e7e7cb6 2G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:379
58:2022-02-15 09:40:44.407 49 DEBUG oslo_concurrency.processutils [req-2ce656bc-edbf-47c5-978c-dbe080980cdf b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] CMD "qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-cfb45e30-bc87-499b-b71c-0e4a9e7e7cb6 2G" returned: 0 in 0.035s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:416
...
269:2022-02-15 09:43:17.467 49 DEBUG oslo_concurrency.processutils [req-281704b6-716c-4e57-9794-279748ee26db b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] CMD "qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-cfb45e30-bc87-499b-b71c-0e4a9e7e7cb6 12G" returned: 0 in 0.026s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:416
283:2022-02-15 09:43:32.178 49 DEBUG cinder.volume.drivers.netapp.dataontap.nfs_base [req-e1564fa8-8701-44ce-909f-3a8dde1c223b b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] Checking file for resize _resize_image_file /usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py:651
287:2022-02-15 09:43:32.267 49 DEBUG oslo_concurrency.processutils [req-e1564fa8-8701-44ce-909f-3a8dde1c223b b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] Running cmd (subprocess): qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-cfb45e30-bc87-499b-b71c-0e4a9e7e7cb6 13G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:379
288:2022-02-15 09:43:32.286 49 DEBUG oslo_concurrency.processutils [req-e1564fa8-8701-44ce-909f-3a8dde1c223b b51a3e29083d4c2c80ce923e2a4bf928 32b79a86720d4031a4d559e3735f8957 - default default] CMD "qemu-img resize -f raw /var/lib/cinder/mnt/de7de858b94dc496d8c1a0c52a2c79a8/volume-cfb45e30-bc87-499b-b71c-0e4a9e7e7cb6 13G" returned: 0 in 0.019s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:416

As shown above, extend is no longer broken and works with both qcow2 and raw, so this is good to verify.
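The resize cycles grepped out of cinder-volume.log above follow the resize-before-write pattern: with the image size unknown, the store grows the volume one GiB ahead of the data as it streams in, then sizes it up to the full image at the end. A rough sketch of the sequence of resize targets (an assumed simplification for illustration, not the actual driver code):

```python
GiB = 1024 ** 3

def extend_steps(image_size_bytes, start_gb=1):
    """Yield the successive `qemu-img resize` targets (in GiB) that a
    resize-before-write upload issues for an image of unknown size,
    starting from a start_gb-sized volume.

    Assumed simplification of the flow seen in cinder-volume.log.
    """
    size_gb = start_gb
    written = 0
    while written < image_size_bytes:
        # Another chunk of image data arrives...
        written = min(written + GiB, image_size_bytes)
        if written > size_gb * GiB:
            # ...and the volume is grown one GiB to make room for it.
            size_gb += 1
            yield size_gb
    # Final resize up to the full image size, rounded up to whole GiB.
    final_gb = -(-image_size_bytes // GiB)
    if final_gb > size_gb:
        yield final_gb

# The 12.2 GiB raw image (13096714240 bytes) yields targets 2..13 GiB,
# matching the "resize ... 2G" through "resize ... 13G" log lines.
print(list(extend_steps(13096714240)))
```

This is why the log shows a long run of 1 GiB increments rather than a single extend: the store only learns the real image size once the stream ends.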
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.2), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:1001