Description of problem:

[1] The Glance API calls Cinder to create a volume. The volume is created, but since image_size is 0 (presumably), glance_store extends it starting at 2 GB. In this case, the qcow2 has a virtual_size of 8 GB.
[2] Cinder fails to resize because that would shrink the image.

This is not reproducible with raw images, only qcow2.

Version-Release number of selected component (if applicable):
containers_ga-cinder-api 16.0-90
containers_ga-cinder-volume 16.0-90
containers_ga-glance-api 16.0-91
containers_ga-cinder-scheduler 16.0-92

How reproducible:
All the time

Steps to Reproduce:
1. Configure glance_store with Cinder
2. Upload a qcow2 image

Actual results:
Doesn't work, because glance_store tries to shrink the volume

Expected results:
The upload should work

Additional info:

[1]
~~~
2020-05-18 13:45:08.589 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.224', 52042) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:08.591 34 DEBUG glance.api.middleware.version_negotiation [-] Determining version of request: PUT /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file Accept: */* process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:45 2020-05-18 13:45:08.592 34 DEBUG glance.api.middleware.version_negotiation [-] Using url versioning process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:57 2020-05-18 13:45:08.592 34 DEBUG glance.api.middleware.version_negotiation [-] Matched version: v2 process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:69 2020-05-18 13:45:08.593 34 DEBUG glance.api.middleware.version_negotiation [-] new path /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file process_request /usr/lib/python3.6/site-packages/glance/api/middleware/version_negotiation.py:70 2020-05-18 13:45:08.685 34 DEBUG glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf
32acd4356aee41cf93f7038e588f714f - default default] Cinderclient connection created for user glance using URL: http://192.168.66.207:5000/v3. get_cinderclient /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:375 2020-05-18 13:45:08.686 34 DEBUG glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Creating a new volume: image_size=0 size_gb=1 type=None add /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:695 2020-05-18 13:45:08.686 34 INFO glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Since image size is zero, we will be doing resize-before-write for each GB which will be considerably slower than normal. 2020-05-18 13:45:10.332 25 DEBUG eventlet.wsgi.server [-] (25) accepted ('192.168.66.148', 34534) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.336 29 DEBUG eventlet.wsgi.server [-] (29) accepted ('192.168.66.224', 52126) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.337 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.201', 60758) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:10.338 25 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004603 2020-05-18 13:45:10.342 29 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004269 2020-05-18 13:45:10.344 34 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:10] "GET /healthcheck HTTP/1.0" 200 137 0.004926 2020-05-18 13:45:12.252 34 DEBUG os_brick.utils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] ==> get_connector_properties: call "{'root_helper': 'sudo 
glance-rootwrap /etc/glance/rootwrap.conf', 'my_ip': 'txslst02nce-controller-0', 'multipath': False, 'enforce_multipath': False, 'host': None, 'execute': None}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:12.255 29557 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.270 29557 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.271 29557 DEBUG oslo.privsep.daemon [-] privsep: reply[139807552648912]: (4, ('InitiatorName=iqn.1994-05.com.redhat:913cc589542f\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475 2020-05-18 13:45:12.272 34 DEBUG os_brick.initiator.linuxfc [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] No Fibre Channel support detected on system. get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:134 2020-05-18 13:45:12.272 34 DEBUG os_brick.initiator.linuxfc [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] No Fibre Channel support detected on system. 
get_fc_hbas /usr/lib/python3.6/site-packages/os_brick/initiator/linuxfc.py:134 2020-05-18 13:45:12.275 29557 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /sys/class/dmi/id/product_uuid execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.286 29557 DEBUG oslo_concurrency.processutils [-] CMD "cat /sys/class/dmi/id/product_uuid" returned: 0 in 0.011s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.286 29557 DEBUG oslo.privsep.daemon [-] privsep: reply[139807552648912]: (4, ('2f772d41-0a46-4678-b173-76ddab9fe358\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:475 2020-05-18 13:45:12.288 34 DEBUG os_brick.utils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== get_connector_properties: return (35ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': 'txslst02nce-controller-0', 'host': 'txslst02nce-controller-0', 'multipath': False, 'initiator': 'iqn.1994-05.com.redhat:913cc589542f', 'do_local_attach': False, 'system uuid': '2f772d41-0a46-4678-b173-76ddab9fe358'} trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:12.338 34 DEBUG eventlet.wsgi.server [-] (34) accepted ('192.168.66.148', 34640) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.344 34 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:12] "GET /healthcheck HTTP/1.0" 200 137 0.004465 2020-05-18 13:45:12.344 31 DEBUG eventlet.wsgi.server [-] (31) accepted ('192.168.66.224', 52218) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.345 26 DEBUG eventlet.wsgi.server [-] (26) accepted ('192.168.66.201', 60844) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:12.350 31 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:12] 
"GET /healthcheck HTTP/1.0" 200 137 0.004284 2020-05-18 13:45:12.351 26 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:12] "GET /healthcheck HTTP/1.0" 200 137 0.004348 2020-05-18 13:45:12.742 34 DEBUG os_brick.initiator.connector [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Factory for nfs on None factory /usr/lib/python3.6/site-packages/os_brick/initiator/connector.py:279 2020-05-18 13:45:12.744 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] ==> connect_volume: call "{'self': <os_brick.initiator.connectors.remotefs.RemoteFsConnector object at 0x7f277b5f2a90>, 'connection_properties': {'export': '192.168.76.99:/stack2_nfs_2', 'name': 'volume-9e718bf8-8fc4-42e6-be6e-7507425937d2', 'options': None, 'format': 'raw', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False}}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:12.744 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): mount execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:12.764 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "mount" returned: 0 in 0.020s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:12.766 34 DEBUG os_brick.remotefs.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Already mounted: /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6 mount 
/usr/lib/python3.6/site-packages/os_brick/remotefs/remotefs.py:100 2020-05-18 13:45:12.767 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== connect_volume: return (22ms) {'path': '/var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2'} trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:13.366 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): sudo glance-rootwrap /etc/glance/rootwrap.conf chown 42415 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:13.764 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "sudo glance-rootwrap /etc/glance/rootwrap.conf chown 42415 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.398s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 [...] 
2020-05-18 13:45:30.763 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Running cmd (subprocess): sudo glance-rootwrap /etc/glance/rootwrap.conf chown 65534 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:31.212 34 DEBUG oslo_concurrency.processutils [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] CMD "sudo glance-rootwrap /etc/glance/rootwrap.conf chown 65534 /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.449s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:31.366 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] ==> disconnect_volume: call "{'self': <os_brick.initiator.connectors.remotefs.RemoteFsConnector object at 0x7f277b5f2a90>, 'connection_properties': {'export': '192.168.76.99:/stack2_nfs_2', 'name': 'volume-9e718bf8-8fc4-42e6-be6e-7507425937d2', 'options': None, 'format': 'raw', 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False}, 'device_info': {'path': '/var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2'}, 'force': False, 'ignore_errors': False}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146 2020-05-18 13:45:31.366 34 DEBUG os_brick.initiator.connectors.remotefs [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] <== disconnect_volume: return (0ms) None trace_logging_wrapper 
/usr/lib/python3.6/site-packages/os_brick/utils.py:170 2020-05-18 13:45:31.873 34 DEBUG glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Extending volume 9e718bf8-8fc4-42e6-be6e-7507425937d2 to 2 GB. add /usr/lib/python3.6/site-packages/glance_store/_drivers/cinder.py:733 2020-05-18 13:45:32.416 24 DEBUG eventlet.wsgi.server [-] (24) accepted ('192.168.66.148', 35566) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.417 28 DEBUG eventlet.wsgi.server [-] (28) accepted ('192.168.66.224', 53116) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.421 26 DEBUG eventlet.wsgi.server [-] (26) accepted ('192.168.66.201', 33612) server /usr/lib/python3.6/site-packages/eventlet/wsgi.py:985 2020-05-18 13:45:32.422 24 INFO eventlet.wsgi.server [-] 192.168.66.148 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004758 2020-05-18 13:45:32.423 28 INFO eventlet.wsgi.server [-] 192.168.66.224 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004617 2020-05-18 13:45:32.427 26 INFO eventlet.wsgi.server [-] 192.168.66.201 - - [18/May/2020 13:45:32] "GET /healthcheck HTTP/1.0" 200 137 0.004280 2020-05-18 13:45:32.947 34 ERROR glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] The status of volume 9e718bf8-8fc4-42e6-be6e-7507425937d2 is unexpected: status = error_extending, expected = available. 2020-05-18 13:45:32.947 34 ERROR glance_store._drivers.cinder [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Failed to write to volume 9e718bf8-8fc4-42e6-be6e-7507425937d2.: glance_store.exceptions.StorageFull: There is not enough disk space on the image storage media. 
2020-05-18 13:45:33.103 34 ERROR glance.api.v2.image_data [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] Failed to upload image data due to HTTP error: webob.exc.HTTPRequestEntityTooLarge: Image storage media is full: There is not enough disk space on the image storage media. 2020-05-18 13:45:34.026 34 INFO eventlet.wsgi.server [req-e7298df7-ba7e-41ca-9e00-30c74ca489b3 2119753b12fb46ef96dfe741de358ebf 32acd4356aee41cf93f7038e588f714f - default default] 192.168.66.224 - - [18/May/2020 13:45:34] "PUT /v2/images/674edd04-4602-4ee2-b9e8-afaf691cba16/file HTTP/1.1" 413 444 25.434945 ~~~ [2] ~~~ 2020-05-18 13:45:32.196 81 INFO cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Extending volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2. 2020-05-18 13:45:32.197 81 DEBUG cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Checking file for resize _resize_image_file /usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py:649 2020-05-18 13:45:32.198 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C qemu-img info /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:32.353 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] CMD "/usr/bin/python3 -m 
oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C qemu-img info /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2" returned: 0 in 0.155s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:32.356 81 INFO cinder.volume.drivers.netapp.dataontap.nfs_base [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Resizing file to 2G 2020-05-18 13:45:32.357 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Running cmd (subprocess): qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372 2020-05-18 13:45:32.399 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] CMD "qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G" returned: 1 in 0.042s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409 2020-05-18 13:45:32.401 81 DEBUG oslo_concurrency.processutils [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] 'qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G' failed. Not Retrying. 
execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:457 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager [req-9f08f128-80db-4dcf-a5f2-e034244c154a fe01d20dfa1d49cd9334f378806db021 73dbb175f8314acaa69a33236ba68e0f - default default] Extend volume failed.: cinder.exception.VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Failed to extend volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2, Error msg: Unexpected error while running command. Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G Exit code: 1 Stdout: '' Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n". 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Traceback (most recent call last): 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 801, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager self._resize_image_file(path, new_size) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 727, in trace_method_logging_wrapper 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return f(*args, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 655, in _resize_image_file 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager run_as_root=self._execute_as_root) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/image/image_utils.py", line 334, in resize_image 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager utils.execute(*cmd, 
run_as_root=run_as_root) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 126, in execute 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return processutils.execute(*cmd, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py", line 424, in execute 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager cmd=sanitized_cmd) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Exit code: 1 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stdout: '' 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. 
Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n" 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager During handling of the above exception, another exception occurred: 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Traceback (most recent call last): 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/manager.py", line 2733, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager self.driver.extend_volume(volume, new_size) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 727, in trace_method_logging_wrapper 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager return f(*args, **kwargs) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 807, in extend_volume 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager raise exception.VolumeBackendAPIException(data=exception_msg) 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager cinder.exception.VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Failed to extend volume volume-9e718bf8-8fc4-42e6-be6e-7507425937d2, Error msg: Unexpected error while running command. 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Command: qemu-img resize /var/lib/cinder/mnt/04e1bdb760091e51531be717730c8ab6/volume-9e718bf8-8fc4-42e6-be6e-7507425937d2 2G 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Exit code: 1 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stdout: '' 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager Stderr: "qemu-img: warning: Shrinking an image will delete all data beyond the shrunken image's end. 
Before performing such an operation, make sure there is no important data there.\nqemu-img: Use the --shrink option to perform a shrink operation.\n". 2020-05-18 13:45:32.404 81 ERROR cinder.volume.manager ~~~
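The interaction shown in the two logs above can be sketched with a toy model (plain Python with names of our own invention; this is not the actual glance_store or cinder code): because image_size is 0, glance_store does resize-before-write, growing the volume 1 GB at a time; but on an NFS backend with qcow2 volumes, the extend path ends up calling qemu-img resize against a file whose virtual size is already 8 GB, so "extending" to 2 GB is a shrink and is refused.

```python
GiB = 1024 ** 3

def qemu_img_resize(virtual_size, requested_size, shrink=False):
    """Toy stand-in for `qemu-img resize`: it refuses to shrink an image
    unless --shrink is given, mirroring the stderr seen in the cinder log."""
    if requested_size < virtual_size and not shrink:
        raise RuntimeError(
            "qemu-img: Use the --shrink option to perform a shrink operation.")
    return requested_size

# glance_store sees image_size=0, so it creates a 1 GiB volume and grows it
# in 1 GiB steps ("resize-before-write") as the image data streams in.
volume_size = 1 * GiB
next_extend = 2 * GiB

# The qcow2 file on the NFS share already reports an 8 GiB virtual size,
# so the next 2 GiB "extend" is actually a shrink from qemu-img's view.
qcow2_virtual_size = 8 * GiB

try:
    qemu_img_resize(qcow2_virtual_size, next_extend)
except RuntimeError as exc:
    # cinder marks the volume error_extending; glance reports StorageFull.
    print("error_extending:", exc)
```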
@Rajat: has a bug already been opened to track this downstream? I think this should be fairly easy to backport once it lands upstream.
I do not think we have anything in 16.1 that fixes this issue. I'd suggest going with the workaround for now - this should probably be documented somewhere. I'll talk to Rajat and Abhishek and see if that is something that should happen in the next cycle.
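For reference, the workaround discussed later in this thread is to keep Cinder's NFS backend creating raw volumes, i.e. leave nfs_qcow2_volumes at its default of false. An illustrative cinder.conf fragment (the backend section name is just an example):

```ini
[nfs_backend]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# Workaround: do not create qcow2 volumes on NFS (false is the default).
# With qcow2 volumes, qemu-img resize sees the image's virtual size and
# refuses the glance_store "extend" as a shrink.
nfs_qcow2_volumes = false
```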
This was discussed during the PTG and will require further discussion. Can we keep this open, even though it will probably not be fixed before the next version?
Hi All, this issue is fixed with the patch [1], with one exception: Cinder shouldn't be creating qcow2 volumes (i.e., the nfs_qcow2_volumes parameter shouldn't be set in cinder.conf in the case of NFS). This temporary exception needs to be fixed on the glance_store side, which I'm currently working on. [1] https://review.opendev.org/c/openstack/cinder/+/761152
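The core of the fix can be sketched as a simple guard (a minimal illustration of the idea, with names of our own invention, not the actual code from the patch above): never ask qemu-img to shrink; if the file's virtual size already covers the requested size, extending is a no-op.

```python
GiB = 1024 ** 3

def safe_extend(current_virtual_size, requested_size):
    """Toy guard in the spirit of the fix: never request a shrink.

    If the qcow2 file's virtual size already covers the requested size,
    the extend is a no-op; otherwise grow to the requested size.
    """
    if requested_size <= current_virtual_size:
        return current_virtual_size  # already large enough; skip the resize
    return requested_size

print(safe_extend(8 * GiB, 2 * GiB) // GiB)   # stays at 8, no shrink attempted
print(safe_extend(8 * GiB, 12 * GiB) // GiB)  # grows to 12
```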
Both glance_store patches were merged downstream. The glance patch has not been merged, but that should only prevent the glance tests from running; it does not affect the feature itself.
Verified on:
openstack-cinder-18.2.1-0.20220605050357.9a473fd.el9ost.noarch

FYI: I cloned the head of this verification from bz1947283, as it's similar.

On a deployment with Glance over Cinder, with Cinder using a generic NFS backend, let's upload a larger-than-1G qcow2 image. We keep Cinder's default nfs_qcow2_volumes=false.

(overcloud) [stack@undercloud-0 ~]$ qemu-img info windows_server_2012_r2_standard_eval_kvm_20170321.qcow2
image: windows_server_2012_r2_standard_eval_kvm_20170321.qcow2
file format: qcow2
virtual size: 12.2 GiB (13096714240 bytes)
disk size: 11.2 GiB
cluster_size: 65536
Format specific information:
    compat: 0.10
    compression type: zlib
    refcount bits: 16

(overcloud) [stack@undercloud-0 ~]$ glance image-create --disk-format qcow2 --container-format bare --file windows_server_2012_r2_standard_eval_kvm_20170321.qcow2 --name windows_server_2012_r2_standard_eval_kvm_20170321.qcow2
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | a05ead3a04ae663da77eee5d2cb2fa73 |
| container_format | bare |
| created_at | 2022-06-21T12:57:22Z |
| direct_url | cinder://default_backend/89aa21c9-238a-4996-8da3-1dea959af9f5 |
| disk_format | qcow2 |
| id | 0485da6b-aa86-4cff-b6f4-5f0f6c67c8f4 |
| min_disk | 0 |
| min_ram | 0 |
| name | windows_server_2012_r2_standard_eval_kvm_20170321.qcow2 |
| os_hash_algo | sha512 |
| os_hash_value | 9bd12698b1cb46e09243fd5704e14292e7393c84a4de178f536caaf21b9222c94d5080cbec69eafe |
| | 69fd7a7694fe14d792425c5fbb89a89727d2d2615e62890a |
| os_hidden | False |
| owner | effcf85b11f94d7eb99578caf1467e9b |
| protected | False |
| size | 12001017856 |
| status | active |
| stores | default_backend |
| tags | [] |
| updated_at | 2022-06-21T13:01:11Z |
| virtual_size | 13096714240 |
| visibility | shared |
+------------------+----------------------------------------------------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list --all-tenants
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID | Tenant ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 89aa21c9-238a-4996-8da3-1dea959af9f5 | afdbf0760615482492f97721092f431c | available | image-0485da6b-aa86-4cff-b6f4-5f0f6c67c8f4 | 12 | tripleo | false | |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

(overcloud) [stack@undercloud-0 ~]$ cinder show 89aa21c9-238a-4996-8da3-1dea959af9f5
+--------------------------------+--------------------------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------------------------+
| attached_servers | [] |
| attachment_ids | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2022-06-21T12:57:25.000000 |
| description | None |
| encrypted | False |
| id | 89aa21c9-238a-4996-8da3-1dea959af9f5 |
| metadata | glance_image_id : 0485da6b-aa86-4cff-b6f4-5f0f6c67c8f4 |
| | image_owner : effcf85b11f94d7eb99578caf1467e9b |
| | image_size : 12001017856 |
| | readonly : True |
| migration_status | None |
| multiattach | False |
| name | image-0485da6b-aa86-4cff-b6f4-5f0f6c67c8f4 |
| os-vol-host-attr:host | hostgroup@tripleo_nfs#tripleo_nfs |--> saved on NFS backend.
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | afdbf0760615482492f97721092f431c |
| readonly | True |
| replication_status | None |
| size | 12 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2022-06-21T13:01:10.000000 |
| user_id | 326bade2d20644f5a76e895b98d7f109 |
| volume_type | tripleo |
+--------------------------------+--------------------------------------------------------+

As shown above, we successfully uploaded a large 12G image to Glance, where Glance uses Cinder's generic NFS driver as its backend. We can see the extend operations reported in the c-vol logs:

- - -] Extend volume completed successfully.
/var/log/containers/cinder/cinder-volume.log:3056:2022-06-21 13:01:04.652 11 INFO cinder.volume.drivers.nfs [req-bccf9cfe-c993-4860-a935-0b1b114289ec 326bade2d20644f5a76e895b98d7f109 afdbf0760615482492f97721092f431c - - -] Extending volume 89aa21c9-238a-4996-8da3-1dea959af9f5.
/var/log/containers/cinder/cinder-volume.log:3064:2022-06-21 13:01:04.877 11 INFO cinder.volume.manager [req-bccf9cfe-c993-4860-a935-0b1b114289ec 326bade2d20644f5a76e895b98d7f109 afdbf0760615482492f97721092f431c - - -] Extend volume completed successfully.
Now let's change nfs_qcow2_volumes=true and try to re-upload the same large image:

(overcloud) [stack@undercloud-0 ~]$ glance image-create --disk-format qcow2 --container-format bare --file windows_server_2012_r2_standard_eval_kvm_20170321.qcow2 --name windows_server_2012_r2_standard_eval_kvm_20170321.qcow2_NFS_QCOW2_TRUE --progress
[=============================>] 100%
+------------------+------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2022-06-21T13:56:23Z |
| disk_format | qcow2 |
| id | da20303d-a66a-4138-a954-6520b2b52229 |
| min_disk | 0 |
| min_ram | 0 |
| name | windows_server_2012_r2_standard_eval_kvm_20170321.qcow2_NFS_QCOW2_TRUE |
| os_hash_algo | None |
| os_hash_value | None |
| os_hidden | False |
| owner | effcf85b11f94d7eb99578caf1467e9b |
| protected | False |
| size | None |
| status | queued | -> never reaches active state.
| tags | [] |
| updated_at | 2022-06-21T13:56:23Z |
| virtual_size | Not available |
| visibility | shared |
+------------------+------------------------------------------------------------------------+
HTTP 500 Internal Server Error: The server has either erred or is incapable of performing the requested operation.
As expected, with nfs_qcow2_volumes=true we fail to upload the image to Glance.

(overcloud) [stack@undercloud-0 ~]$ cinder list --all-tenants
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| ID | Tenant ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+
| 89aa21c9-238a-4996-8da3-1dea959af9f5 | afdbf0760615482492f97721092f431c | available | image-0485da6b-aa86-4cff-b6f4-5f0f6c67c8f4 | 12 | tripleo | false | |
+--------------------------------------+----------------------------------+-----------+--------------------------------------------+------+-------------+----------+-------------+

Only the first image/attempt shows up in Cinder.

Looks good to verify: a large qcow2 image was uploaded to Glance over Cinder over generic NFS, and with the default nfs_qcow2_volumes=false, incremental volume extend now works.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 17.0 (Wallaby)), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2022:6543