+++ This bug was initially created as a clone of Bug #1504670 +++
>> Description of problem:
Creating a cinder backup manually works without any problem:
cinder --os-tenant-name sandbox create --display_name volume_kris 1
cinder --os-tenant-name sandbox backup-create --display-name back_kris --force 5a06927c-1892-45b8-b8a3-3981dafea875
When we run the scripts below, we run into problems:
Creating 10 volumes:
#!/bin/bash
# Create 10 test volumes of 1 GB each ({0..9} is a bash brace expansion)
for var in {0..9}
do
    cinder --os-tenant-name sandbox create --display_name volume_kris_$var 1
done
Creating 10 volume backups:
#!/bin/sh
# Create one backup per test volume
i=0
for var in $(cinder --os-tenant-name sandbox list | grep volume_kris_ | awk '{print $2}')
do
    cinder --os-tenant-name sandbox backup-create --display-name back_kris_$i --force $var
    i=$((i+1))
done
>> Version-Release number of selected component (if applicable):
OpenStack 9
openstack-cinder-8.1.1-4.el7ost.noarch Sat Mar 18 03:48:42 2017
python-cinder-8.1.1-4.el7ost.noarch Sat Mar 18 03:42:38 2017
python-cinderclient-1.6.0-1.el7ost.noarch Sat Mar 18 03:37:14 2017
>> How reproducible:
Re-run the above scripts.
Note: not every cinder backup creation fails; only some of them do.
>> Actual results:
A few of the cinder backups are not created and remain in the "error" or "creating" state.
>> Expected results:
After running the scripts, all cinder backups are created successfully.
>> Additional info:
After we increased the client and server timeouts on the mysql listener (HAProxy) as below, we got better results.
listen mysql
  timeout client 180m
  timeout server 180m
This seems to be caused by Cinder performing the backup data compression (a CPU-intensive operation) directly in the greenthread, which prevents switching to other greenthreads.
Given enough greenthreads doing compression, they end up running mostly serially and starve the other greenthreads.
The solution would be to run the compression in a native thread so it does not interfere with greenthread switching.
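For illustration only, a minimal sketch of that approach using eventlet's thread pool (this is not the actual Cinder patch, and the helper names here are hypothetical):

import zlib

import eventlet
from eventlet import tpool

eventlet.monkey_patch()

def _compress_chunk(data):
    # zlib.compress is CPU-bound C code; called directly from a greenthread
    # it blocks the eventlet hub until it returns.
    return zlib.compress(data, 9)

def compress_without_blocking(data):
    # tpool.execute() runs the callable in a native OS thread and only
    # resumes this greenthread once the result is ready, so other
    # greenthreads keep getting scheduled in the meantime.
    return tpool.execute(_compress_chunk, data)

With this pattern the backup greenthreads still wait for their own compression, but they no longer monopolize the scheduler while it runs.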
--- Additional comment from Gorka Eguileor on 2017-09-26 08:39:45 EDT ---
Seems to be the same issue as in bz #1403948