Description of problem:
We can spawn an instance from an image or from a bootable volume. When the image is raw, or when the bootable volume is created from a raw image, the RBD clone operation creates a parent/child relation. An instance spawned this way delivers roughly half the performance of an instance spawned without such a parent/child relation, e.g. one spawned from a qcow2 image or from a bootable volume created from a qcow2 image. We should have an option to avoid creating this P/C relation when spawning instances from an image or when creating a bootable volume from a raw image. Cinder already has the option "rbd_flatten_volume_from_snapshot" to choose whether to keep the P/C relation when creating a volume from a snapshot. An upstream RFE [1] was opened for a similar request but was abandoned.

[1] https://bugs.launchpad.net/cinder/+bug/1374256

Version-Release number of selected component (if applicable):
RHEL OSP 8

How reproducible:
Every time.

Steps to Reproduce:
1. Create an instance from a raw image, or create a bootable volume from a raw image.
2. Create an instance from a qcow2 image, or create a bootable volume from a qcow2 image.
3. Run a dd write test on both instances.

The instance created in step 2 gives roughly twice the performance of the instance created in step 1.

Actual results:
No option to break the P/C relation for instances spawned in step 1.

Expected results:
An option to break the P/C relation for instances spawned in step 1.

Additional info:
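For context, this is roughly how the existing snapshot-flatten knob is set today; the backend section name below is illustrative, not taken from this report. Note that it only affects volumes cloned from snapshots, not volumes or ephemeral disks cloned from Glance images, which is what this RFE asks for:

```
# cinder.conf (backend section name "tripleo_ceph" is an example)
[tripleo_ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
# Existing option: flatten clones created *from snapshots* so that no
# parent/child relation remains after the clone completes. It does not
# apply to volumes/instances cloned directly from raw Glance images.
rbd_flatten_volume_from_snapshot = True
```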
Targeting OSP12, but put on the OSP11 planning sheet for review. Ocata will be a short cycle and several high-priority Cinder features are already targeted, but we will review this one too. It will definitely help to push on, or contribute to, an upstream blueprint.
Tzach, please try to reproduce in OSP12 and report the actual status. Thanks, Sean
Reclassifying this from a feature to a bug.
On a Packstack Pike system I booted two instances from the same RHEL 7.3 cloud image, uploaded to Glance twice, once as raw and once as qcow2. Both instances used the same flavor and each booted from its image. Used this command to gather stats:

  # dd bs=2M count=512 if=/dev/zero of=oneGfile conv=fdatasync

Ran it 3 times on each instance; averages for the 1G file:

  Raw based:   4.63120 s, 235 MB/s
  QCOW2 based: 3.52869 s, 304.3 MB/s

So yes, qcow2 is faster by ~30% in write speed (MB/s). Let me know if these stats are sufficient, or whether I should try larger files or other dd options.
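A small sketch (mine, not part of the report) of the arithmetic behind those dd numbers, assuming dd reports MB/s as bytes divided by 1e6 per second; per-run rates computed from the averaged times will differ slightly from the averaged MB/s quoted above:

```python
# dd bs=2M count=512 writes 512 blocks of 2 MiB = 1 GiB total.
BLOCK_BYTES = 2 * 1024 * 1024
COUNT = 512
total_bytes = BLOCK_BYTES * COUNT  # 1073741824 bytes

def mb_per_s(seconds):
    """Throughput in MB/s the way dd reports it (decimal megabytes)."""
    return total_bytes / 1e6 / seconds

raw = mb_per_s(4.63120)    # raw-backed instance (RBD parent/child clone)
qcow2 = mb_per_s(3.52869)  # qcow2-backed instance (no clone relation)
speedup = qcow2 / raw - 1

print(f"raw:   {raw:.1f} MB/s")
print(f"qcow2: {qcow2:.1f} MB/s")
print(f"qcow2 faster by {speedup:.0%}")
```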