Description of problem:
When creating multiple volumes concurrently from an image on a 3PAR FC backend and then booting from them, some of the volumes fail to create, and the following error is seen in /var/log/cinder/volume.log:

2016-08-19 12:04:51.164 17717 ERROR oslo_messaging.rpc.dispatcher [req-b80c4b88-5d10-488c-822f-bd3d784296d3 - - - - -] Exception during message handling: Failed to copy image to volume: Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: Conflict (HTTP 409) 73 - host WWN/iSCSI name already used by another host

The corresponding error in compute.log on the compute node:

2016-08-17 16:29:31.315 4739 ERROR nova.volume.cinder [req-5c004551-d8ab-47dd-ba7a-adeae1ce94dc 21197088380649fc8ef8b60dc5268121 75fd30385ca8468ab082a8796ff995b1 - - -] Connection between volume <uuid> and host <compute> might have succeeded, but attempt to terminate connection has failed. Validate the connection and determine if manual cleanup is needed. Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-4ac3820f-2d2b-49aa-838c-58fe671bcf7a) Code: 500.

Version-Release number of selected component (if applicable):
hp3parclient-3.3.2-py2.7.egg
python-cinderclient-1.5.0-1.el7ost.noarch
python-cinder-7.0.2-2.el7ost.noarch
openstack-cinder-7.0.2-2.el7ost.noarch

How reproducible:
Frequently with concurrent creates. The last test ran 4 volume creates concurrently, followed by 3 more separate sets of 4, for 16 in total; 13 succeeded and 3 failed.

Steps to Reproduce:
1. Create a group of volumes from an image concurrently.
2. Attempt to create an instance from each of the volumes.
3. Move on to the next group and repeat.

(A scripted version of these steps is sketched below.)

Actual results:
Some of the instances fail to create because their backing volumes fail to create.

Expected results:
All of the volumes and instances are created successfully.

Additional info:
This looks very similar to the issue described upstream here:
https://bugs.launchpad.net/cinder/+bug/1597454
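For anyone trying to reproduce this, the steps above can be scripted with python-cinderclient (the version listed above). This is a minimal sketch, assuming a working Cinder endpoint; the OS_* credentials, image UUID, volume size, and volume names are placeholders, not values taken from this report:

# Reproduction sketch: issue one batch of concurrent creates-from-image
# and wait for each volume to settle. Credentials, image UUID, size and
# names below are placeholders.
import os
import time
from cinderclient import client

cinder = client.Client('2',
                       os.environ['OS_USERNAME'],
                       os.environ['OS_PASSWORD'],
                       os.environ['OS_TENANT_NAME'],
                       os.environ['OS_AUTH_URL'])

IMAGE_ID = 'REPLACE-WITH-GLANCE-IMAGE-UUID'  # placeholder
BATCH_SIZE = 4

# The create call is asynchronous, so issuing the requests back to back
# is enough to get the batch running in parallel on the backend.
volumes = [cinder.volumes.create(size=10, imageRef=IMAGE_ID,
                                 name='3par-test-%d' % i)
           for i in range(BATCH_SIZE)]

# Poll until every volume reaches 'available' or 'error'.
pending = set(v.id for v in volumes)
while pending:
    time.sleep(5)
    for vid in list(pending):
        status = cinder.volumes.get(vid).status
        if status in ('available', 'error'):
            print('%s -> %s' % (vid, status))
            pending.discard(vid)

When the race hits, the failed volumes end in 'error' and the HTTP 409 appears in volume.log as quoted above.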
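To see which array host already claims the conflicting WWN behind the HTTP 409, the 3PAR WSAPI can be queried with hp3parclient. Again only a sketch, not part of the original report: the API URL, credentials, and the WWN being looked up are placeholders, and it assumes the getHosts() call and the FCPaths/wwn fields the WSAPI returns for FC hosts:

# Diagnostic sketch: find which 3PAR host already claims a given FC WWN.
# The URL, credentials, and WWN below are placeholders.
from hp3parclient import client

API_URL = 'https://3par-array:8080/api/v1'   # placeholder
WWN = '10000000C9876543'                     # placeholder WWN to look up

cl = client.HP3ParClient(API_URL)
cl.login('3paradm', '3parpass')              # placeholder credentials
try:
    for host in cl.getHosts()['members']:
        # FCPaths lists the FC WWNs registered to each host entry.
        for path in host.get('FCPaths', []):
            if path.get('wwn', '').lower() == WWN.lower():
                print('WWN %s is registered to host %s' % (WWN, host['name']))
finally:
    cl.logout()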
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2050.html