We will verify this bug, so I am taking it. To verify (see the sketch below):
1. Run the original script that boots several instances from the same volume.
2. Make sure that only one instance moves to ACTIVE while the rest move to ERROR.
3. Make sure the volume is not damaged (detach and reattach it).
4. Run the same script, but instead of booting instances from the volume, attach the volume to several running instances.
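A minimal sketch of what such a verification script might look like for steps 1 and 2, assuming the Havana-era python-novaclient/python-cinderclient listed below; the credentials, auth URL, flavor name, instance count, volume UUID, and the crude 60-second wait are all placeholders, and the actual reproducer script may differ.

import time

from novaclient.v1_1 import client as nova_client
from cinderclient.v1 import client as cinder_client

# Placeholder credentials and volume ID -- replace with real values.
AUTH = dict(username='admin', api_key='secret',
            project_id='admin', auth_url='http://controller:5000/v2.0')
VOLUME_ID = 'REPLACE-WITH-VOLUME-UUID'   # the bootable volume under test

nova = nova_client.Client(AUTH['username'], AUTH['api_key'],
                          AUTH['project_id'], AUTH['auth_url'])
cinder = cinder_client.Client(AUTH['username'], AUTH['api_key'],
                              AUTH['project_id'], AUTH['auth_url'])

# Ask for several instances booted from the same volume in one request.
# Only one should reach ACTIVE (and the volume should go in-use); the
# others should land in ERROR instead of being endlessly rescheduled.
nova.servers.create(
    name='bfv-race-test',
    image=None,                                   # boot from volume, no image
    flavor=nova.flavors.find(name='m1.small'),    # placeholder flavor
    block_device_mapping={'vda': '%s:::0' % VOLUME_ID},
    min_count=3, max_count=3)

time.sleep(60)                                    # crude wait for scheduling to settle

statuses = [s.status for s in nova.servers.list()
            if s.name.startswith('bfv-race-test')]
print('instance statuses:', statuses)             # expect one ACTIVE, rest ERROR
print('volume status:', cinder.volumes.get(VOLUME_ID).status)  # expect in-use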
Dafna - comment 1 sounds like a good plan for testing this. Attaching volumes should not really be part of this bug, as the issue was with how we handled rescheduling on failures. But I do urge you to test attaching as well and, if needed, report a separate bug for it.
Moving back to 4.0, as this is indeed fixed in 4.0.
Verified on:
python-novaclient-2.15.0-4.el6ost.noarch
openstack-nova-conductor-2013.2.3-7.el6ost.noarch
openstack-nova-scheduler-2013.2.3-7.el6ost.noarch
openstack-nova-common-2013.2.3-7.el6ost.noarch
openstack-nova-api-2013.2.3-7.el6ost.noarch
openstack-nova-console-2013.2.3-7.el6ost.noarch
openstack-nova-network-2013.2.3-7.el6ost.noarch
openstack-nova-cert-2013.2.3-7.el6ost.noarch
python-nova-2013.2.3-7.el6ost.noarch
openstack-nova-compute-2013.2.3-7.el6ost.noarch
openstack-nova-novncproxy-2013.2.3-7.el6ost.noarch
python-cinderclient-1.0.7-2.el6ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2014-0578.html