Bug 1132722

Summary: Cinder Volume HA, active/active should be avoided
Product: Red Hat OpenStack
Reporter: Jon Bernard <jobernar>
Component: openstack-foreman-installer
Assignee: Jiri Stransky <jstransk>
Status: CLOSED ERRATA
QA Contact: Leonid Natapov <lnatapov>
Severity: high
Docs Contact:
Priority: high
Version: Foreman (RHEL 6)
CC: aberezin, dnavale, fdinitto, gfidente, jguiditt, jprovazn, jstransk, mburns, mfuruta, morazi, oblaut, racedoro, rhos-maint, roxenham, scohen, yeylon, ykawada
Target Milestone: z1
Keywords: Triaged
Target Release: Installer
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-foreman-installer-2.0.24-1.el6ost
Doc Type: Bug Fix
Doc Text:
When the openstack-cinder-volume service goes down on one of the High Availability controller nodes, the Block Storage volumes that were created through that service cannot be managed until the service comes back up. With this fix, the 'host' configuration option in the cinder.conf file is set to the same value on all the controller nodes, and the openstack-cinder-volume resource is switched from active/active to active/passive. As a result, when the controller node running the openstack-cinder-volume service goes down, the service starts on another controller node, which takes over management of the Block Storage volumes.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-10-02 12:56:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Jon Bernard 2014-08-21 22:35:40 UTC
This may already be understood, but I have only recently come to understand Cinder's HA status and felt it should be raised here.

Using Pacemaker, cinder-volume can be configured as active/passive with a shared storage backend, and this configuration should work as expected.

It was raised in this bug:

    https://bugs.launchpad.net/cinder/+bug/1322190

that an active/active configuration, in which multiple cinder-volume instances share the same host setting and operate together on the same shared storage, should not be relied on and may stop working (depending on the backend driver). The nodes can get out of sync on the current status of a particular resource or volume, which could lead to problems.

I think this may be relevant to Staypuft's HA deployment configuration.
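
To make the shared-host point concrete, here is a minimal sketch of the setting involved (the option is Cinder's documented 'host' setting in cinder.conf; the value "hostgroup" is an arbitrary illustration, not taken from this bug):

    # /etc/cinder/cinder.conf -- identical on every controller node
    [DEFAULT]
    # All cinder-volume instances report as the same logical host,
    # so any one of them can manage volumes created by the others.
    host = hostgroup

With the same value everywhere, whichever node runs cinder-volume can manage all existing volumes; the question this bug raises is whether more than one instance should run at once.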

Comment 9 Jiri Stransky 2014-08-28 16:28:32 UTC
Pull request upstream:

https://github.com/redhat-openstack/astapor/pull/361


Detailed description of how I tested the fix with the NFS backend:

https://github.com/redhat-openstack/astapor/pull/361#issuecomment-53748283
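
For reference, a minimal sketch of what the resulting active/passive arrangement can look like under Pacemaker (the resource name matches the status output in comment 18; the exact commands are assumptions based on standard pcs usage, not copied from the pull request):

    # Create cinder-volume as a plain (non-cloned) Pacemaker resource,
    # so it runs on exactly one controller at a time.
    pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume

    # If the resource was previously cloned for active/active, drop the clone:
    pcs resource unclone openstack-cinder-volume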

Comment 11 Mike Orazi 2014-09-12 16:50:20 UTC
Would like to pull in Fabio for review and make sure the HOWTOs are updated as well if this is needed.

Comment 15 Jiri Stransky 2014-09-16 14:22:08 UTC
Merged upstream.

Comment 18 Leonid Natapov 2014-09-22 13:59:17 UTC
Verified on openstack-foreman-installer-2.0.24-1.el6ost.noarch.

cinder-volume is A/P:
openstack-cinder-volume	(systemd:openstack-cinder-volume):	Started mac848f69fbc643.example.com
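
A way to reproduce this check (standard Pacemaker commands, assumed rather than taken from the verification above):

    # Confirm the resource is started on exactly one node:
    pcs status

    # Force a failover and confirm the service starts on another controller:
    pcs resource move openstack-cinder-volume
    pcs status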

Comment 20 Scott Lewis 2014-10-02 12:56:31 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1350.html