Bug 1132722 - Cinder Volume HA, active/active should be avoided
Summary: Cinder Volume HA, active/active should be avoided
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-foreman-installer
Version: Foreman (RHEL 6)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Sub Component: Installer
Assignee: Jiri Stransky
QA Contact: Leonid Natapov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-21 22:35 UTC by Jon Bernard
Modified: 2019-02-15 13:44 UTC
CC: 17 users

Fixed In Version: openstack-foreman-installer-2.0.24-1.el6ost
Doc Type: Bug Fix
Doc Text:
When the openstack-cinder-volume service goes down on one of the High Availability controller nodes, the Block Storage volumes that were created through that service cannot be managed until the service comes back up. With this fix, the 'host' configuration option in the cinder.conf file is set to the same value on all the controller nodes, and the openstack-cinder-volume service is switched from active/active to active/passive. As a result, when the controller node running the openstack-cinder-volume service goes down, the service starts on another controller node, which takes over management of the Block Storage volumes (see the configuration sketch below).
Clone Of:
Environment:
Last Closed: 2014-10-02 12:56:31 UTC
Target Upstream Version:
Embargoed:
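
For illustration, a minimal sketch of the cinder.conf change described in the Doc Text above (the value "hostgroup" is an arbitrary example; any string that is identical on all controllers works):

    # /etc/cinder/cinder.conf on every HA controller node
    [DEFAULT]
    # Same 'host' value everywhere, so a cinder-volume instance started on a
    # surviving node can manage volumes created on the failed node
    host = hostgroup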




Links
System                  ID              Private  Priority  Status        Summary                                                        Last Updated
Launchpad               1322190         0        None      None          None                                                           Never
Red Hat Product Errata  RHBA-2014:1350  0        normal    SHIPPED_LIVE  Red Hat Enterprise Linux OpenStack Platform Bug Fix Advisory   2014-10-01 17:22:34 UTC

Description Jon Bernard 2014-08-21 22:35:40 UTC
This may already be understood, but I have only recently come to understand Cinder's HA status and felt it should be raised here.

Using Pacemaker, cinder-volume can be configured as active/passive with a shared storage backend, and this configuration should work as expected.
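
As an illustration of the active/passive arrangement, a minimal sketch using pcs (cluster setup and ordering constraints are omitted; the exact commands depend on the deployment):

    # Manage cinder-volume as a plain (non-cloned) systemd resource, so
    # Pacemaker starts it on exactly one controller and fails it over on node loss
    pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume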

It was raised in this bug:

    https://bugs.launchpad.net/cinder/+bug/1322190

that an active/active configuration, where multiple cinder-volume instances share the same host setting and operate together on the same shared storage, should not be relied on and may stop working (depending on the backend driver). The nodes can fall out of sync on the current status of a particular resource or volume, and this could lead to problems.

I think this may be relevant to Staypuft's HA deployment configuration.

Comment 9 Jiri Stransky 2014-08-28 16:28:32 UTC
Pull request upstream:

https://github.com/redhat-openstack/astapor/pull/361


Detailed description of how i tested the fix with NFS backend:

https://github.com/redhat-openstack/astapor/pull/361#issuecomment-53748283

Comment 11 Mike Orazi 2014-09-12 16:50:20 UTC
I would like to pull in Fabio for review and make sure the HOWTOs are updated as well, if needed.

Comment 15 Jiri Stransky 2014-09-16 14:22:08 UTC
Merged upstream.

Comment 18 Leonid Natapov 2014-09-22 13:59:17 UTC
openstack-foreman-installer-2.0.24-1.el6ost.noarch.

cinder volume is A/P.
openstack-cinder-volume	(systemd:openstack-cinder-volume):	Started mac848f69fbc643.example.com
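
The line above is pcs status output. A minimal sketch of the check (assuming standard Pacemaker tooling on the controllers):

    # cinder-volume should report Started on exactly one node
    pcs status resources | grep openstack-cinder-volume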

Comment 20 Scott Lewis 2014-10-02 12:56:31 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2014-1350.html

