Bug 1262129 - PA content RHEL6->RHEL7 - "Cluster & HA" - revisit and suggest clufter, config migration utility or its output directly [NEEDINFO]
Status: NEW
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: preupgrade-assistant-el6toel7
Version: 6.7
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: pstodulk
QA Contact: Alois Mahdal
: Extras
Depends On: 1056996 1182358
Blocks: 1262271 1278535
 
Reported: 2015-09-10 17:41 EDT by Jan Pokorný
Modified: 2017-09-14 08:10 EDT
CC List: 13 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1056996
Clones: 1262271
Environment:
Last Closed:
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
phracek: needinfo? (jpokorny)


Attachments

  None
Description Jan Pokorný 2015-09-10 17:41:23 EDT
Cloning the original bug so that the new clufter-based migration can
be considered.  Comments in-line:

+++ This bug was initially created as a clone of Bug #1056996 +++

> On 08/20/2014 07:10 PM, Paul Novarese wrote:
>> On 08/20/2014 05:42 AM, Fabio M. Di Nitto wrote:
>>
>>> We don't support rolling upgrades from 6 to 7 with RHEL-HA.
>>>
>>> There is no plan to support it, in fact it cannot be supported because
>>> the corosync onwire protocol is completely different.
>>
>> But what about a scenario where we do an in-place upgrade from 6->7 of
>> the individual nodes, without the requirement that the cluster remain
>> available during the entire process?  So, an in-place upgrade but NOT
>> rolling?
> 
> That means you are simply taking down the whole cluster, upgrade every
> single node, reconfigure the cluster and then start it.

Sure, an in-place upgrade is the only thing to consider.
 
> It doesn't help anything from a downtime perspective. It actually makes
> it worse.

Sure, uptime has to be sacrificed, to the point that "it may take weeks"
to finish the migration.  But this is no worse than offering no upgrade
assistance...

> That is assuming that:
> 
> - you are already running pacemaker on rhel6.

Not necessarily for the clufter use case (see below).

> - we develop a tool to convert cluster.conf to corosync.conf

^ this one has actually come true (more broadly: cluster.conf
  to corosync.conf + possibly rgmanager's configuration to pacemaker's);

There is now (since 6.7) a tool called clufter [bug 1182358] that
facilitates the upgrade of the cluster stack configuration, in such a
way that human supervision, adjustments and (importantly) testing of
the suggested cluster deployment are still required(!).
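
For illustration, a minimal Python sketch (not meant as actual PA
module code) of the availability check such content could start with;
the only firm assumption is that the clufter CLI is shipped since 6.7,
everything else is illustrative:

  import subprocess

  def have_clufter():
      # `clufter -h` should exit 0 when the tool (package clufter-cli,
      # available since RHEL 6.7) is installed; a missing binary raises
      # OSError, in which case the PA content can only fall back to a
      # plain textual recommendation
      try:
          with open("/dev/null", "w") as devnull:
              return subprocess.call(["clufter", "-h"],
                                     stdout=devnull, stderr=devnull) == 0
      except OSError:
          return False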

Not sure about the optimal extent of the clufter-PA integration; we
definitely should not try to enforce any result coming from clufter,
but we could suggest one of the following:

1. just statically mention there is a possibility to use clufter for
   the task

2. running the sequence of commands involving the management CLI for
   the new cluster stack, as generated by clufter into some specified
   file -- these commands shall then be run on a single cluster node
   only + all the nodes need to have pcs installed and pcsd.service
   enabled and started (hence the recipe would be: run this set of
   commands on every node, then run the other set of commands on an
   arbitrary single node; see the sketch after this list)

3. the contents of corosync.conf and the pacemaker configuration
   directly -- the files would be generated separately on each node
   (given the same input, the output should be deterministically
   the same)
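
To make 2. and 3. a bit more concrete, a rough Python sketch of driving
clufter non-interactively (referenced from 2. above); ccs2pcscmd and
ccs2pcs are the respective conversion commands, but the way the input
file and output destination are passed below is an assumption to be
verified against `clufter ccs2pcscmd -h` resp. `clufter ccs2pcs -h`:

  import subprocess

  CLUSTER_CONF = "/etc/cluster/cluster.conf"

  def suggest_pcs_commands(outfile):
      # 2.: have clufter generate the pcs command sequence into a file
      # the admin reviews and then runs on a single cluster node;
      # the "--output" spelling is an assumption (see the CLI help)
      return subprocess.call(["clufter", "ccs2pcscmd",
                              "--output", outfile, CLUSTER_CONF])

  def suggest_corosync_conf(outfile):
      # 3.: have clufter emit the corosync.conf (+ pacemaker CIB)
      # content directly; same caveat about the option spelling
      return subprocess.call(["clufter", "ccs2pcs",
                              "--output", outfile, CLUSTER_CONF])

For 2., every node additionally needs pcs installed and pcsd.service
enabled and started (systemctl enable pcsd.service; systemctl start
pcsd.service on the already upgraded side), as noted in the recipe.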


My suggestion is to stick with 2. when converting from a CMAN/rgmanager
configuration (non-empty /cluster/rm tree in cluster.conf and rgmanager
not disabled with /cluster/rm/@disabled[?]), and with 3. otherwise, with
the proviso that only corosync.conf output is offered.
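
The 2.-vs-3. decision itself should be cheap to implement; a sketch,
assuming plain ElementTree parsing of cluster.conf is sufficient and
guessing at the accepted @disabled values:

  import xml.etree.ElementTree as ET

  def rgmanager_configured(cluster_conf="/etc/cluster/cluster.conf"):
      # non-empty /cluster/rm tree and rgmanager not explicitly
      # disabled; the exact semantics of /cluster/rm/@disabled are
      # the "[?]" above, so the value check here is a guess
      root = ET.parse(cluster_conf).getroot()   # the <cluster> element
      rm = root.find("rm")
      if rm is None or len(rm) == 0:
          return False
      return rm.get("disabled", "0") not in ("1", "yes", "true")

  # rgmanager_configured() -> suggest 2. (pcs command sequence)
  # otherwise              -> suggest 3. (corosync.conf output only)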


> - _EVERY_ application in the cluster (Oracle? DB2? etc.) can upgrade too.

True, there is a lot of fragility arising from complex dependencies.
