Bug 1743326 - OSP 14->15: RabbitMQ fails when upgrading 3rd controller
Summary: OSP 14->15: RabbitMQ fails when upgrading 3rd controller
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: James Slagle
QA Contact: pkomarov
URL:
Whiteboard:
Depends On:
Blocks: 1727807
 
Reported: 2019-08-19 16:26 UTC by Jiri Stransky
Modified: 2020-09-30 20:06 UTC
CC List: 8 users

Fixed In Version: resource-agents-4.1.1-17.el8_0.6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-30 20:06:09 UTC
Target Upstream Version:
Embargoed:



Description Jiri Stransky 2019-08-19 16:26:53 UTC
Description of problem:

Upgrade of controller-0 and controller-1 from OSP 14 to 15 seems to go fine (including scaling RabbitMQ bundle instances to 2), but when upgrading controller-2, RabbitMQ goes into a failed state on all 3 controllers.

podman container set: rabbitmq-bundle [192.168.24.1:8787/rhosp15/openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0    (ocf::heartbeat:rabbitmq-cluster):      FAILED controller-0 (blocked)
   rabbitmq-bundle-1    (ocf::heartbeat:rabbitmq-cluster):      FAILED controller-1 (blocked)
   rabbitmq-bundle-2    (ocf::heartbeat:rabbitmq-cluster):      FAILED controller-2 (blocked)
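
For reference, a minimal way to inspect the blocked state above and, once the root cause is addressed, clear it with standard pcs commands on any controller (resource names taken from the status output above):

# Show failed actions and the failcount that is keeping the bundle blocked
pcs status --full
pcs resource failcount show rabbitmq-bundle

# Clear the failed stop so pacemaker retries, once the underlying issue is resolved
pcs resource cleanup rabbitmq-bundle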


Version-Release number of selected component (if applicable):

openstack-tripleo-heat-templates-10.6.1-0.20190815230440.9adae50.el8ost.noarch


How reproducible:

Not sure yet, but I've hit it for the second time.


Steps to Reproduce:

https://gitlab.cee.redhat.com/osp15/osp-upgrade-el8/blob/872657e762b144a1d296cf1d54c8bac2f62e46e0/README.md

Comment 2 Jiri Stransky 2019-08-19 16:38:45 UTC
Eventually the error seems to go away and RabbitMQ gets started correctly on all 3 controllers, but step 1 of the deployment alone took more than 50 minutes to run:

TASK [Start containers for step 1] *********************************************
Monday 19 August 2019  15:31:21 +0000 (0:00:00.344)       0:10:28.521 ********* 
ok: [controller-2] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-1] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}


TASK [Debug output for task: Start containers for step 1] **********************
Monday 19 August 2019  16:22:53 +0000 (0:51:32.829)       1:02:01.350 *********

Comment 3 Luca Miccini 2019-08-20 08:56:59 UTC
found:

Aug 19 15:37:48 controller-0 pacemaker-controld[50071]: notice: State transition S_IDLE -> S_POLICY_ENGINE
Aug 19 15:37:48 controller-0 pacemaker-schedulerd[50070]: notice:  * Restart    rabbitmq:0                           (               rabbitmq-bundle-0 )   due to resource definition change
Aug 19 15:37:48 controller-0 pacemaker-schedulerd[50070]: notice:  * Restart    rabbitmq:1                           (               rabbitmq-bundle-1 )   due to resource definition change
Aug 19 15:37:48 controller-0 pacemaker-schedulerd[50070]: notice:  * Restart    rabbitmq:2                           (               rabbitmq-bundle-2 )   due to resource definition change
Aug 19 15:37:48 controller-0 pacemaker-schedulerd[50070]: notice: Calculated transition 263, saving inputs in /var/lib/pacemaker/pengine/pe-input-298.bz2
Aug 19 15:37:48 controller-0 pacemaker-controld[50071]: notice: Initiating notify operation rabbitmq_pre_notify_stop_0 locally on rabbitmq-bundle-0
Aug 19 15:37:48 controller-0 pacemaker-controld[50071]: notice: Initiating notify operation rabbitmq_pre_notify_stop_0 on rabbitmq-bundle-1
Aug 19 15:37:48 controller-0 pacemaker-controld[50071]: notice: Initiating notify operation rabbitmq_pre_notify_stop_0 on rabbitmq-bundle-2
Aug 19 15:37:49 controller-0 pacemaker-controld[50071]: notice: Result of notify operation for rabbitmq on rabbitmq-bundle-0: 0 (ok)
Aug 19 15:37:49 controller-0 pacemaker-controld[50071]: notice: Initiating stop operation rabbitmq_stop_0 on rabbitmq-bundle-2
Aug 19 15:37:53 controller-0 pacemaker-controld[50071]: notice: Transition 263 aborted by deletion of nvpair[@id='status-3-rmq-node-attr-rabbitmq']: Transient attribute change
Aug 19 15:37:53 controller-0 pacemaker-controld[50071]: warning: Action 96 (rabbitmq_stop_0) on rabbitmq-bundle-2 failed (target: 0 vs. rc: 1): Error
Aug 19 15:37:53 controller-0 pacemaker-controld[50071]: notice: Transition 263 (Complete=9, Pending=0, Fired=0, Skipped=0, Incomplete=21, Source=/var/lib/pacemaker/pengine/pe-input-298.bz2): Complete
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:2 on rabbitmq-bundle-2: unknown error
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:2 on rabbitmq-bundle-2: unknown error
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq:2 away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: notice:  * Restart    rabbitmq:0                           (               rabbitmq-bundle-0 )   due to resource definition change
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: notice:  * Restart    rabbitmq:1                           (               rabbitmq-bundle-1 )   due to resource definition change
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:2_post_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-post_notify_stopped_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:2_pre_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-pre_notify_stop_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:53 controller-0 pacemaker-schedulerd[50070]: notice: Calculated transition 264, saving inputs in /var/lib/pacemaker/pengine/pe-input-299.bz2
Aug 19 15:37:53 controller-0 pacemaker-controld[50071]: notice: Initiating notify operation rabbitmq_pre_notify_stop_0 locally on rabbitmq-bundle-0
Aug 19 15:37:53 controller-0 pacemaker-controld[50071]: notice: Initiating notify operation rabbitmq_pre_notify_stop_0 on rabbitmq-bundle-1
Aug 19 15:37:53 controller-0 pacemaker-controld[50071]: notice: Result of notify operation for rabbitmq on rabbitmq-bundle-0: 0 (ok)
Aug 19 15:37:53 controller-0 pacemaker-controld[50071]: notice: Initiating stop operation rabbitmq_stop_0 on rabbitmq-bundle-1
Aug 19 15:37:58 controller-0 pacemaker-controld[50071]: notice: Transition 264 aborted by deletion of nvpair[@id='status-2-rmq-node-attr-rabbitmq']: Transient attribute change
Aug 19 15:37:58 controller-0 pacemaker-controld[50071]: warning: Action 93 (rabbitmq_stop_0) on rabbitmq-bundle-1 failed (target: 0 vs. rc: 1): Error
Aug 19 15:37:58 controller-0 pacemaker-controld[50071]: notice: Transition 264 (Complete=8, Pending=0, Fired=0, Skipped=0, Incomplete=17, Source=/var/lib/pacemaker/pengine/pe-input-299.bz2): Complete
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:1 on rabbitmq-bundle-1: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:1 on rabbitmq-bundle-1: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:2 on rabbitmq-bundle-2: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:2 on rabbitmq-bundle-2: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq:2 away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: notice:  * Restart    rabbitmq:0                           (               rabbitmq-bundle-0 )   due to resource definition change
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:1_post_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-post_notify_stopped_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:2_post_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-post_notify_stopped_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:1_pre_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-pre_notify_stop_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:2_pre_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-pre_notify_stop_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: notice: Calculated transition 265, saving inputs in /var/lib/pacemaker/pengine/pe-input-300.bz2
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:1 on rabbitmq-bundle-1: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:1 on rabbitmq-bundle-1: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:2 on rabbitmq-bundle-2: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Processing failed stop of rabbitmq:2 on rabbitmq-bundle-2: unknown error
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-1 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-1 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-1 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq-bundle-clone away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq:1 away from rabbitmq-bundle-1 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Forcing rabbitmq:2 away from rabbitmq-bundle-2 after 1000000 failures (max=1000000)
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: notice:  * Restart    rabbitmq:0                           (               rabbitmq-bundle-0 )   due to resource definition change
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:1_post_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-post_notify_stopped_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:2_post_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-post_notify_stopped_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:1_pre_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-pre_notify_stop_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: warning: Ignoring requirement that rabbitmq:2_pre_notify_stop_0 complete before rabbitmq-bundle-clone_confirmed-pre_notify_stop_0: unmanaged failed resources cannot prevent clone shutdown
Aug 19 15:37:58 controller-0 pacemaker-schedulerd[50070]: notice: Calculated transition 266, saving inputs in /var/lib/pacemaker/pengine/pe-input-301.bz2
Aug 19 15:37:58 controller-0 pacemaker-controld[50071]: notice: Initiating notify operation rabbitmq_pre_notify_stop_0 locally on rabbitmq-bundle-0
Aug 19 15:37:59 controller-0 pacemaker-controld[50071]: notice: Result of notify operation for rabbitmq on rabbitmq-bundle-0: 0 (ok)
Aug 19 15:37:59 controller-0 pacemaker-controld[50071]: notice: Initiating stop operation rabbitmq_stop_0 locally on rabbitmq-bundle-0
Aug 19 15:38:04 controller-0 rabbitmq-cluster(rabbitmq)[202299]: ERROR: Unexpected return code from '/usr/sbin/rabbitmqctl cluster_status' exit code: 64
Aug 19 15:38:04 controller-0 pacemaker-controld[50071]: notice: Transition 266 aborted by deletion of nvpair[@id='status-1-rmq-node-attr-rabbitmq']: Transient attribute change
Aug 19 15:38:04 controller-0 rabbitmq-cluster(rabbitmq)[202309]: INFO: rabbitmq-server stop failed: 1
Aug 19 15:38:04 controller-0 pacemaker-controld[50071]: notice: Result of stop operation for rabbitmq on rabbitmq-bundle-0: 1 (unknown error)
Aug 19 15:38:04 controller-0 pacemaker-controld[50071]: notice: rabbitmq-bundle-0-rabbitmq_stop_0:214 [ ok\nStopping and halting node rabbit@controller-0 ...\n ]
Aug 19 15:38:04 controller-0 pacemaker-controld[50071]: warning: Action 90 (rabbitmq_stop_0) on rabbitmq-bundle-0 failed (target: 0 vs. rc: 1): Error
Aug 19 15:38:04 controller-0 pacemaker-controld[50071]: notice: Transition 266 (Complete=13, Pending=0, Fired=0, Skipped=0, Incomplete=7, Source=/var/lib/pacemaker/pengine/pe-input-301.bz2): Complete

seems like rabbit doesn't stop properly?

More @ http://pastebin.test.redhat.com/789927.
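
For a manual check of the same code path the agent runs, a rough sketch (the container name rabbitmq-bundle-podman-0 is an assumption based on the usual OSP bundle naming):

# Re-run the check the resource agent performs; an exit code of 64 here would match the error above
podman exec rabbitmq-bundle-podman-0 /usr/sbin/rabbitmqctl cluster_status
echo $?

# Attempt the same kind of stop the agent performs, to see whether rabbit really refuses to stop cleanly
podman exec rabbitmq-bundle-podman-0 /usr/sbin/rabbitmqctl stop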

Comment 5 Jiri Stransky 2019-08-23 10:14:36 UTC
With the updated resource agents, it fails on Galera when scaling to 2 controllers; I've never hit this before:

 podman container set: galera-bundle [192.168.24.1:8787/rhosp15/openstack-mariadb:pcmklatest]
   galera-bundle-0      (ocf::heartbeat:galera):        Slave controller-0
   galera-bundle-1      (ocf::heartbeat:galera):        FAILED Master controller-1 (blocked)
 podman container set: rabbitmq-bundle [192.168.24.1:8787/rhosp15/openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0    (ocf::heartbeat:rabbitmq-cluster):      Started controller-0
   rabbitmq-bundle-1    (ocf::heartbeat:rabbitmq-cluster):      Started controller-1
 podman container set: redis-bundle [192.168.24.1:8787/rhosp15/openstack-redis:pcmklatest]
   redis-bundle-0       (ocf::heartbeat:redis): Master controller-0
   redis-bundle-1       (ocf::heartbeat:redis): Slave controller-1
 ip-192.168.24.8        (ocf::heartbeat:IPaddr2):       Started controller-0
 ip-10.0.0.106  (ocf::heartbeat:IPaddr2):       Started controller-0
 ip-172.17.1.16 (ocf::heartbeat:IPaddr2):       Started controller-0
 ip-172.17.1.23 (ocf::heartbeat:IPaddr2):       Started controller-0
 ip-172.17.3.11 (ocf::heartbeat:IPaddr2):       Started controller-0
 ip-172.17.4.25 (ocf::heartbeat:IPaddr2):       Started controller-0
 podman container set: haproxy-bundle [192.168.24.1:8787/rhosp15/openstack-haproxy:pcmklatest]
   haproxy-bundle-podman-0      (ocf::heartbeat:podman):        Started controller-0
   haproxy-bundle-podman-1      (ocf::heartbeat:podman):        Started controller-1
   haproxy-bundle-podman-2      (ocf::heartbeat:podman):        Stopped
 podman container: openstack-cinder-volume [192.168.24.1:8787/rhosp15/openstack-cinder-volume:pcmklatest]
   openstack-cinder-volume-podman-0     (ocf::heartbeat:podman):        Started controller-0

Failed Resource Actions:
* galera_monitor_10000 on galera-bundle-0 'unknown error' (1): call=305, status=complete, exitreason='local node <controller-0> is started, but not in primary mode. Unknown state.',
    last-rc-change='Fri Aug 23 09:43:09 2019', queued=0ms, exec=0ms
* galera_promote_0 on galera-bundle-1 'unknown error' (1): call=171, status=complete, exitreason='MySQL server failed to start (pid=728) (rc=0), please check your installation',
    last-rc-change='Fri Aug 23 09:42:48 2019', queued=0ms, exec=7174ms


[root@controller-0 ~]# rpm -q resource-agents
resource-agents-4.1.1-17.el8_0.4.x86_64

The upgrade is still running but it's been stuck for 30 minutes now, probably on this blocked scale up. I'm keeping the env as-is for inspection.

Comment 6 Damien Ciabrini 2019-08-23 16:44:29 UTC
Thanks for the env Jiri,

I can see two distinct errors in this deployment:

1. The galera resource managed by pacemaker is unexpectedly stopped by a transient paunch container.

During the upgrade, there's a transient container that runs a temporary mysql server to upgrade the format of the DB on disk. That container is meant to run only once, and only while no other mysql container is running. However, in the end I see that pacemaker has already restarted galera and that the container is re-run at an unexpected time:
 
srp 23 09:28:51 controller-0 ansible-command[323160]: Invoked with _raw_params=podman run --rm --log-driver=syslog -u root --net=host -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -v /etc/hosts:/etc/hosts:ro -v /etc/localtime:/etc/localtime:ro -v /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro -v /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro -v /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro -v /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro -v /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro -v /dev/log:/dev/log -v /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro -v /etc/puppet:/etc/puppet:ro -v /var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json -v /var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro -v /var/lib/mysql:/var/lib/mysql -v /tmp/mariadb-upgrade:/var/log/mariadb:rw "192.168.24.1:8787/rhosp15/openstack-mariadb:pcmklatest" /bin/bash -ecx "kolla_set_configs; chown -R mysql:mysql /var/lib/mysql; chown -R mysql:mysql /var/log/mariadb; mysqld_safe --user=mysql --wsrep-provider=none --skip-networking --wsrep-on=off & timeout 60 sh -c 'while ! mysqladmin ping --silent; do sleep 1; done'; mysql_upgrade; mysqladmin shutdown" _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None

The script is not sufficiently robust, so it dutifully continues all the way to the "mysqladmin shutdown" action, which makes the pacemaker-managed galera server stop.

2019-08-23  9:28:52 0 [Note] /usr/libexec/mysqld (initiated by: root[root] @ localhost []): Normal shutdown

and makes pacemaker unhappy:

srp 23 09:28:54 controller-0 galera(galera)[323478]: ERROR: Unable to retrieve wsrep_cluster_status, verify check_user 'clustercheck' has permissions to view status
srp 23 09:28:54 controller-0 galera(galera)[323483]: ERROR: local node <controller-0> is started, but not in primary mode. Unknown state.

Eventually, that error is recovered from, though. This is not the blocking error, although we should fix it shortly.
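
To illustrate the missing robustness, a rough guard sketch (not the actual fix, and with most of the original volume mounts trimmed) that would skip the one-shot upgrade container when a mysqld is already running on the host:

# Illustrative only: skip the transient mariadb-upgrade container if a mysqld
# (e.g. the pacemaker-managed galera server) is already running on this host.
if pgrep -x mysqld >/dev/null 2>&1; then
  echo "mysqld already running, skipping one-shot mysql_upgrade container"
else
  podman run --rm --net=host \
    -v /var/lib/mysql:/var/lib/mysql -v /tmp/mariadb-upgrade:/var/log/mariadb:rw \
    192.168.24.1:8787/rhosp15/openstack-mariadb:pcmklatest \
    /bin/bash -ecx "kolla_set_configs; chown -R mysql:mysql /var/lib/mysql; mysqld_safe --user=mysql --wsrep-provider=none --skip-networking --wsrep-on=off & timeout 60 sh -c 'while ! mysqladmin ping --silent; do sleep 1; done'; mysql_upgrade; mysqladmin shutdown"
fi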


2. Joining nodes refuse to synchronize against the cluster because they think they have more recent data.


Later on, controller-1 is added in the cluster:

srp 23 09:37:07 controller-0 pacemaker-controld[57750]: notice: Node controller-1 state is now member

And the galera resource is stopped entirely, for a reason that is not relevant here.

srp 23 09:41:23 controller-0 pacemaker-schedulerd[57749]: warning: Processing failed monitor of galera:0 on galera-bundle-0: unknown error
srp 23 09:41:23 controller-0 pacemaker-schedulerd[57749]: notice:  * Stop       galera-bundle-podman-0               (                    controller-0 )   due to node availability
srp 23 09:41:23 controller-0 pacemaker-schedulerd[57749]: notice:  * Stop       galera-bundle-0                      (                    controller-0 )   due to node availability
srp 23 09:41:23 controller-0 pacemaker-schedulerd[57749]: notice:  * Stop       galera:0                             (          Master galera-bundle-0 )   due to node availability

(Note: the "Processing failed monitor" warning comes from the previous error, so it's harmless)

Eventually, pacemaker bootstraps a new cluster and chooses controller-0 as the bootstrap node:

srp 23 09:41:53 controller-0 galera(galera)[394705]: INFO: Galera started
srp 23 09:41:53 controller-0 pacemaker-controld[57750]: notice: Result of promote operation for galera on galera-bundle-0: 0 (ok)

It then tells controller-1 to rejoin the galera cluster. But when galera is started on controller-1, it refuses to rejoin and stops with an error, because it thinks it has more recent data than controller-0.

2019-08-23  9:42:54 0 [ERROR] WSREP: gcs/src/gcs_group.cpp:group_post_state_exchange():321: Reversing history: 631252 -> 543483, this member has applied 87769 more events than the primary component.Data loss is possible. Aborting.

As a reaction, controller-0 refuses to stay in the primary galera component, so pacemaker has to restart it as well.

2019-08-23  9:42:59 0 [Note] WSREP: evs::proto(5ab6d537, OPERATIONAL, view_id(REG,5ab6d537,2)) suspected node without join message, declaring inactive
2019-08-23  9:43:00 0 [Note] WSREP: view(view_id(NON_PRIM,5ab6d537,2) memb {

At this point, galera is unmanaged on controller-1, so pacemaker won't be able to bootstrap the cluster anymore and the DB resource will be unavailable to OpenStack.

That's the first time I've seen a node refusing to rejoin an existing cluster. Could this indicate that the DB save/restore step in the upgrade wasn't run on the node with the most recent data?
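
To answer that on a live environment, a minimal check (assuming the default /var/lib/mysql bind mount on the controllers and the usual heat-admin overcloud SSH user):

# Compare galera's recorded state across controllers; the node with the highest seqno
# (or safe_to_bootstrap: 1) holds the most recent data and is the expected bootstrap node.
for h in controller-0 controller-1 controller-2; do
  echo "== $h =="
  ssh heat-admin@$h sudo cat /var/lib/mysql/grastate.dat
done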

Comment 7 Jiri Stransky 2019-08-28 13:13:36 UTC
The Galera issues mentioned in comments #5 and #6 seem to be fixed by https://review.opendev.org/#/c/678843/.

For the RabbitMQ issue, I've tested with the updated resource-agents-4.1.1-17.el8_0.6.x86_64 and the failure didn't appear anymore.
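
A quick way to confirm all controllers actually picked up the fixed build (hostnames and the heat-admin user are assumptions about this particular environment):

for h in controller-0 controller-1 controller-2; do
  echo -n "$h: "; ssh heat-admin@$h rpm -q resource-agents
done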

Comment 8 Jiri Stransky 2019-09-11 14:10:39 UTC
Correction: probably only issue (1) in comment #6 is addressed by that patch; issue (2) might still need a separate BZ.

Comment 9 Jiri Stransky 2019-09-18 13:27:17 UTC
Reported https://bugzilla.redhat.com/show_bug.cgi?id=1753246 for the galera desync issue. Setting this bug to MODIFIED with the respective resource-agents package, which seems to have fixed the Rabbit problem.

Comment 11 Jon Schlueter 2019-09-19 16:10:18 UTC
resource-agents-4.1.1-17.el8_0.6

Comment 17 pkomarov 2019-10-23 23:32:34 UTC
Verified,

via automation:

http://staging-jenkins2-qe-playground.usersys.redhat.com/view/DFG/view/upgrades/view/upgrade/job/DFG-upgrades-upgrade-upgrade-14-15_director-rhel-virthost-3cont_2comp-ipv4-vxlan-poc/87/

Deploy logs at:
http://staging-jenkins2-qe-playground.usersys.redhat.com/view/DFG/view/upgrades/view/upgrade/job/DFG-upgrades-upgrade-upgrade-14-15_director-rhel-virthost-3cont_2comp-ipv4-vxlan-poc/87/artifact/undercloud-0.tar.gz

#check openstack-tripleo-heat-templates version:
undercloud-0]$ grep openstack-tripleo-heat-templates-10 var/log/rpm.list
openstack-tripleo-heat-templates-10.6.2-0.20190923210442.7db107a.el8ost.noarch

#check resource-agents version:
[r@r undercloud-0]$ grep -q "Installed: resource-agents-4.1.1-17.el8_0.6.x86_64" home/stack/overcloud_upgrade_run_controller-2.log&& echo "resource-agents-4.1.1-17.el8_0.6.x86_64 installed on overcloud "
resource-agents-4.1.1-17.el8_0.6.x86_64 installed on overcloud

Comment 18 stchen 2020-09-30 20:06:09 UTC
Closing as EOL; OSP 15 was retired as of Sept 19, 2020.

