This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2207696 - Corosync hosts frequently lose connection to peers on Azure VMs.
Summary: Corosync hosts frequently lose connection to peers on Azure VMs.
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: corosync
Version: 8.6
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jan Friesse
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-16 14:58 UTC by Gerry Sommerville
Modified: 2023-09-22 20:15 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-22 20:15:50 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments


Links
System                 ID               Private  Priority  Status    Summary  Last Updated
Red Hat Issue Tracker  RHEL-7713        0        None      Migrated  None     2023-09-22 20:15:44 UTC
Red Hat Issue Tracker  RHELPLAN-157389  0        None      None      None     2023-05-16 15:00:13 UTC

Description Gerry Sommerville 2023-05-16 14:58:10 UTC
Description of problem:
Microsoft Azure documentation states that the totem token in the Corosync configuration file should be set to 30000 to allow for memory-preserving maintenance:

https://learn.microsoft.com/en-us/azure/sap/workloads/high-availability-guide-rhel-pacemaker?tabs=msi

Sometimes we still see Corosync losing connection to its peers even with the 30000 token setting. However, from the Corosync log it looks like it is only waiting 10 seconds before forming a new membership:

Jan 27 02:48:49.832 [14503] <Hostname> corosync notice  [TOTEM ] totemsrp.c:timer_function_orf_token_warning:1730 Token has not been received in 7500 ms
Jan 27 02:48:52.332 [14503] <Hostname> corosync notice  [TOTEM ] totemsrp.c:timer_function_orf_token_timeout:1746 A processor failed, forming new configuration.
Jan 27 02:48:57.800 [14503] <Hostname> corosync info    [KNET  ] libknet.h:log_deliver_fn:682 rx: host: 1 link: 0 is up
Jan 27 02:48:57.800 [14503] <Hostname> corosync info    [KNET  ] libknet.h:log_deliver_fn:682 host: host: 1 (passive) best link: 0 (pri: 1)
Jan 27 02:49:04.337 [14503] <Hostname> corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2096 A new membership (2.93) was formed. Members left: 1
Jan 27 02:49:04.337 [14503] <Hostname> corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:2101 Failed to receive the leave message. failed: 1
Jan 27 02:49:04.337 [14503] <Hostname> corosync notice  [QUORUM] vsf_quorum.c:log_view_list:131 Members[1]: 2
Jan 27 02:49:04.337 [14503] <Hostname> corosync notice  [MAIN  ] main.c:corosync_sync_completed:296 Completed service synchronization, ready to provide service.

For reference, here are our corosync.conf and the corosync_cmapctl output.
corosync.conf
totem {
     version: 2
     cluster_name: <HA Cluster>
     transport: knet
     token: 30000
     crypto_cipher: aes256
     crypto_hash: sha256
}

From corosync_cmapctl
runtime.config.totem.token (u32) = 30000            
runtime.config.totem.token_retransmit (u32) = 7142
runtime.config.totem.token_retransmits_before_loss_const (u32) = 4
runtime.config.totem.token_warning (u32) = 75
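
For context, per corosync.conf(5) the effective token timeout is computed as token + (number_of_nodes - 2) * token_coefficient, so on a two-node cluster the runtime value should simply match the configured token. A minimal way to re-check these runtime values on a live node, assuming the stock corosync-cmapctl tool:

# Show the totem values corosync is actually running with
corosync-cmapctl | grep runtime.config.totem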

*Edit*
Based on the above I have the following questions:
1. How can I be sure that Corosync is honoring the 30-second token timeout?
2. Are there any additional Corosync (or Pacemaker) configurations/workarounds recommended for Azure cloud? Any known problems with Corosync/Pacemaker on Azure?


Version-Release number of selected component (if applicable):
corosync-3.0.4-2

How reproducible:
Not reproducible on demand.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Fabio Massimo Di Nitto 2023-05-16 15:08:25 UTC
Hi Gerry,

can you please enable debug logging in corosync and collect the logs at the next event?
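
For reference, a minimal sketch of enabling this in the logging section of corosync.conf (option names per corosync.conf(5); the logfile path is illustrative):

logging {
    # Write to a dedicated log file in addition to syslog
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    timestamp: on
    # Turn on debug-level messages
    debug: on
}

A running cluster can pick this up with corosync-cfgtool -R, which asks all nodes to reload corosync.conf.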

I have run many clusters in Azure, but none have shown those issues, even under load.

If possible, could you share your deployment configuration in Azure? How many nodes? Which region? Which image did you use to deploy, etc.? Then I can try to reproduce the problem in an environment as similar as possible.

Thanks
Fabio

Comment 2 Gerry Sommerville 2023-05-19 16:45:27 UTC
Hey Fabio,

Sorry for the delay.

FYI, I noticed this problem as part of a historical review of past Db2 customer cases where connectivity was lost without any other indication of network problems. Unfortunately, I do not have direct access to the clusters that hit this issue, so I do not have details about the VMs in those clusters. In general, these are two-node clusters with Azure fencing configured. I believe many of these clusters are deployed in European regions based on the customers' locations, but I have no conclusive diagnostics stating the exact region. For these reasons, I was looking for more general guidance/suggestions for running Pacemaker/Corosync clusters on Azure, in addition to question 1 above.

Changing the log level in corosync.conf sounds doable; are there any impacts or considerations to keeping debug logging on for a long period of time (months to years)?

You mention you have run clusters in Azure under load without encountering any problems. Have you kept these systems running for long periods of time, such as 12-18 months? I wonder if you could provide your system details, which I can then use to compare against clusters that hit this issue in the future. Also, do you configure anything from the Azure side to manage VM maintenance windows? It sounds like I will end up reading more about working in Azure to debug/mitigate this issue in the future.

For now (I might be jumping the gun here) I suspect this pause maintenance from Azure could be the culprit; specifically, this paragraph from the following link:
https://learn.microsoft.com/en-us/azure/virtual-machines/maintenance-and-updates#maintenance-that-doesnt-require-a-reboot

"When VM impacting maintenance is required it will almost always be completed through a VM pause for less than 10 seconds. In rare circumstances, no more than once every 18 months for general purpose VM sizes, Azure uses a mechanism that will pause the VM for about 30 seconds. After any pause operation the VM clock is automatically synchronized upon resume."

Gerry

Comment 3 Fabio Massimo Di Nitto 2023-05-22 13:24:12 UTC
(In reply to Gerry Sommerville from comment #2)
> Hey Fabio,
> 
> Sorry for the delay.

No worries.

> 
> FYI I noticed this problem as part of a historical review of past Db2
> customer cases, where connectivity is lost without any other indication of
> network problems. Unfortunately I do not have direct access to the clusters
> that have hit this issue, so I do not have details about the VMs in those
> clusters. In general, these clusters are two-node clusters with Azure
> fencing configured. I believe many of these clusters are deployed in
> European regions based on the customers location, but I have no conclusive
> diagnostics stating the exact region. For these reasons, I was looking for
> more general guidance/suggestions for running Pacemaker/Corosync clusters on
> Azure in addition to question 1 above.

I understand, and if we need to do proper debugging, we will need that information.

> 
> Changing the log level in Corosync.conf sounds doable, is there any
> impact/considerations to keeping debug logging on for a long period of time
> (months to years)?

No issue keeping it on; worst case, it will use a bit more disk space for logging.
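
If disk usage does become a concern, the file can be handled by a standard logrotate rule; a minimal sketch, assuming the illustrative log path from the snippet above:

/var/log/cluster/corosync.log {
    weekly
    rotate 12
    compress
    missingok
    notifempty
    # copytruncate avoids having to signal corosync to reopen the file
    copytruncate
}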

> 
> You mention you have run clusters in Azure under load without encountering
> any problems. Have you kept these systems running for long periods of time,
> such as 12-18 months?

No, generally I test for 24/48 hours.

> Wonder if you could provide your system details which
> I can then use to compare against clusters which hit this issue in the
> future.

I am using the latest RHEL 8 or RHEL 9 images, deploying clusters with 4-8 nodes, running tests, and then destroying them.
Usually I run on eastus-1.

> Also do you configure anything from the Azure side to manage VM
> maintenance windows?

No, I do it all in-house.

> It sounds like I will end up reading more about working
> in Azure to debug/mitigate this issue in the future

Probably :)

> 
> For now (I might be jumping the gun here) I suspect this pause maintenance
> from Azure could be the culprit. Specifically this paragraph from the
> following link.
> https://learn.microsoft.com/en-us/azure/virtual-machines/maintenance-and-
> updates#maintenance-that-doesnt-require-a-reboot

That sounds very plausible. Also, any kind of disruption in their internal network can cause similar issues, though we have no access to their logs.

> 
> "When VM impacting maintenance is required it will almost always be
> completed through a VM pause for less than 10 seconds. In rare
> circumstances, no more than once every 18 months for general purpose VM
> sizes, Azure uses a mechanism that will pause the VM for about 30 seconds.
> After any pause operation the VM clock is automatically synchronized upon
> resume."

I guess the best bet would be to check their scheduled maintenance windows against the time of the event.
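
One way to do that correlation from inside the VM is the Azure Instance Metadata Service "Scheduled Events" endpoint, which lists impending platform maintenance (a minimal sketch; the endpoint and api-version are taken from Azure's public IMDS documentation):

# Query impending maintenance events from the link-local IMDS address
curl -s -H "Metadata: true" "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"

Note that, per Azure's documentation, the first call to this endpoint implicitly enables the feature and may take up to a couple of minutes to respond.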

Cheers
Fabio

Comment 5 RHEL Program Management 2023-09-22 20:15:08 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 6 RHEL Program Management 2023-09-22 20:15:50 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of two footprints next to it and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

