Bug 1102057

Summary: If rdma transport is used in cluster.conf, cman is unable to start after shutdown.
Product: Red Hat Enterprise Linux 6
Component: corosync
Version: 6.7
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Reporter: Jan Friesse <jfriesse>
Assignee: Jan Friesse <jfriesse>
QA Contact: cluster-qe <cluster-qe>
CC: ccaulfie, cluster-maint, fdinitto, jfriesse, jkortus, rpeterso, teigland, zheka
Target Milestone: rc
Keywords: OtherQA
Doc Type: Bug Fix
Type: Bug
Clone Of: 1001210
Last Closed: 2015-08-11 13:49:10 UTC

Comment 1 Jan Friesse 2014-06-02 09:12:46 UTC
(In reply to Yevheniy Demchenko from comment #47)
> (In reply to Jan Friesse from comment #44)
> 
> > > I could do some real-life tests for you, but I'm almost sure that corosync
> > > won't work on IB if it hits a message larger than the active port MTU.
> > 
> > I'm not sure that I understand you at this point. Corosync internally
> > fragments (and reassembles if needed) packets, so it should be able to
> > send/receive packets of arbitrary length. The problem is if
> > configured_mtu > port_mtu.
>  
> Sorry, I didn't express myself clearly: corosync does fragment _messages_ in
> the upper layers so that _packets_ won't exceed the configured net MTU;
> problems will occur if the packet (not message) size is larger than the
> active port MTU.
> 

Exactly.
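
For illustration only, here is a minimal standalone sketch of the check being
discussed: a packet built from a fragment of the configured net MTU plus the
per-packet overhead must still fit into the active IB port MTU. The function
name and the 98-byte overhead are assumptions taken from this thread, not
actual corosync code.

#include <stddef.h>
#include <stdio.h>

/* Assumed per-packet overhead on top of the configured net MTU; the value 98
 * comes from the discussion in this thread, not from the corosync sources. */
#define ASSUMED_PACKET_OVERHEAD 98

/* Return 0 if a fragment of size net_mtu plus the overhead still fits into
 * the active IB port MTU, -1 otherwise. */
static int check_netmtu(size_t net_mtu, size_t active_port_mtu)
{
        if (net_mtu + ASSUMED_PACKET_OVERHEAD > active_port_mtu) {
                fprintf(stderr,
                    "netmtu %zu + overhead %d exceeds port MTU %zu\n",
                    net_mtu, ASSUMED_PACKET_OVERHEAD, active_port_mtu);
                return -1;
        }
        return 0;
}

int main(void)
{
        /* Example: netmtu 2044 against a 2048-byte IB port MTU does not fit. */
        return check_netmtu(2044, 2048) == 0 ? 0 : 1;
}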

> > Anyway, to have something in the release, I will change the constant in
> > "instance->totem_config->net_mtu + 160" to 98 as you proposed. We cannot
> > change the structures of sent packets in flatiron anyway, so this will just
> > be a MAGIC constant (and hopefully makes IBA work almost correctly).
> > 
> 
> I'm not a C guru, so I wonder: maybe it makes sense to determine precisely
> the size of (struct mcast) at configure time and pass it to totemiba.c as a
> -Dconstant?

That would be possible (even if a little hacky), but it doesn't solve the problem with more than 2 rings. These are not supported (for now), but we may want to support them later, and then this solution would fail completely.
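
As a rough sketch of the -Dconstant idea above (the file name, the macro name
and the struct body below are hypothetical; the real struct mcast lives in
corosync internals and is only stubbed here), a configure-time probe could
print the header size and the build could feed it back to totemiba.c:

/* probe_mcast_size.c -- hypothetical configure-time probe.
 * A real probe would include the header that defines struct mcast;
 * the stub below only stands in for it. */
#include <stdio.h>

struct mcast {                  /* stub, NOT the real corosync definition */
        unsigned char header[98];
};

int main(void)
{
        /* The build system would capture this output and pass it on, e.g.
         *   -DMCAST_HEADER_SIZE=$(./probe_mcast_size)
         */
        printf("%zu\n", sizeof(struct mcast));
        return 0;
}

As noted above, this still bakes a single packet layout into one compile-time
constant, which is exactly what breaks once more than two rings need to be
supported.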

> 
> > I will open a new BZ for 6.7 as a next step and I will try to solve that
> > problem more systematically.
> 
> We have some ideas about better MTU handling, but they require changes in
> other parts of the corosync code.

Cool. Can you please share them?

Regards,
  Honza

Comment 2 Jan Friesse 2014-11-18 09:22:34 UTC
Yevheniy,
as 6.6 is out with all the patches from https://bugzilla.redhat.com/show_bug.cgi?id=1001210, do you plan/want to make additional changes to the RDMA transport in corosync?

Comment 3 Jan Friesse 2014-12-08 15:33:18 UTC
Yevheniy,
any news/opinions on this BZ?

Kind regards,
  Honza

Comment 5 Yevheniy Demchenko 2015-01-26 18:20:01 UTC
Jan,
for now we are delaying our work on IB in corosync. The reason is that, despite reasonably good IB support in corosync itself, DLM still uses the IPoIB communication layer. While that is not a problem in single-channel configurations, any redundant network configuration in corosync switches DLM to SCTP instead of TCP, and SCTP drastically slows down clustered filesystems, making them unusable for our projects. As we cannot rely on a single network channel in our clusters, the only solution for us is to use IPoIB bonding instead of native RDMA with corosync network redundancy.

On the other hand, the patched RDMA in corosync seems to work flawlessly on our test systems, and unless someone comes up with another bug, I see no reason to make any additional changes to the IB code right now.
Regards,
Yevheniy.
P.S. Sorry for the delayed reaction.
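
For context, the IPoIB bonding mentioned above would typically be set up with RHEL 6 network scripts along these lines (device names and addresses are placeholders, not the reporter's setup; the bonding driver supports only active-backup mode for IPoIB):

# /etc/sysconfig/network-scripts/ifcfg-bond0  (placeholder address)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.11
NETMASK=255.255.255.0
BONDING_OPTS="mode=active-backup miimon=100 primary=ib0"

# /etc/sysconfig/network-scripts/ifcfg-ib0  (repeat for ib1)
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes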

Comment 6 Jan Friesse 2015-02-27 15:50:49 UTC
RHEL 6.7 capacity constrained. Moving to 6.8.

Comment 7 Jan Friesse 2015-08-11 13:49:10 UTC
IB support in corosync seems to be good enough for the reporter, so closing this bug.