
Bug 1261410

Summary: Document connecting LVS VIP from LVS node/real server (Load Balancer)
Product: Red Hat Enterprise Linux 7
Reporter: Marko Myllynen <myllynen>
Component: doc-Load_Balancer_Administration
Assignee: Steven J. Levine <slevine>
Status: CLOSED CURRENTRELEASE
QA Contact: Brandon Perkins <bperkins>
Severity: unspecified
Docs Contact:
Priority: medium
Version: 7.1
CC: cluster-maint, myllynen, rohara
Target Milestone: rc
Keywords: Documentation, Reopened
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-13 22:20:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Marko Myllynen 2015-09-09 09:23:53 UTC
Description of problem:
With the following trivial configuration

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.122.222
    }
}

virtual_server 192.168.122.222 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.122.119 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

when trying to contact the HTTP service on the LVS VIP from a host on the same subnet that is not part of the LVS setup (.185 below), it works as expected. However, when trying to connect to the LVS VIP from the LVS node or from the real server, the connection attempt fails and the connection remains in SYN_RECV state in IPVS:

# ipvsadm -lnc
IPVS connection entries
pro expire state       source             virtual            destination
TCP 01:31  FIN_WAIT    192.168.122.185:46905 192.168.122.222:80 192.168.122.119:80
TCP 00:46  SYN_RECV    192.168.122.119:48096 192.168.122.222:80 192.168.122.119:80
TCP 00:46  SYN_RECV    192.168.122.222:55727 192.168.122.222:80 192.168.122.119:80

Is this a bug or a feature? Is there any way around this?

Version-Release number of selected component (if applicable):
keepalived-1.2.13-6.el7.x86_64
kernel-3.10.0-229.11.1.el7.x86_64

Comment 2 Marko Myllynen 2015-09-09 10:15:37 UTC
tcpdump shows packets in/out on both nodes when doing lynx -dump 192.168.122.222 from the realserver. 52:54:00:43:8a:90 is the realserver and 52:54:00:fb:64:d2 is the LVS node.

LVS node:

# tcpdump -envi any port 80
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
13:09:59.393637  In 52:54:00:43:8a:90 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44397, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x99a9 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7406558 ecr 0,nop,wscale 7], length 0
13:09:59.393706 Out 52:54:00:fb:64:d2 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44397, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x99a9 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7406558 ecr 0,nop,wscale 7], length 0
13:10:00.393186  In 52:54:00:43:8a:90 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44398, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x95c1 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7407558 ecr 0,nop,wscale 7], length 0
13:10:00.393241 Out 52:54:00:fb:64:d2 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44398, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x95c1 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7407558 ecr 0,nop,wscale 7], length 0
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel

realserver:

# tcpdump -envi any port 80
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
13:10:01.497485 Out 52:54:00:43:8a:90 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44397, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x99a9 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7406558 ecr 0,nop,wscale 7], length 0
13:10:01.498096  In 52:54:00:fb:64:d2 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44397, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x99a9 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7406558 ecr 0,nop,wscale 7], length 0
13:10:02.497045 Out 52:54:00:43:8a:90 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44398, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x95c1 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7407558 ecr 0,nop,wscale 7], length 0
13:10:02.497879  In 52:54:00:fb:64:d2 ethertype IPv4 (0x0800), length 76: (tos 0x0, ttl 64, id 44398, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.122.119.48162 > 192.168.122.222.http: Flags [S], cksum 0x95c1 (correct), seq 2402463411, win 14600, options [mss 1460,sackOK,TS val 7407558 ecr 0,nop,wscale 7], length 0
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel

The standard DR iptables rule is in place on the realserver; there are no other rules anywhere:

# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
REDIRECT   tcp  --  0.0.0.0/0            192.168.122.222     tcp dpt:80

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
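
For reference, a PREROUTING entry like the one listed above would typically be created with a command along these lines (a sketch reconstructed from the listing, using the VIP from this bug's configuration; run as root):

```shell
# Redirect traffic arriving for the VIP on port 80 to a local port
# on this realserver (the standard LVS-DR realserver arrangement
# when the VIP is not configured on a local interface).
iptables -t nat -A PREROUTING -p tcp -d 192.168.122.222 --dport 80 -j REDIRECT
```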

Thanks.

Comment 3 Ryan O'Hara 2015-09-09 13:54:51 UTC
This is not a bug. If you want to access the VIP from either the director or one of the real servers, you must do some manual configuration. See the LVS HOWTO for details.

http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.lvs_clients_on_realservers.html
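
As a rough illustration of the kind of manual configuration involved (a sketch only; whether this matches the HOWTO's recommended method for this exact topology is an assumption, and per the comments below none of this is supported): one direction such documents describe is forcing VIP-bound traffic from a realserver through the director instead of letting it short-circuit, e.g. with a host route. `<director-ip>` below is a placeholder for the LVS node's own (non-VIP) address, which is not given in this bug.

```shell
# On the realserver (hypothetical; run as root):
# send locally originated traffic for the VIP to the director,
# so it traverses the normal LVS path rather than looping.
ip route add 192.168.122.222/32 via <director-ip> dev eth0
```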

Comment 4 Marko Myllynen 2015-09-10 05:07:02 UTC
(In reply to Ryan O'Hara from comment #3)
> This is not a bug. If you want to access the VIP from either the director or
> one of the real servers, you must do some manual configuration. See the LVS
> HOWTO for details.
> 
> http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.
> lvs_clients_on_realservers.html

Thanks, I had a suspicion about this but missed the document.

I think we should have a few words about this in our guide, changing component.

Comment 5 Steven J. Levine 2015-11-06 15:49:11 UTC
So that this doesn't block 7.2, I'm noting this as 7.3, but we can update this on the Portal at any time.

Comment 6 Steven J. Levine 2017-01-05 21:09:29 UTC
This is a bug from long ago that I'm looking at, but I need some help addressing this.

It looks as though we can address this bug by adding a section to Chapter 4 here:

http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-initial-setup-VSA.html

But I'm not sure what to take from the link in comment 3 to address what Marko says in comment 4: that the Load Balancer manual should say a "few words" about this issue. Which section of that HOWTO should be summarized in the Load Balancer document, and do we need to provide an actual example of how to work around this issue?

Comment 7 Ryan O'Hara 2017-01-12 22:57:25 UTC
We've never supported this as far as I know, just like we've never supported having the LVS node(s) on the same machines as the real servers. I don't know what you want to say about this other than that we don't support it.

Comment 8 Steven J. Levine 2017-02-08 16:28:34 UTC
Marko:  This is an ancient BZ but I'm just trying to clear it away.

The request is for the documentation to mention the issue of connecting to the LVS VIP from an LVS real server -- that this requires some workaround. But Ryan notes that we don't support this anyway, so I'm not sure what, if anything, would be helpful to mention in the doc. In general we don't enumerate all the various support issues -- that is dealt with by GSS during the planning/review stage -- although that's not a hard and fast rule.

I think I can just close this out since we don't support this, but if you have a suggestion of what I should add to the document here I'll add that.

Comment 9 Marko Myllynen 2017-02-08 16:48:09 UTC
(In reply to Steven J. Levine from comment #8)
> Marko:  This is an ancient BZ but I'm just trying to clear it away.
> 
> The request is for the documentation to mention the issue of connecting LVS
> VIP from LVS real server -- that this requires some workaround. But Ryan
> notes that we don't support this anyway, so I'm not sure what, if anything,
> would be helpful to mention in the doc.  In general we don't enumerate all
> the various support issues -- that is dealt with by gss, during the
> planning/review stage -- although that's not a hard and fast rule. 
> 
> I think I can just close this out since we don't support this, but if you
> have a suggestion of what I should add to the document here I'll add that.

The doc Ryan shared is old and lists all sorts of hacks; for the uninitiated it's unclear what the support situation might be today. I'd perhaps add something brief like this, for example before the table in http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-initial-setup-VSA.html:

-----

Note that accessing the virtual IP from the load balancers or from one of the real servers is not supported. Likewise, having a load balancer on the same machine as a real server is not supported.

-----

Thanks.

Comment 14 Steven J. Levine 2017-02-13 22:20:46 UTC
The new note is in the copy of the document on the Portal, just before Table 4.1:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Load_Balancer_Administration/ch-initial-setup-VSA.html#s1-initial-setup-conf-VSA

Comment 15 Steven J. Levine 2017-03-01 15:47:50 UTC
Marko: Previous misclassification of "NOTABUG" was a total slip of the menu-click. Thanks for correcting.