Description of problem:
We configured a health monitor for a pool with the PING method, but the health monitor sends TCP messages to destination port 8080 (our pool is configured to load-balance TCP on port 8080).

Version-Release number of selected component (if applicable):
python-neutron-lbaas-2015.1.2-1.el7ost.noarch
openstack-neutron-lbaas-2015.1.2-1.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create an LB, pool, VIP, and members
2. Create a health monitor with the PING method
3. Watch the traffic the LB sends to the members (VMs)

Actual results:
The LB sends non-ICMP (TCP) messages.

Expected results:
The LB sends ICMP echo requests.

Additional info:
https://bugs.launchpad.net/neutron/+bug/1315820/comments/3
Can you reproduce this on LBaaS v2? We're not going to fix non-critical bugs in LBaaS v1 at this point.
(In reply to Assaf Muller from comment #3)
> Can you reproduce this on LBaaS v2? We're not going to fix non-critical bugs
> in LBaaS v1 at this point.

It looks like it is reproduced with LBaaS v2:

[root@controller ~(keystone_admin)]# neutron lbaas-healthmonitor-list
+--------------------------------------+------+----------------+
| id                                   | type | admin_state_up |
+--------------------------------------+------+----------------+
| d2afbd7f-e505-4829-9821-bd4b38015e3e | PING | True           |
+--------------------------------------+------+----------------+

[root@controller ~(keystone_admin)]# ip netns exec qlbaas-4abb9505-ab1f-4bf0-98f7-a5a36715a1c9 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
9: tapda6ed06f-1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:a8:6b:1f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.3/24 brd 192.168.1.255 scope global tapda6ed06f-1b
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fea8:6b1f/64 scope link
       valid_lft forever preferred_lft forever

[root@controller ~(keystone_admin)]# ip netns exec qlbaas-4abb9505-ab1f-4bf0-98f7-a5a36715a1c9 tcpdump -vnni tapda6ed06f-1b
tcpdump: listening on tapda6ed06f-1b, link-type EN10MB (Ethernet), capture size 65535 bytes
11:16:08.692042 IP (tos 0x0, ttl 64, id 19394, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.1.3.40070 > 192.168.1.4.80: Flags [S], cksum 0x8386 (incorrect -> 0xdbb6), seq 2219805774, win 29200, options [mss 1460,sackOK,TS val 31508364 ecr 0,nop,wscale 7], length 0
11:16:08.692522 IP (tos 0x0, ttl 64, id 53130, offset 0, flags [DF], proto TCP (6), length 40)
    192.168.1.4.80 > 192.168.1.3.40070: Flags [R.], cksum 0x7f03 (correct), seq 0, ack 2219805775, win 0, length 0
11:16:11.693901 IP (tos 0x0, ttl 64, id 28126, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.1.3.40071 > 192.168.1.4.80: Flags [S], cksum 0x8386 (incorrect -> 0x7a21), seq 2816327323, win 29200, options [mss 1460,sackOK,TS val 31511365 ecr 0,nop,wscale 7], length 0
11:16:11.694388 IP (tos 0x0, ttl 64, id 53416, offset 0, flags [DF], proto TCP (6), length 40)
    192.168.1.4.80 > 192.168.1.3.40071: Flags [R.], cksum 0x2927 (correct), seq 0, ack 2816327324, win 0, length 0
11:16:14.695176 IP (tos 0x0, ttl 64, id 60179, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.1.3.40072 > 192.168.1.4.80: Flags [S], cksum 0x8386 (incorrect -> 0x706a), seq 780744172, win 29200, options [mss 1460,sackOK,TS val 31514367 ecr 0,nop,wscale 7], length 0
11:16:14.695623 IP (tos 0x0, ttl 64, id 53615, offset 0, flags [DF], proto TCP (6), length 40)
    192.168.1.4.80 > 192.168.1.3.40072: Flags [R.], cksum 0x2b2a (correct), seq 0, ack 780744173, win 0, length 0
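For clarity: the repeated SYN -> RST exchanges in the capture above are exactly the signature of a plain TCP connect probe, not an ICMP echo. A minimal sketch of such a probe in Python (a hypothetical helper for illustration, not the haproxy code; haproxy implements the same idea in C):

```python
import socket

def tcp_connect_check(host, port, timeout=3.0):
    """Plain TCP connect probe: the member counts as 'up' if the
    three-way handshake completes. An RST in response (as seen in
    the tcpdump output) means the check fails."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This is what the monitor effectively does despite being configured as PING; a real ICMP echo would require a raw socket and CAP_NET_RAW.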
It looks like haproxy doesn't support ping health checks, and the API doesn't bother to let the user know: it accepts the request and returns success, but then does something the user didn't intend, namely using TCP for health checks instead of ICMP.
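For illustration, a PING monitor ends up rendered as an ordinary `check` option on the server lines of the generated haproxy config, which in haproxy means a plain TCP connect test against the member's port. A sketch of such a backend section (names and timings are made up, not taken from the actual generated config):

```
backend 4abb9505-ab1f-4bf0-98f7-a5a36715a1c9
    mode tcp
    timeout check 3s
    server member-1 192.168.1.4:80 check inter 5s fall 3
```

With `check` and no further check-type options, haproxy only verifies that the TCP handshake completes, which matches the SYN/RST traffic seen in the capture.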
Due to the team's capacity and prioritization, I'd like to close this bug. The focus now is on Octavia and critical LBaaS haproxy bugs. I'd rather be truthful than let this bug sit around for years.