Bug 1327662
| Summary: | portblock resource-agent does not send tickling TCP packets to a client connected on the port specified by portblock's option |
|---|---|
| Product: | Red Hat Enterprise Linux 6 |
| Reporter: | Miroslav Lisik <mlisik> |
| Component: | resource-agents |
| Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA |
| QA Contact: | Miroslav Lisik <mlisik> |
| Severity: | unspecified |
| Priority: | high |
| Version: | 6.8 |
| CC: | agk, cfeist, cluster-maint, fdinitto, mkolaja, oalbrigt, royoung, salmy |
| Target Milestone: | rc |
| Keywords: | ZStream |
| Target Release: | --- |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Fixed In Version: | resource-agents-3.9.5-41.el6 |
| Doc Type: | Bug Fix |
| Doc Text: | Cause: tickle_tcp fails to send tickle TCP packets. Consequence: the new portblock resource agent isn't working as it should. Fix: remove the htons() wrapper from "htons(IPPROTO_RAW)". Result: tickle_tcp sends tickle TCP packets. |
| Clones: | 1329547 1337109 |
| Last Closed: | 2017-03-21 09:27:35 UTC |
| Type: | Bug |
| Bug Blocks: | 1329547 |
Tested and working patch: https://github.com/ClusterLabs/resource-agents/pull/789

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0602.html
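For reference, here is a minimal illustrative sketch of the failure mode the Doc Text describes (this is not the actual tickle_tcp source; see the pull request above for the real one-line change). socket(2) takes its protocol argument in host byte order. IPPROTO_RAW is 255, and on a little-endian host htons(255) yields 65280, which is outside the valid 0-255 protocol range, so the kernel rejects the call with EINVAL, the "Invalid argument" error shown in the Additional info below. On big-endian hardware htons() is a no-op, so the buggy call works there.

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>      /* htons() */
#include <netinet/in.h>     /* IPPROTO_RAW */
#include <sys/socket.h>

int main(void)
{
    /* Buggy call: htons() byte-swaps the constant.  On a
     * little-endian host htons(IPPROTO_RAW) == htons(255) == 65280,
     * which is not a valid IP protocol number, so socket() fails
     * with EINVAL ("Invalid argument"). */
    int bad = socket(AF_INET, SOCK_RAW, htons(IPPROTO_RAW));
    printf("with htons():    fd=%d (%s)\n", bad,
           bad < 0 ? strerror(errno) : "ok");

    /* Fixed call: pass the protocol in host byte order, as
     * socket(2) expects.  Requires CAP_NET_RAW (run as root) to
     * succeed. */
    int good = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    printf("without htons(): fd=%d (%s)\n", good,
           good < 0 ? strerror(errno) : "ok");

    if (bad >= 0) close(bad);
    if (good >= 0) close(good);
    return 0;
}
```

Run as root, the first call fails with "Invalid argument" on x86_64 (as in this report) while the second succeeds.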
Description of problem:
Portblock resource-agent does not send tickling TCP packets to a client connected on the port specified by portblock's option.

Version-Release number of selected component (if applicable):
resource-agents-3.9.5-34.el6.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a pacemaker resource group with a Virtual IP address:

[root@virt-263 ~]# pcs config
Cluster Name: Cluster
Corosync Nodes:
 virt-263 virt-267 virt-274
Pacemaker Nodes:
 virt-263 virt-267 virt-274

Resources:
 Group: G
  Resource: vip (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=10.34.70.84 cidr_netmask=23
   Operations: start interval=0s timeout=20s (vip-start-interval-0s)
               stop interval=0s timeout=20s (vip-stop-interval-0s)
               monitor interval=10s timeout=20s (vip-monitor-interval-10s)

Stonith Devices:
 Resource: fence-virt-263 (class=stonith type=fence_xvm)
  Attributes: action=reboot debug=1 pcmk_host_check=static-list pcmk_host_list=virt-263 pcmk_host_map=virt-263:virt-263.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-263-monitor-interval-60s)
 Resource: fence-virt-267 (class=stonith type=fence_xvm)
  Attributes: action=reboot debug=1 pcmk_host_check=static-list pcmk_host_list=virt-267 pcmk_host_map=virt-267:virt-267.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-267-monitor-interval-60s)
 Resource: fence-virt-274 (class=stonith type=fence_xvm)
  Attributes: action=reboot debug=1 pcmk_host_check=static-list pcmk_host_list=virt-274 pcmk_host_map=virt-274:virt-274.cluster-qe.lab.eng.brq.redhat.com
  Operations: monitor interval=60s (fence-virt-274-monitor-interval-60s)
Fencing Levels:

Location Constraints:
  Resource: G
    Enabled on: virt-263 (score:INFINITY) (id:location-G-virt-263-INFINITY)
Ordering Constraints:
Colocation Constraints:

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.14-8.el6-70404b0
 have-watchdog: false
 last-lrm-refresh: 1460724654

2. On the node holding the Virtual IP address (virt-263), create a TCP listening port with nc:
[root@virt-263 ~]# nc -k -l 10.34.70.84 5000

3. From another node, connect with nc to the listening port:
[root@virt-257 ~]# nc -p 6000 10.34.70.84 5000

4. Run tcpdump on the client node (virt-257):
[root@virt-257 ~]# tcpdump -i any -nnS port 5000 and 'src 10.34.70.84'

5. Create a portblock resource with these settings:
[root@virt-263 ~]# pcs resource create port_unblock portblock protocol=tcp portno=5000 action=unblock ip=10.34.70.84 tickle_dir=/tmp/tickle --group G

6. Look at the output of the tcpdump command on the client node (virt-257):
[root@virt-257 ~]# tcpdump -i any -nnS port 5000 and 'src 10.34.70.84'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes

Actual results:
No tickling TCP packets are sent to the client side of the connection.

Expected results:
Tickling TCP packets are sent to the client side of the established connection when the portblock resource starts with the settings described in "Steps to Reproduce".

Additional info:
The portblock resource-agent uses the utility "/usr/libexec/heartbeat/tickle_tcp".
If the resource is configured with the parameters ip=10.34.70.84, protocol=tcp, action=unblock and tickle_dir=/tmp/tickle, the resource-agent should invoke the utility like this:

/usr/libexec/heartbeat/tickle_tcp -n 3 < /tmp/tickle/10.34.70.84

Current result of the command:

[root@virt-263 ~]# /usr/libexec/heartbeat/tickle_tcp -n 3 < /tmp/tickle/10.34.70.84
Failed to open raw socket (Invalid argument)
Error while sending tickle ack from '10.34.70.84:5000' to '10.34.71.128:6000'
[root@virt-263 ~]# echo $?
255
[root@virt-263 ~]# cat /tmp/tickle/10.34.70.84
10.34.70.84:5000 10.34.71.128:6000
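To make the expected behavior concrete, below is a hedged sketch of the "tickle ACK" technique itself; the real tickle_tcp implementation differs, and this code only assumes Linux raw sockets, CAP_NET_RAW, and the tickle file format shown above ("SRC_IP:SRC_PORT DST_IP:DST_PORT" per line). The sender forges a bare TCP ACK with a bogus sequence number from the server endpoint to the client endpoint; a client holding an established connection must answer an out-of-window ACK with an ACK carrying its real sequence numbers (RFC 793), and that reply is the packet tcpdump should have captured in step 6.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* RFC 1071 Internet checksum over an arbitrary buffer. */
static uint16_t csum(const void *data, size_t len)
{
    const uint16_t *p = data;
    uint32_t sum = 0;
    while (len > 1) { sum += *p++; len -= 2; }
    if (len) sum += *(const uint8_t *)p;
    while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Pseudo-header that the TCP checksum is computed over. */
struct pseudo {
    uint32_t src, dst;
    uint8_t  zero, proto;
    uint16_t tcp_len;
};

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); /* note: no htons() */
    if (s < 0) { perror("socket"); return 1; }

    /* Read "SRC_IP:SRC_PORT DST_IP:DST_PORT" lines from stdin,
     * matching the tickle file format shown above. */
    char line[128], src[64], dst[64];
    unsigned sport, dport;
    while (fgets(line, sizeof line, stdin)) {
        if (sscanf(line, "%63[^:]:%u %63[^:]:%u",
                   src, &sport, dst, &dport) != 4)
            continue;

        unsigned char pkt[sizeof(struct iphdr) + sizeof(struct tcphdr)];
        memset(pkt, 0, sizeof(pkt));
        struct iphdr  *ip  = (struct iphdr *)pkt;
        struct tcphdr *tcp = (struct tcphdr *)(pkt + sizeof(*ip));

        ip->version  = 4;
        ip->ihl      = 5;
        ip->ttl      = 64;
        ip->protocol = IPPROTO_TCP;
        ip->saddr    = inet_addr(src);
        ip->daddr    = inet_addr(dst);
        /* With IPPROTO_RAW the kernel fills in tot_len, id and the
         * IP header checksum for us (see raw(7)). */

        tcp->source = htons((uint16_t)sport);
        tcp->dest   = htons((uint16_t)dport);
        tcp->seq    = 0;        /* deliberately bogus sequence number */
        tcp->doff   = 5;
        tcp->ack    = 1;        /* bare ACK, no payload */
        tcp->window = htons(1234);

        /* TCP checksum covers pseudo-header + TCP header. */
        struct pseudo ph = { ip->saddr, ip->daddr, 0, IPPROTO_TCP,
                             htons(sizeof(*tcp)) };
        unsigned char buf[sizeof(ph) + sizeof(*tcp)];
        memcpy(buf, &ph, sizeof(ph));
        memcpy(buf + sizeof(ph), tcp, sizeof(*tcp));
        tcp->check = csum(buf, sizeof(buf));

        struct sockaddr_in to;
        memset(&to, 0, sizeof(to));
        to.sin_family = AF_INET;
        to.sin_addr.s_addr = ip->daddr;
        if (sendto(s, pkt, sizeof(pkt), 0,
                   (struct sockaddr *)&to, sizeof(to)) < 0)
            perror("sendto");
    }
    close(s);
    return 0;
}
```

Compiled and run as root with the tickle file on stdin (for example, `./tickle_sketch < /tmp/tickle/10.34.70.84`, a hypothetical binary name), the client's ACK reply should show up in the tcpdump capture from step 4.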