Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", will have a little "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will also be available in a blue banner at the top of the page informing you that the bug has been migrated.
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.
The official life cycle policy can be reviewed here:
http://redhat.com/rhel/lifecycle
This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:
https://access.redhat.com/
Description of problem:
When I use iptables to block UDP 500 on one side and then allow the port again, I end up with 2 tunnels up:

000 #2: "testcon":500 STATE_QUICK_I2 (sent QI2, IPsec SA established); EVENT_SA_REPLACE in 27789s; newest IPSEC; eroute owner; isakmp#1; idle; import:admin initiate
000 #1: "testcon":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 2348s; newest ISAKMP; lastdpd=4s(seq in:17596 out:17595); idle; import:admin initiate

- SNIP (iptables DROP) -

000 #3: "testcon":500 STATE_MAIN_R1 (sent MR1, expecting MI2); EVENT_RETRANSMIT in 5s; lastdpd=-1s(seq in:0 out:0); idle; import:not set
000 #2: "testcon":500 STATE_QUICK_I2 (sent QI2, IPsec SA established); EVENT_SA_REPLACE in 5s; newest IPSEC; eroute owner; isakmp#1; idle; import:admin initiate
000 #4: "testcon":500 STATE_MAIN_R1 (sent MR1, expecting MI2); EVENT_RETRANSMIT in 5s; lastdpd=-1s(seq in:0 out:0); idle; import:not set
000 #5: "testcon":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 5s; nodpd; idle; import:admin initiate
000 #3: "testcon":500 STATE_MAIN_R1 (sent MR1, expecting MI2); EVENT_RETRANSMIT in 15s; lastdpd=-1s(seq in:0 out:0); idle; import:not set
000 #2: "testcon":500 STATE_QUICK_I2 (sent QI2, IPsec SA established); EVENT_SA_EXPIRE in 5s; newest IPSEC; eroute owner; isakmp#1; idle; import:admin initiate

- SNIP (iptables: UDP 500 allowed again) -

000 #11: "testcon":500 STATE_QUICK_I2 (sent QI2, IPsec SA established); EVENT_SA_REPLACE in 27402s; newest IPSEC; eroute owner; isakmp#5; idle; import:admin initiate
000 #5: "testcon":500 STATE_MAIN_I4 (ISAKMP SA established); EVENT_SA_REPLACE in 2211s; newest ISAKMP; lastdpd=2s(seq in:0 out:0); idle; import:admin initiate
000 #10: "testcon":500 STATE_QUICK_R2 (IPsec SA established); EVENT_SA_REPLACE in 28083s; isakmp#9; idle; import:not set
000 #9: "testcon":500 STATE_MAIN_R3 (sent MR3, ISAKMP SA established); EVENT_SA_REPLACE in 2882s; lastdpd=2s(seq in:7619 out:0); idle; import:not set
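The duplicate-tunnel condition can be spotted by counting established phase-2 SAs in the status output. A minimal sketch, assuming the line format shown above; the helper name `count_established` and the embedded sample lines are hypothetical, not part of the report:

```shell
#!/bin/sh
# Count phase-2 (QUICK mode) SAs that pluto reports as established.
# Each "IPsec SA established" marker corresponds to one tunnel;
# a healthy single connection should yield exactly 1.
count_established() {
    grep -c 'IPsec SA established'
}

# Sample lines modeled on the status output above (hypothetical data):
sample='000 #11: "testcon":500 STATE_QUICK_I2 (sent QI2, IPsec SA established); newest IPSEC
000 #5: "testcon":500 STATE_MAIN_I4 (ISAKMP SA established); newest ISAKMP
000 #10: "testcon":500 STATE_QUICK_R2 (IPsec SA established); isakmp#9'

printf '%s\n' "$sample" | count_established   # prints 2 for the sample above
```

On a live system this would be driven as `ipsec auto --status | count_established`; note that "ISAKMP SA established" lines are deliberately not matched, since they describe phase-1 state.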
i:ppc64|m:ppc64 root@ibm-js22-vios-03-lp1 [~]# service ipsec status
IPsec running - pluto pid: 24887
pluto pid 24887
2 tunnels up
some eroutes exist

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Start a tunnel
2. A> while true; do ipsec auto --status | grep STATE; sleep 10; done
3. B> iptables -A INPUT -j DROP -p udp --dport 500; sleep 200; iptables -F INPUT
4. after "STATE_MAIN_I4" on A, ctrl+c, service ipsec status

Actual results:
IPsec running - pluto pid: <PID number>
pluto pid <PID number>
2 tunnels up
some eroutes exist

Expected results:
IPsec running - pluto pid: <PID number>
pluto pid <PID number>
1 tunnels up
some eroutes exist

Additional info:
$ cat /etc/ipsec.conf; echo; cat /etc/ipsec.secrets
version 2.0

config setup
    crlcheckinterval="180"
    strictcrlpolicy=no
    protostack=netkey
    interfaces=%defaultroute
    plutodebug=all

conn testcon
    connaddrfamily=ipv4
    authby=secret
    ike=aes-sha1
    esp=aes-sha1
    left=<IP address #1>
    leftid=<IP address #1>
    right=<IP address #2>
    rightid=<IP address #2>
    dpdaction=restart
    dpddelay=7
    dpdtimeout=30
    auto=add

<IP address #1> <IP address #2> : PSK "secret"
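The dead-peer-detection settings in the config above are likely the trigger for the renegotiation that leaves a second tunnel behind. A minimal conn fragment isolating the relevant options (values copied verbatim from the report; the comments reflect standard Openswan DPD semantics):

```
conn testcon
    dpdaction=restart   # on DPD timeout, tear down the SA and renegotiate
    dpddelay=7          # send R_U_THERE probes every 7 seconds
    dpdtimeout=30       # declare the peer dead after 30 seconds of silence
    auto=add
```

With UDP 500 blocked for 200 seconds (longer than dpdtimeout), DPD fires and `dpdaction=restart` renegotiates once the port reopens; the report suggests the old tunnel state is not always cleaned up in that case.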