Description of problem:
When we fixed https://bugzilla.redhat.com/show_bug.cgi?id=2053609, we ended up deleting the conntrack entries for services before the service flows and iptables rules were removed.
To be safer, the conntrack entries should be removed after the service flows and rules, to ensure the entries don't get recreated by in-flight traffic.
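The safer ordering can be sketched with plain commands. This is a minimal dry-run sketch; the chain name, rule spec, and addresses are illustrative examples taken from the reproduction below, not the actual ovn-kubernetes implementation:

```shell
#!/bin/sh
# Dry-run sketch of the safer service-teardown order. run() echoes each
# command instead of executing it (the real commands need root plus the
# iptables/conntrack tools), so only the ordering is demonstrated here.
run() { echo "+ $*"; }

VIP=172.30.208.240   # example clusterIP of the service being deleted

# 1. Remove the service's iptables rules (and OVS/OVN flows) first, so no
#    new packet can match the service and repopulate conntrack.
run iptables -t nat -D OVN-KUBE-EXTERNALIP -d "$VIP" -p sctp -j DNAT --to-destination 10.129.2.8:30102
# 2. Only then delete the conntrack entries for the VIP; with the rules
#    gone, in-flight packets can no longer recreate them.
run conntrack -D --orig-dst "$VIP"
```

Deleting in the opposite order leaves a window in which a packet hitting the still-present DNAT rule recreates the conntrack entry that was just flushed.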
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Following the verification steps in https://bugzilla.redhat.com/show_bug.cgi?id=2053609#c14, I cannot see the entry with the DNAT from externalIP 172.31.249.55 to clusterIP 172.30.208.240 in 4.12.0-0.nightly-2022-08-15-150248.
[weliang@weliang ~]$ oc get all -o wide -n test-sctp
NAME             READY   STATUS    RESTARTS   AGE     IP            NODE                              NOMINATED NODE   READINESS GATES
pod/sctpclient   1/1     Running   0          5h36m   10.128.2.16   weliang-817a-w4wnl-worker-9d8ks   <none>           <none>
pod/sctpserver   1/1     Running   0          5h36m   10.129.2.8    weliang-817a-w4wnl-worker-nqz6f   <none>           <none>

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)            AGE     SELECTOR
service/sctpservice   LoadBalancer   172.30.208.240   172.31.249.55   30102:30741/SCTP   5h36m   name=sctpserver
## streaming traffic and checking entries simultaneously
[weliang@weliang ~]$ oc debug node/weliang-817a-w4wnl-worker-9d8ks
sh-4.4# chroot /host
sh-4.4# conntrack -E -p sctp
[NEW] sctp 132 3 src=10.128.2.16 dst=172.31.249.55 sport=40919 dport=30102 [UNREPLIED] src=10.129.2.8 dst=10.128.2.16 sport=30102 dport=40919 mark=2 zone=24
^Cconntrack v1.4.4 (conntrack-tools): 2 flow events have been shown.
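For reference, the DNAT in such an event line can be read by comparing the original tuple's dst with the reply tuple's src. A small parsing sketch over the sample line above (pure text processing, safe to run anywhere; the field extraction is my own, not part of conntrack-tools):

```shell
#!/bin/sh
# Sample conntrack event from the session above: original tuple first,
# then the reply tuple after [UNREPLIED].
line='[NEW] sctp 132 3 src=10.128.2.16 dst=172.31.249.55 sport=40919 dport=30102 [UNREPLIED] src=10.129.2.8 dst=10.128.2.16 sport=30102 dport=40919 mark=2 zone=24'

# Original destination = the address the client targeted (externalIP here).
orig_dst=$(echo "$line" | grep -o 'dst=[0-9.]*' | head -1 | cut -d= -f2)
# Reply source = the address that will answer; if it differs from orig_dst,
# a DNAT took place (here, to the sctpserver pod IP).
reply_src=$(echo "$line" | sed 's/.*\[UNREPLIED\] //' | grep -o 'src=[0-9.]*' | cut -d= -f2)

echo "original dst: $orig_dst, reply src: $reply_src"
```

Since conntrack tables are per-node, the entry is only visible on the node the traffic actually traverses, which is what the verification below hinged on.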
Working with Weibin on this one: we were using the wrong verification steps, so the previous message can be ignored.
Verified in 4.12.0-0.nightly-2022-08-31-101631 after checking the conntrack entry on the correct node.
Thanks Surya for helping debug the issue.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Moderate: OpenShift Container Platform 4.12.0 bug fix and security update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.