This issue might be quite difficult to reproduce; it was originally reported against Podman at https://github.com/containers/podman/issues/17703.

With pasta (or 'podman run --net=pasta ...') running with the default MTU of 65520 bytes, and a TCP transfer in progress (e.g. an HTTP download), the sender can hit a timing where pasta tries to transfer two frames of 65534 bytes (65500 to 65520 bytes of payload each) over the tap device at once. This triggers an error in TCP segmentation: pasta keeps re-transferring the same (bad) frame over and over without dequeueing it from the socket receive buffer, CPU utilisation spikes, and the download never succeeds.

This is quite critical: once the condition is hit, the affected TCP transfer fails completely and CPU utilisation stays high.

I would recommend a SanityOnly verification: I couldn't reproduce this on my local setup and had to rely on a setup provided by the bug reporter.

We need this upstream fix:
https://passt.top/passt/commit/?id=d7272f1df89c099a7e98ae43d1ef9b936c7e46f7
    tcp: Clamp MSS value when queueing data to tap, also for pasta
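For context, the gist of the fix (as I understand it; the sketch below is illustrative only, not the actual passt code) is that the amount of TCP payload queued to the tap device has to be clamped to an MSS derived from the configured MTU, so a single segment can never grow past what fits in one frame. The header sizes and function name here are assumptions made for the sake of the example:

#include <stddef.h>

/* Illustrative sketch, not the actual passt code: clamp the TCP
 * payload queued to the tap device so one segment never exceeds what
 * the configured MTU allows (MTU minus minimal IPv4 and TCP headers).
 * Header sizes below are assumptions for this example. */
#define IP4_HDR_LEN	20
#define TCP_HDR_LEN	20

static size_t tap_payload_clamp(size_t mtu, size_t len)
{
	size_t mss = mtu - IP4_HDR_LEN - TCP_HDR_LEN;

	return len > mss ? mss : len;
}

/* With this clamp in place, a large chunk read from the socket gets
 * split into MTU-sized segments instead of being queued as a single
 * oversized frame (e.g. with mtu = 65520 and the header sizes above,
 * mss works out to 65480 in this sketch). */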
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (passt bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2292