Bug 2363682 (CVE-2023-53076) - CVE-2023-53076 kernel: bpf: Adjust insufficient default bpf_jit_limit
Summary: CVE-2023-53076 kernel: bpf: Adjust insufficient default bpf_jit_limit
Keywords:
Status: NEW
Alias: CVE-2023-53076
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-05-02 16:01 UTC by OSIDB Bzimport
Modified: 2025-05-29 06:39 UTC
CC List: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments:

Description OSIDB Bzimport 2025-05-02 16:01:30 UTC
In the Linux kernel, the following vulnerability has been resolved:

bpf: Adjust insufficient default bpf_jit_limit

We've seen recent AWS EKS (Kubernetes) user reports like the following:

  After upgrading EKS nodes from v20230203 to v20230217 on our 1.24 EKS
  clusters, after a few days a number of the nodes have containers stuck
  in the ContainerCreating state, or liveness/readiness probes reporting
  the following error:

    Readiness probe errored: rpc error: code = Unknown desc = failed to
    exec in container: failed to start exec "4a11039f730203ffc003b7[...]":
    OCI runtime exec failed: exec failed: unable to start container process:
    unable to init seccomp: error loading seccomp filter into kernel:
    error loading seccomp filter: errno 524: unknown

  However, we had not been seeing this issue on previous AMIs and it only
  started to occur on v20230217 (following the upgrade from kernel 5.4 to
  5.10) with no other changes to the underlying cluster or workloads.

  We tried the suggestions from that issue (sysctl net.core.bpf_jit_limit=452534528),
  which helped to immediately allow containers to be created and probes to
  execute, but after approximately a day the issue returned and the value
  returned by cat /proc/vmallocinfo | grep bpf_jit | awk '{s+=$2} END {print s}'
  was steadily increasing.
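
For reference (not part of the upstream report), the vmallocinfo pipeline quoted
above can also be expressed as a small userspace C program. It sums the sizes of
all bpf_jit allocations listed in /proc/vmallocinfo so the total can be compared
against the net.core.bpf_jit_limit sysctl; reading /proc/vmallocinfo requires
root.

  /* Sum bpf_jit allocation sizes from /proc/vmallocinfo; equivalent to
   * the awk pipeline quoted above. Illustrative sketch only. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *f = fopen("/proc/vmallocinfo", "r");
      char line[512];
      unsigned long long total = 0;

      if (!f) {
          perror("fopen /proc/vmallocinfo");
          return 1;
      }
      while (fgets(line, sizeof(line), f)) {
          char range[128];
          unsigned long long size;

          if (!strstr(line, "bpf_jit"))
              continue;
          /* Each line starts with "<start>-<end>  <size> <caller> ...". */
          if (sscanf(line, "%127s %llu", range, &size) == 2)
              total += size;
      }
      fclose(f);
      printf("bpf_jit bytes currently allocated: %llu\n", total);
      return 0;
  }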

I tested the bpf tree to observe bpf_jit_charge_modmem and bpf_jit_uncharge_modmem,
the sizes passed into them, as well as bpf_jit_current, under tcpdump BPF filters,
seccomp BPF and native (e)BPF programs, and the behavior all looks sane and
expected; that is, nothing is "leaking" from an upstream perspective.
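
To illustrate the accounting being checked above (a simplified userspace model,
not the kernel implementation; the helper names are borrowed from the kernel but
the limit and privilege handling here are only a sketch): every JIT allocation
charges its size against a global counter, an unprivileged charge that would
exceed bpf_jit_limit is rolled back and rejected (which is what surfaces in the
report above as the failed seccomp filter load), and freeing a program uncharges
the same size again, so the counter itself does not leak.

  /* Simplified model of charging/uncharging JIT memory against a limit. */
  #include <stdio.h>
  #include <stdbool.h>

  static unsigned long bpf_jit_limit = 64 * 1024;  /* pretend limit: 64 KiB */
  static unsigned long bpf_jit_current;            /* bytes currently charged */

  static int bpf_jit_charge_modmem(unsigned long size, bool privileged)
  {
      bpf_jit_current += size;
      if (bpf_jit_current > bpf_jit_limit && !privileged) {
          bpf_jit_current -= size;  /* roll the charge back */
          return -1;                /* the kernel returns an error here */
      }
      return 0;
  }

  static void bpf_jit_uncharge_modmem(unsigned long size)
  {
      bpf_jit_current -= size;
  }

  int main(void)
  {
      int i;

      /* Unprivileged loads succeed until the limit would be exceeded. */
      for (i = 0; i < 32; i++) {
          if (bpf_jit_charge_modmem(4096, false)) {
              printf("charge %d rejected, %lu bytes in use\n", i, bpf_jit_current);
              break;
          }
      }
      /* Freeing the programs releases every charge again. */
      bpf_jit_uncharge_modmem(bpf_jit_current);
      printf("after uncharge: %lu bytes in use\n", bpf_jit_current);
      return 0;
  }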

The bpf_jit_limit knob was originally added in order to avoid a situation
where unprivileged applications loading BPF programs (e.g. seccomp BPF
policies) consume all the module memory space via the BPF JIT, such that
loading of kernel modules would be prevented. The default limit was defined
back in 2018 and, while good enough back then, we are generally seeing far
more BPF consumers today.

Adjust the limit for the BPF JIT pool from the original 1/4 to 1/2 of the
module memory space to better reflect today's needs and avoid more users
running into potentially hard-to-debug issues.
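
Concretely, the default limit is derived once at boot as a fraction of the
module/JIT address space, and this fix moves that fraction from one quarter to
one half. A minimal sketch of the arithmetic follows (the 1 GiB region size is a
made-up example, not a statement about any particular architecture; reading the
sysctl shows the value the running kernel actually settled on):

  /* Illustrative only: old vs. new default limit for an assumed module space,
   * plus the value the running kernel exposes via net.core.bpf_jit_limit. */
  #include <stdio.h>

  int main(void)
  {
      unsigned long long modmem = 1ULL << 30;  /* hypothetical 1 GiB region */
      unsigned long long limit;
      FILE *f;

      printf("assumed module space: %llu bytes\n", modmem);
      printf("  old default (1/4):  %llu bytes\n", modmem >> 2);
      printf("  new default (1/2):  %llu bytes\n", modmem >> 1);

      f = fopen("/proc/sys/net/core/bpf_jit_limit", "r");
      if (f) {
          if (fscanf(f, "%llu", &limit) == 1)
              printf("current kernel limit: %llu bytes\n", limit);
          fclose(f);
      }
      return 0;
  }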

Comment 1 Avinash Hanwate 2025-05-05 05:29:38 UTC
Upstream advisory:
https://lore.kernel.org/linux-cve-announce/2025050215-CVE-2023-53076-d1a7@gregkh/T

Comment 4 TEJ RATHI 2025-05-29 06:39:00 UTC
This CVE has been rejected by the Linux kernel community. Refer to the announcement: https://lore.kernel.org/linux-cve-announce/2025050524-REJECTED-d9bd@gregkh/

Comment added by: Automated Script

