Bug 2365146
Summary: | [rgw][s3select]: radosgw process killed with "Out of memory" while executing query "select * from s3object limit 1" on a 12GB parquet file | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | tserlin |
Component: | RGW | Assignee: | Matt Benjamin (redhat) <mbenjamin> |
Status: | VERIFIED --- | QA Contact: | Yuva Teja Sree Gayam <ygayam> |
Severity: | high | Docs Contact: | Rivka Pollack <rpollack> |
Priority: | unspecified | ||
Version: | 8.1 | CC: | ceph-eng-bugs, cephqe-warriors, gsalomon, hmaheswa, mbenjamin, mkasturi, rpollack, tserlin, vereddy, ygayam |
Target Milestone: | --- | Keywords: | Reopened |
Target Release: | 8.1 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-19.2.1-184.el9cp | Doc Type: | Bug Fix |
Doc Text: |
.Large queries on Parquet objects no longer emit an `Out of memory` error
Previously, in some cases, when a query was processed on a Parquet object, the object was read in large chunks. This caused the Ceph Object Gateway to load a large buffer into memory, which was too big for low-end machines. Memory pressure was especially severe when the Ceph Object Gateway was co-located with OSD processes, which themselves consume a large amount of memory. As a result, the OS killed the Ceph Object Gateway process with an `Out of memory` error.
With this fix, there is an updated limit on the reader-buffer size used for reading column chunks. The default size is now 16 MB, and the size can be changed through the Ceph Object Gateway configuration file. (A reproduction sketch follows the description below.)
|
Story Points: | --- |
Clone Of: | 2252403 | Environment: | |
Last Closed: | | Type: | ---
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 2252403 | ||
Bug Blocks: | 2351689, 2275323 |
Description
tserlin
2025-05-08 19:17:38 UTC
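The following is a minimal sketch of how the query from the summary could be issued against a Parquet object through the Ceph Object Gateway S3 Select API, using boto3's `select_object_content`. The endpoint URL, credentials, bucket, and object key are placeholders, not values taken from this bug.

```python
# Hedged reproduction sketch: issue the query from the bug summary against an
# RGW endpoint with boto3. Endpoint, credentials, bucket, and key are
# hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# "select * from s3object limit 1" is the query quoted in the bug summary.
response = s3.select_object_content(
    Bucket="parquet-bucket",                      # placeholder bucket
    Key="large-object.parquet",                   # placeholder large Parquet object
    ExpressionType="SQL",
    Expression="select * from s3object limit 1",
    InputSerialization={"Parquet": {}},
    OutputSerialization={"CSV": {}},
)

# The response payload is an event stream; print any returned records.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
```

Per the Doc Text, the reader-buffer size used for column-chunk reads is now capped at 16 MB by default, so a query like this should no longer cause the radosgw process to be killed. The limit can be adjusted through the Ceph Object Gateway configuration file; the exact option name is not stated in this bug.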