Bug 2242924
Summary: | [rgw][s3select][parquet]: Read timed out error seen while executing the query "select count(*) from s3object;" on 1.5GB parquet file | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Hemanth Sai <hmaheswa> |
Component: | RGW | Assignee: | gal salomon <gsalomon> |
Status: | CLOSED ERRATA | QA Contact: | Hemanth Sai <hmaheswa> |
Severity: | medium | Docs Contact: | Disha Walvekar <dwalveka> |
Priority: | unspecified | ||
Version: | 7.0 | CC: | ceph-eng-bugs, cephqe-warriors, dwalveka, gsalomon, mbenjamin, mkasturi, tserlin |
Target Milestone: | --- | Flags: | dwalveka: needinfo-, gsalomon: needinfo? |
Target Release: | 7.0z2 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-18.2.0-173.el9cp | Doc Type: | Bug Fix |
Doc Text: |
Previously, `count(*)` required the s3select engine to extract every value in each row, while `count(0)` did not extract any values. For large objects this difference was significant: the extra extract-value operations consumed considerable CPU time and could exceed the read timeout.
With this fix, the s3select operation sends a continue-message to keep the connection alive, so the operation completes successfully without timing out.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2024-05-07 12:09:55 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 2270485 |
Description
Hemanth Sai
2023-10-09 18:23:42 UTC
Adding relevant information for the bug fix:

-- `count(*)` requires the s3select engine to extract each value residing in a row, while `count(0)` does not extract any value.
-- since the row-groups are big, the sheer number of extract-value operations makes processing take a long time, and that triggers a timeout.
-- the s3select operation will send a continue-message to avoid the time-out.

*** Bug 2118706 has been marked as a duplicate of this bug. ***

Resolved in https://github.com/ceph/ceph/pull/56279

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743
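For context, the continue-message described above surfaces on the client side as `Cont` events in the S3 Select response stream. A minimal sketch of how a boto3 client would run the failing query and tolerate those keep-alive events (the bucket/key names and the `count_rows`/`collect_records` helpers are illustrative assumptions, not part of the bug report):

```python
def collect_records(event_stream):
    """Concatenate Records payloads from an S3 Select event stream,
    skipping Cont keep-alive events and stopping at End."""
    chunks = []
    for event in event_stream:
        if "Records" in event:
            chunks.append(event["Records"]["Payload"])
        elif "Cont" in event:
            # Continuation message: carries no data, it only keeps the
            # HTTP connection alive while the server is still working
            # through the parquet row-groups (the behavior this fix adds).
            continue
        elif "End" in event:
            break
    return b"".join(chunks)


def count_rows(s3_client, bucket, key):
    """Run 'select count(*) from s3object;' against a parquet object."""
    resp = s3_client.select_object_content(
        Bucket=bucket,
        Key=key,
        Expression="select count(*) from s3object;",
        ExpressionType="SQL",
        InputSerialization={"Parquet": {}},
        OutputSerialization={"CSV": {}},
    )
    return int(collect_records(resp["Payload"]))
```

Before the fix, a sufficiently large parquet object could produce no events at all for longer than the client's read timeout; with the fix, `Cont` events arrive periodically until the single `Records` payload with the count is emitted.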