Bug 1766448 - [cee/sd] RGW daemons crashed with segmentation fault after upgrading to RHCS 3.3.1
Summary: [cee/sd] RGW daemons crashed with segmentation fault after upgrading to RHCS 3.3.1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 3.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: z1
Target Release: 3.3
Assignee: Matt Benjamin (redhat)
QA Contact: Tejas
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1726135 1727980 1733598
 
Reported: 2019-10-29 05:13 UTC by Ashish Singh
Modified: 2023-03-24 15:48 UTC
CC List: 20 users

Fixed In Version: RHEL: ceph-12.2.12-79.el7cp Ubuntu: ceph_12.2.12-72redhat1
Doc Type: Bug Fix
Doc Text:
.Ceph Object Gateway daemons no longer crash after upgrading to the latest version
The latest update to {product} introduced a bug that caused Ceph Object Gateway daemons to terminate unexpectedly with a segmentation fault after upgrading to the latest version. The underlying source code has been fixed, and Ceph Object Gateway daemons work as expected after the upgrade.
Clone Of:
Environment:
Last Closed: 2019-11-08 16:14:04 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 4544561 0 None None None 2019-10-31 18:46:34 UTC
Red Hat Product Errata RHBA-2019:3815 0 None None None 2019-11-08 16:14:09 UTC

Description Ashish Singh 2019-10-29 05:13:32 UTC
Created attachment 1629985: RGW logs

* Description of problem:
All RGW daemons in the Ceph cluster crashed with a segmentation fault:

2019-10-24 10:35:56.813839 7f9a20fd3700  1 civetweb: 0x55a720ef2000: 10.134.32.28 - - [24/Oct/2019:10:35:56 +0200] "GET /cc-droplets/?prefix=buildpack_cache%2Fb7%2F09%2Fb70908c0-5e3e-4fd4-8489-69c3407b996a HTTP/1.1" 200 849 - fog-core/1.43.0
2019-10-24 10:35:56.816960 7f9a207d2700 -1 *** Caught signal (Segmentation fault) **
 in thread 7f9a207d2700 thread_name:civetweb-worker

 ceph version 12.2.12-74.el7cp (6c4a9c2235eb0c7e3d61719cdc1d6b7b2dcbdea9) luminous (stable)
 1: (()+0x2d82a1) [0x55a71e13f2a1]
 2: (()+0xf630) [0x7f9a5719e630]
 3: (ceph_str_hash_rjenkins(char const*, unsigned int)+0x20) [0x7f9a4e9c4500]
 4: (pg_pool_t::hash_key(std::string const&, std::string const&) const+0xc0) [0x7f9a4e95e980]
 5: (OSDMap::map_to_pg(long, std::string const&, std::string const&, std::string const&, pg_t*) const+0x75) [0x7f9a4e917975]
 6: (OSDMap::object_locator_to_pg(object_t const&, object_locator_t const&, pg_t&) const+0xa6) [0x7f9a4e917a66]
 7: (()+0xcc447) [0x7f9a58159447]
 8: (()+0xd9c59) [0x7f9a58166c59]
 9: (()+0xe6df8) [0x7f9a58173df8]
 10: (()+0xe707a) [0x7f9a5817407a]
 11: (librados::IoCtxImpl::aio_operate(object_t const&, ObjectOperation*, librados::AioCompletionImpl*, SnapContext const&, int, blkin_trace_info const*)+0x1a1) [0x7f9a58124191]
 12: (librados::IoCtx::aio_operate(std::string const&, librados::AioCompletion*, librados::ObjectWriteOperation*)+0x53) [0x7f9a580e6d33]
 13: (RGWRados::cls_bucket_list_ordered(RGWBucketInfo&, int, cls_rgw_obj_key const&, std::string const&, unsigned int, bool, std::map<std::string, rgw_bucket_dir_entry, std::less<std::string>, std::allocator<std::pair<std::string const, rgw_bucket_dir_entry> > >&, bool*, cls_rgw_obj_key*, bool (*)(std::string const&))+0x13f1) [0x55a71e2cb501]
 14: (RGWRados::Bucket::List::list_objects_ordered(long, std::vector<rgw_bucket_dir_entry, std::allocator<rgw_bucket_dir_entry> >*, std::map<std::string, bool, std::less<std::string>, std::allocator<std::pair<std::string const, bool> > >*, bool*)+0x3fa) [0x55a71e2cbc4a]
 15: (RGWListBucket::execute()+0x25c) [0x55a71e22481c]
 16: (rgw_process_authenticated(RGWHandler_REST*, RGWOp*&, RGWRequest*, req_state*, bool)+0x188) [0x55a71e25a8b8]
 17: (process_request(RGWRados*, RGWREST*, RGWRequest*, std::string const&, rgw::auth::StrategyRegistry const&, RGWRestfulIO*, OpsLogSocket*, int*)+0xb88) [0x55a71e25b678]
 18: (RGWCivetWebFrontend::process(mg_connection*)+0x3a2) [0x55a71e0bdaa2]
 19: (()+0x2c8517) [0x55a71e12f517]
 20: (()+0x2c9dd2) [0x55a71e130dd2]
 21: (()+0x2ca5a8) [0x55a71e1315a8]
 22: (()+0x7ea5) [0x7f9a57196ea5]
 23: (clone()+0x6d) [0x7f9a4b6348cd]
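
For context, the backtrace shows the crash inside ceph_str_hash_rjenkins() (frame 3), reached from pg_pool_t::hash_key() and OSDMap::map_to_pg() while listing bucket index objects. The following is a minimal, self-contained C++ sketch of that failure pattern only; it is not Ceph source code and not the confirmed root cause, and the names toy_str_hash()/toy_hash_key() are hypothetical stand-ins. It simply illustrates how a hash routine that walks a caller-supplied pointer/length pair faults when that pair no longer describes valid memory:

// Illustrative only -- NOT Ceph code. Sketches the pattern in the backtrace:
// the object locator key is hashed byte-by-byte, so an invalid pointer or an
// over-long length faults inside the hash loop (SIGSEGV in a civetweb worker).
#include <cstdint>
#include <iostream>
#include <string>

// Simplified stand-in for ceph_str_hash_rjenkins(): reads `len` bytes from `str`.
static uint32_t toy_str_hash(const char* str, unsigned len) {
    uint32_t h = 0;
    for (unsigned i = 0; i < len; ++i)
        h = h * 131 + static_cast<unsigned char>(str[i]);  // faults here if str/len is bad
    return h;
}

// Simplified stand-in for pg_pool_t::hash_key(): hashes the locator key string.
static uint32_t toy_hash_key(const std::string& key) {
    return toy_str_hash(key.data(), static_cast<unsigned>(key.size()));
}

int main() {
    // Normal path: a valid bucket-index object key hashes without trouble.
    std::string key = ".dir.bucket-instance-id.7";
    std::cout << "pg hash: " << toy_hash_key(key) << "\n";

    // Crash scenario (hypothetical illustration): the hash is handed a pointer
    // that no longer points at live memory. Dereferencing it in the loop above
    // is the kind of access that produces the segmentation fault in frame 3.
    // Left commented out so this sketch stays safe to run.
    // const char* dangling = nullptr;
    // toy_str_hash(dangling, 16);
    return 0;
}

This sketch only frames where in the call chain the fault occurs; the actual defect and its fix are in the Ceph packages listed under "Fixed In Version".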


* Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 3.3.1

* How reproducible:
NA

* Steps to Reproduce:
NA

* Actual results:
-

* Expected results:
-

* Additional info:
-

Comment 30 errata-xmlrpc 2019-11-08 16:14:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3815

