Bug 1381694
Summary: | rgw ldap: unhandled exception on invalid token input | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Matt Benjamin (redhat) <mbenjamin>
Component: | RGW | Assignee: | Matt Benjamin (redhat) <mbenjamin>
Status: | CLOSED ERRATA | QA Contact: | Ramakrishnan Periyasamy <rperiyas>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 2.1 | CC: | cbodley, ceph-eng-bugs, hnallurv, kbader, kdreyer, kurs, mbenjamin, owasserm, sweil, tserlin, uboppana
Target Milestone: | rc | |
Target Release: | 2.1 | |
Hardware: | All | |
OS: | All | |
Whiteboard: | | |
Fixed In Version: | RHEL: ceph-10.2.3-11.el7cp; Ubuntu: ceph_10.2.3-12redhat1 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-11-22 19:31:54 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Matt Benjamin (redhat)
2016-10-04 18:39:19 UTC
Reproduction step (from unit test): this fix can be verified by enabling LDAP authentication and sending the value "90KLscc0Dz4U49HX-7Tx" as the AWS access key. This should (presumably) fail to authenticate as any RGW user, but should not provoke any issue beyond the failed authentication.
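A minimal sketch of such a request, in the boto2 style the QA verification below refers to. The endpoint rgw.example.com:7480 is hypothetical, and the gateway is assumed to have LDAP authentication enabled (e.g. rgw_s3_auth_use_ldap = true in ceph.conf):

```python
import boto
import boto.s3.connection
from boto.exception import S3ResponseError

# Hypothetical endpoint; substitute the actual RGW host and port.
conn = boto.connect_s3(
    aws_access_key_id='90KLscc0Dz4U49HX-7Tx',  # the invalid token from this report
    aws_secret_access_key='irrelevant',        # not consulted on the LDAP token path
    host='rgw.example.com',
    port=7480,
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

try:
    # Mirrors the "PUT /rgw/ ... create_bucket" request visible in the log below.
    conn.create_bucket('rgw')
    print('unexpected: request was authorized')
except S3ResponseError as e:
    # Fixed gateway: a clean 403 AccessDenied.
    # Unfixed gateway: radosgw aborts and the connection is reset instead.
    print(e.status, e.error_code)
```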
Tried verifying this bug with a proper access key, which succeeded, but when using the "90KLscc0Dz4U49HX-7Tx" AWS access key the rgw service crashes. Moving the bug to the assigned state.

    uv1 ondisk = 0) v7 ==== 128+0+0 (1628015358 0 0) 0x7f9ce80071b0 con 0x7f9e4401ced0
    -6> 2016-10-21 11:40:16.141418 7f9e54cf6700 1 -- 10.8.128.104:0/2466162087 <== osd.0 10.8.128.110:6800/14467 51 ==== osd_op_reply(564 notify.4 [watch ping cookie 140318669928896 gen 1] v0'0 uv3 ondisk = 0) v7 ==== 128+0+0 (345768585 0 0) 0x7f9d000009e0 con 0x7f9e44012150
    -5> 2016-10-21 11:40:16.141429 7f9e54df7700 1 -- 10.8.128.104:0/2466162087 <== osd.7 10.8.128.109:6804/14347 21 ==== osd_op_reply(569 notify.6 [watch ping cookie 140318669933600 gen 1] v0'0 uv1 ondisk = 0) v7 ==== 128+0+0 (3006110947 0 0) 0x7f9d040009e0 con 0x7f9e44016cd0
    -4> 2016-10-21 11:40:18.316123 7f9d6affd700 1 ====== starting new request req=0x7f9d6aff7710 =====
    -3> 2016-10-21 11:40:18.316150 7f9d6affd700 2 req 1:0.000027::PUT /rgw/::initializing for trans_id = tx000000000000000000001-005809fea2-9807793-default
    -2> 2016-10-21 11:40:18.316209 7f9d6affd700 2 req 1:0.000086:s3:PUT /rgw/::getting op 1
    -1> 2016-10-21 11:40:18.316221 7f9d6affd700 2 req 1:0.000098:s3:PUT /rgw/:create_bucket:authorizing
    0> 2016-10-21 11:40:18.498442 7f9d6affd700 -1 *** Caught signal (Aborted) **
    in thread 7f9d6affd700 thread_name:radosgw

    ceph version 10.2.3-8.el7cp (e1d1eac13d734c63e896531428b565fc47d51874)
    1: (()+0x57050a) [0x7f9e7110250a]
    2: (()+0xf370) [0x7f9e70511370]
    3: (gsignal()+0x37) [0x7f9e6fa541d7]
    4: (abort()+0x148) [0x7f9e6fa558c8]
    5: (__gnu_cxx::__verbose_terminate_handler()+0x165) [0x7f9e70056ab5]
    6: (()+0x5ea26) [0x7f9e70054a26]
    7: (()+0x5ea53) [0x7f9e70054a53]
    8: (()+0x5ec73) [0x7f9e70054c73]
    9: (boost::archive::iterators::transform_width<boost::archive::iterators::binary_from_base64<boost::archive::iterators::remove_whitespace<char const*>, char>, 8, 6, char>::fill()+0x459) [0x7f9e7105ef09]
    10: (char* std::string::_S_construct<boost::archive::iterators::transform_width<boost::archive::iterators::binary_from_base64<boost::archive::iterators::remove_whitespace<char const*>, char>, 8, 6, char> >(boost::archive::iterators::transform_width<boost::archive::iterators::binary_from_base64<boost::archive::iterators::remove_whitespace<char const*>, char>, 8, 6, char>, boost::archive::iterators::transform_width<boost::archive::iterators::binary_from_base64<boost::archive::iterators::remove_whitespace<char const*>, char>, 8, 6, char>, std::allocator<char> const&, std::input_iterator_tag)+0x60) [0x7f9e7105ef90]
    11: (RGW_Auth_S3::authorize_v2(RGWRados*, req_state*)+0x50d) [0x7f9e71054e3d]
    12: (RGW_Auth_S3::authorize(RGWRados*, req_state*)+0xa9) [0x7f9e71056d29]
    13: (process_request(RGWRados*, RGWREST*, RGWRequest*, RGWStreamIO*, OpsLogSocket*)+0xb9a) [0x7f9e70f6f2aa]
    14: (()+0x192b3) [0x7f9e7aa272b3]
    15: (()+0x2327f) [0x7f9e7aa3127f]
    16: (()+0x25298) [0x7f9e7aa33298]
    17: (()+0x7dc5) [0x7f9e70509dc5]
    18: (clone()+0x6d) [0x7f9e6fb1673d]
    NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

    --- logging levels ---
    0/ 5 none
    0/ 1 lockdep
    0/ 1 context
    1/ 1 crush
    1/ 5 mds
    1/ 5 mds_balancer
    1/ 5 mds_locker
    1/ 5 mds_log
    1/ 5 mds_log_expire
    1/ 5 mds_migrator
    0/ 1 buffer
    0/ 1 timer
    0/ 1 filer
    0/ 1 striper
    0/ 1 objecter
    0/ 5 rados
    0/ 5 rbd
    0/ 5 rbd_mirror
    0/ 5 rbd_replay
    0/ 5 journaler
    0/ 5 objectcacher
    0/ 5 client
    0/ 5 osd
    0/ 5 optracker
    0/ 5 objclass
    1/ 3 filestore
    1/ 3 journal
    0/ 5 ms
    1/ 5 mon
    0/10 monc
    1/ 5 paxos
    0/ 5 tp
    1/ 5 auth
    1/ 5 crypto
    1/ 1 finisher
    1/ 5 heartbeatmap
    1/ 5 perfcounter
    1/ 5 rgw
    1/10 civetweb
    1/ 5 javaclient
    1/ 5 asok
    1/ 1 throttle
    0/ 0 refs
    1/ 5 xio
    1/ 5 compressor
    1/ 5 newstore
    1/ 5 bluestore
    1/ 5 bluefs
    1/ 3 bdev
    1/ 5 kstore
    4/ 5 rocksdb
    4/ 5 leveldb
    1/ 5 kinetic
    1/ 5 fuse
    -2/-2 (syslog threshold)
    -1/-1 (stderr threshold)
    max_recent 10000
    max_new 1000
    log_file /var/log/ceph/ceph-rgw-magna104.log
    --- end dump of recent events ---

Ken, if you can give us the downstream build today, we can spend a couple of hours and verify this bug fix. As it is a crash, we are moving this bug back to 2.1.

Hi Ken, can you please provide the location of the build that contains the fix?

Regards,
Ramakrishnan

Moving this bug to the verified state. While executing with the "90KLscc0Dz4U49HX-7Tx" AWS access key, the s3 boto script reports AccessDenied, which is expected, and the rgw service keeps running without any issues. Verified in:

    [ubuntu@magna ~]$ ceph -v
    ceph version 10.2.3-12.el7cp (120ddb2dc963bbd3fe12b13c19f7a69422e2d039)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2815.html
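A note on the backtrace above: frames 9 through 11 show the abort originating inside boost's base64 iterator stack (binary_from_base64 / transform_width) called from RGW_Auth_S3::authorize_v2, and frame 5 (__gnu_cxx::__verbose_terminate_handler) indicates an exception escaped uncaught. The test token contains a '-' character, which is outside the standard base64 alphabet, so a strict decode of it throws. A standalone check (plain Python, not RGW code) of why this particular token trips a strict base64 decoder:

```python
import base64
import binascii

token = '90KLscc0Dz4U49HX-7Tx'
try:
    base64.b64decode(token, validate=True)
except binascii.Error as e:
    # '-' is not in the standard base64 alphabet, so strict
    # decoding rejects the token instead of returning bytes.
    print('decode failed:', e)
```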