Bug 1373429

Summary: ERROR STREAM_IO with S3 GET/PUT operations
Product: Red Hat Ceph Storage    Reporter: karan singh <karan>
Component: RGW    Assignee: Matt Benjamin (redhat) <mbenjamin>
Status: CLOSED NOTABUG    QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: urgent    Docs Contact:
Priority: urgent
Version: 2.0    CC: cbodley, ceph-eng-bugs, hnallurv, kbader, mbenjamin, nlevine, owasserm, sweil
Target Milestone: rc
Target Release: 3.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:    Environment:
Last Closed: 2017-08-16 18:11:15 UTC    Type: Bug
Regression: ---    Mount Type: ---
Documentation: ---    CRM:
Verified Versions:    Category: ---
oVirt Team: ---    RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---    Target Upstream Version:

Description karan singh 2016-09-06 09:15:53 UTC
Description of problem:

While benchmarking RHCS 2.0 object storage, I see a lot of failures during S3 GET requests (error logs provided below).

I have found a similar issue upstream, http://tracker.ceph.com/issues/15727, but I am not sure what progress has been made on it.

----------------------------------------------------------------------------------
2016-09-06 09:52:07.030345 7f6e5c360700  1 ====== starting new request req=0x7f6e5c35a710 =====
2016-09-06 09:52:07.030365 7f6e5c360700  2 req 387740:0.000020::GET /mycontainers90/myobjects431::initializing for trans_id = tx00000000000000005ea9c-0057ce6797-cb362-default
2016-09-06 09:52:07.030389 7f6e5c360700  2 req 387740:0.000045:s3:GET /mycontainers90/myobjects431::getting op 0
2016-09-06 09:52:07.030395 7f6e5c360700  2 req 387740:0.000050:s3:GET /mycontainers90/myobjects431:get_obj:authorizing
2016-09-06 09:52:07.030451 7f6e5c360700  2 req 387740:0.000106:s3:GET /mycontainers90/myobjects431:get_obj:normalizing buckets and tenants
2016-09-06 09:52:07.030457 7f6e5c360700  2 req 387740:0.000112:s3:GET /mycontainers90/myobjects431:get_obj:init permissions
2016-09-06 09:52:07.030482 7f6e5c360700  2 req 387740:0.000137:s3:GET /mycontainers90/myobjects431:get_obj:recalculating target
2016-09-06 09:52:07.030486 7f6e5c360700  2 req 387740:0.000141:s3:GET /mycontainers90/myobjects431:get_obj:reading permissions
2016-09-06 09:52:07.035030 7f6cf0088700  0 defer chain tag=fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.551440.106107
2016-09-06 09:52:07.036430 7f6cf0088700  0 ERROR: flush_read_list(): d->client_c->handle_data() returned -5
2016-09-06 09:52:07.039092 7f6cf0088700  0 WARNING: set_req_state_err err_no=5 resorting to 500
2016-09-06 09:52:07.039130 7f6cf0088700  0 ERROR: s->cio->send_content_length() returned err=-5
2016-09-06 09:52:07.039133 7f6cf0088700  0 ERROR: s->cio->print() returned err=-5
2016-09-06 09:52:07.039134 7f6cf0088700  0 ERROR: STREAM_IO(s)->print() returned err=-5
2016-09-06 09:52:07.039143 7f6cf0088700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 09:52:07.039156 7f6cf0088700  2 req 386004:85.277926:s3:GET /mycontainers66/myobjects467:get_obj:completing
2016-09-06 09:52:07.039162 7f6cf0088700  2 req 386004:85.277933:s3:GET /mycontainers66/myobjects467:get_obj:op status=-5
2016-09-06 09:52:07.039163 7f6cf0088700  2 req 386004:85.277934:s3:GET /mycontainers66/myobjects467:get_obj:http status=500
2016-09-06 09:52:07.039170 7f6cf0088700  1 ====== req done req=0x7f6cf0082710 op status=-5 http_status=500 ======
----------------------------------------------------------------------------------

More logs here: http://paste.fedoraproject.org/422646/31532281/

Version-Release number of selected component (if applicable):


How reproducible:

My setup details are:

6 nodes = OSD + RGW + HAProxy
Each node with 35 disks + 2 NVMe (210 OSDs in total)
1 monitor node
RHCS 2.0
COSBench to generate load (up to 12 COSBench clients and 1 controller)
Tested using the S3 API
Cluster has ~5 million RADOS objects

Ceph RGWs are colocated on the OSD nodes, so there are 6 RGWs (we know this is currently not supported, but it is what we are benchmarking).

When I run COSBench tests with a small number of clients (1, 2, 4), everything works without failure. As soon as I increase the number of clients beyond 4, things start breaking and I see a lot of slow requests on the Ceph cluster.

Not sure if some OSD-side slowness is causing this failure (see SJust's comment in the tracker.ceph.com bug above).
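
For reference, a minimal stand-in for the concurrent GET workload can be sketched with boto3 and a thread pool; this is not the actual COSBench job, and the endpoint, credentials, bucket, and object names below are placeholders. Raising the worker count plays the role of adding more COSBench clients.

----------------------------------------------------------------------------------
# Sketch only: concurrent S3 GETs against RGW; endpoint/credentials/names are
# placeholders, not the real test configuration.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # placeholder RGW/HAProxy endpoint
    aws_access_key_id="ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

def get_object(i):
    # Read the full body so the request is streamed end to end.
    resp = s3.get_object(Bucket="mycontainers90", Key="myobjects%d" % i)
    return len(resp["Body"].read())

# More workers ~= more COSBench clients hitting the gateway.
with ThreadPoolExecutor(max_workers=8) as pool:
    sizes = list(pool.map(get_object, range(500)))
print("fetched %d objects" % len(sizes))
----------------------------------------------------------------------------------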

Comment 3 karan singh 2016-09-06 10:15:47 UTC
With the same setup, I am facing problems with PUTs too (logs below).

Again, if the number of clients pushing data to RGW is small (say 1, 2, or 4) it works; as soon as I increase the clients to 8, PUTs fail.


----------------------------------------------------------------------------------

2016-09-06 12:54:18.838473 7f7070f89700  2 req 427532:316.722930:s3:PUT /6rgw-8client-s3-32m-t1024-r7-63/myobjects63:put_obj:completing
2016-09-06 12:54:18.838521 7f7070f89700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 12:54:18.838529 7f7070f89700  2 req 427532:316.722988:s3:PUT /6rgw-8client-s3-32m-t1024-r7-63/myobjects63:put_obj:op status=-1
2016-09-06 12:54:18.838531 7f7070f89700  2 req 427532:316.722990:s3:PUT /6rgw-8client-s3-32m-t1024-r7-63/myobjects63:put_obj:http status=403
2016-09-06 12:54:18.838537 7f7070f89700  1 ====== req done req=0x7f7070f83710 op status=-1 http_status=403 ======


2016-09-06 12:54:13.366850 7f6e76394700  2 req 427953:189.691843:s3:PUT /6rgw-8client-s3-32m-t1024-r7-3/myobjects81:put_obj:completing
2016-09-06 12:54:13.366909 7f6e76394700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 12:54:13.366919 7f6e76394700  2 req 427953:189.691913:s3:PUT /6rgw-8client-s3-32m-t1024-r7-3/myobjects81:put_obj:op status=-1
2016-09-06 12:54:13.366921 7f6e76394700  2 req 427953:189.691916:s3:PUT /6rgw-8client-s3-32m-t1024-r7-3/myobjects81:put_obj:http status=403
2016-09-06 12:54:13.366928 7f6e76394700  1 ====== req done req=0x7f6e7638e710 op status=-1 http_status=403 ======

2016-09-06 12:54:08.904990 7f7022eed700  2 req 427402:347.864127:s3:PUT /6rgw-8client-s3-32m-t1024-r7-6/myobjects415:put_obj:completing
2016-09-06 12:54:08.905051 7f7022eed700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 12:54:08.905071 7f7022eed700  2 req 427402:347.864209:s3:PUT /6rgw-8client-s3-32m-t1024-r7-6/myobjects415:put_obj:op status=-1
2016-09-06 12:54:08.905079 7f7022eed700  2 req 427402:347.864216:s3:PUT /6rgw-8client-s3-32m-t1024-r7-6/myobjects415:put_obj:http status=403
2016-09-06 12:54:08.905085 7f7022eed700  1 ====== req done req=0x7f7022ee7710 op status=-1 http_status=403 ======

2016-09-06 12:54:00.651739 7f6d2e104700  2 req 427224:399.778858:s3:PUT /6rgw-8client-s3-32m-t1024-r7-8/myobjects62:put_obj:completing
2016-09-06 12:54:00.651802 7f6d2e104700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 12:54:00.651813 7f6d2e104700  2 req 427224:399.778934:s3:PUT /6rgw-8client-s3-32m-t1024-r7-8/myobjects62:put_obj:op status=-1
2016-09-06 12:54:00.651816 7f6d2e104700  2 req 427224:399.778936:s3:PUT /6rgw-8client-s3-32m-t1024-r7-8/myobjects62:put_obj:http status=403
2016-09-06 12:54:00.651822 7f6d2e104700  1 ====== req done req=0x7f6d2e0fe710 op status=-1 http_status=403 ======
2016-09-06 12:54:00.651912 7f6d2e104700  1 civetweb: 0x7f711474eba0: 10.5.13.118 - - [06/Sep/2016:12:47:20 +0300] "PUT /6rgw-8client-s3-32m-t1024-r7-8/myobjects62 HTTP/1.1" 403 0 - aws-sdk-java/1.4.0.1 Linux/3.10.0-327.18.2.el7.x86_64 OpenJDK_64-Bit_Server_VM/25.91-b14
2016-09-06 12:54:00.654895 7f72b1ff3700  0 gc::process: removing default.rgw.buckets.data:fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.717867.2384__shadow_.hA67vdJwZTjm_9r9BfOyi3q0aay3pM7_1
2016-09-06 12:54:00.669264 7f6ef9c9b700  2 req 428018:162.454003:s3:PUT /6rgw-8client-s3-32m-t1024-r7-5/myobjects358:put_obj:completing
2016-09-06 12:54:00.669328 7f6ef9c9b700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 12:54:00.669339 7f6ef9c9b700  2 req 428018:162.454078:s3:PUT /6rgw-8client-s3-32m-t1024-r7-5/myobjects358:put_obj:op status=-1
2016-09-06 12:54:00.669352 7f6ef9c9b700  2 req 428018:162.454080:s3:PUT /6rgw-8client-s3-32m-t1024-r7-5/myobjects358:put_obj:http status=403
2016-09-06 12:54:00.669360 7f6ef9c9b700  1 ====== req done req=0x7f6ef9c95710 op status=-1 http_status=403 ======
2016-09-06 12:54:00.669418 7f6ef9c9b700  1 civetweb: 0x7f70ac38be20: 10.5.13.118 - - [06/Sep/2016:12:51:18 +0300] "PUT /6rgw-8client-s3-32m-t1024-r7-5/myobjects358 HTTP/1.1" 403 0 - aws-sdk-java/1.4.0.1 Linux/3.10.0-327.18.2.el7.x86_64 OpenJDK_64-Bit_Server_VM/25.91-b14
2016-09-06 12:54:00.682204 7f6f79d9b700  2 req 427241:391.668799:s3:PUT /6rgw-8client-s3-32m-t1024-r7-6/myobjects366:put_obj:completing
2016-09-06 12:54:00.682253 7f6f79d9b700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 12:54:00.682261 7f6f79d9b700  2 req 427241:391.668861:s3:PUT /6rgw-8client-s3-32m-t1024-r7-6/myobjects366:put_obj:op status=-1
2016-09-06 12:54:00.682262 7f6f79d9b700  2 req 427241:391.668863:s3:PUT /6rgw-8client-s3-32m-t1024-r7-6/myobjects366:put_obj:http status=403
2016-09-06 12:54:00.682267 7f6f79d9b700  1 ====== req done req=0x7f6f79d95710 op status=-1 http_status=403 ======
----------------------------------------------------------------------------------

Comment 4 karan singh 2016-09-06 10:45:52 UTC
Here are some more logs, captured with:

debug rgw = 10
debug civetweb = 10
debug ms =1

----------------------------------------------------------------------------------

2016-09-06 13:33:42.119220 7f2e30b71700  1 ====== starting new request req=0x7f2e30b6b710 =====
2016-09-06 13:33:42.119239 7f2e30b71700  2 req 11268:0.000019::PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198::initializing for trans_id = tx000000000000000002c04-0057ce9b86-ce01d-default
2016-09-06 13:33:42.119251 7f2e30b71700 10 host=10.5.13.118
2016-09-06 13:33:42.119298 7f2e30b71700 10 handler=22RGWHandler_REST_Obj_S3
2016-09-06 13:33:42.119302 7f2e30b71700  2 req 11268:0.000082:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198::getting op 1
2016-09-06 13:33:42.119305 7f2e30b71700 10 op=21RGWPutObj_ObjStore_S3
2016-09-06 13:33:42.119306 7f2e30b71700  2 req 11268:0.000087:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:authorizing
2016-09-06 13:33:42.119341 7f2e30b71700 10 get_canon_resource(): dest=/6rgw-8client-s3-32m-t1024-r7-2/myobjects198
2016-09-06 13:33:42.119346 7f2e30b71700 10 auth_hdr:
PUT

application/octet-stream
Tue, 06 Sep 2016 10:31:58 GMT
/6rgw-8client-s3-32m-t1024-r7-2/myobjects198
2016-09-06 13:33:42.119384 7f2e30b71700  2 req 11268:0.000165:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:normalizing buckets and tenants
2016-09-06 13:33:42.119390 7f2e30b71700 10 s->object=myobjects198 s->bucket=6rgw-8client-s3-32m-t1024-r7-2
2016-09-06 13:33:42.119400 7f2e30b71700  2 req 11268:0.000180:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:init permissions
2016-09-06 13:33:42.119420 7f2e30b71700  2 req 11268:0.000200:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:recalculating target
2016-09-06 13:33:42.119424 7f2e30b71700  2 req 11268:0.000204:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:reading permissions
2016-09-06 13:33:42.119426 7f2e30b71700  2 req 11268:0.000206:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:init op
2016-09-06 13:33:42.119428 7f2e30b71700  2 req 11268:0.000208:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:verifying op mask
2016-09-06 13:33:42.119430 7f2e30b71700  2 req 11268:0.000210:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:verifying op permissions
2016-09-06 13:33:42.119433 7f2e30b71700  5 Searching permissions for uid=user1 mask=50
2016-09-06 13:33:42.119435 7f2e30b71700  5 Found permission: 15
2016-09-06 13:33:42.119436 7f2e30b71700  5 Searching permissions for group=1 mask=50
2016-09-06 13:33:42.119437 7f2e30b71700  5 Permissions for group not found
2016-09-06 13:33:42.119441 7f2e30b71700  5 Searching permissions for group=2 mask=50
2016-09-06 13:33:42.119442 7f2e30b71700  5 Permissions for group not found
2016-09-06 13:33:42.119443 7f2e30b71700  5 Getting permissions id=user1 owner=user1 perm=2
2016-09-06 13:33:42.119445 7f2e30b71700 10  uid=user1 requested perm (type)=2, policy perm=2, user_perm_mask=2, acl perm=2
2016-09-06 13:33:42.119447 7f2e30b71700  2 req 11268:0.000227:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:verifying op params
2016-09-06 13:33:42.119449 7f2e30b71700  2 req 11268:0.000229:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:pre-executing
2016-09-06 13:33:42.119465 7f2e30b71700  2 req 11268:0.000245:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:executing
2016-09-06 13:33:42.119629 7f2e30b71700  2 req 11268:0.000409:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:completing
2016-09-06 13:33:42.119666 7f2e30b71700  0 ERROR: STREAM_IO(s)->complete_header() returned err=-5
2016-09-06 13:33:42.119674 7f2e30b71700  2 req 11268:0.000454:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:op status=-2010
2016-09-06 13:33:42.119678 7f2e30b71700  2 req 11268:0.000458:s3:PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198:put_obj:http status=400
2016-09-06 13:33:42.119681 7f2e30b71700  1 ====== req done req=0x7f2e30b6b710 op status=-2010 http_status=400 ======
2016-09-06 13:33:42.119706 7f2e30b71700  1 civetweb: 0x7f3144664920: 10.5.13.118 - - [06/Sep/2016:13:33:42 +0300] "PUT /6rgw-8client-s3-32m-t1024-r7-2/myobjects198 HTTP/1.1" 400 0 - aws-sdk-java/1.4.0.1 Linux/3.10.0-327.18.2.el7.x86_64 OpenJDK_64-Bit_Server_VM/25.91-b14
2016-09-06 13:33:42.120149 7f2827322700  1 -- 10.5.13.118:0/3746027875 <== osd.10 10.5.13.118:6878/1587375 23 ==== osd_op_reply(1276 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.pkRsGh_bGGlrEHAnz-9X4-YxcX5CTKA_6 [write 3670016~524288] v2131'7818 uv7818 ack = 0) v7 ==== 210+0+0 (60462810 0 0) 0x7f2b643eae30 con 0x7f2b29705510
2016-09-06 13:33:42.120241 7f2b1ab2b700  1 -- 10.5.13.118:0/1172345512 <== osd.94 10.5.13.138:6821/3258657 11 ==== osd_op_reply(1149 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.hOW1aa4QwOh_MJiSmZzTeRZvY0lOMXJ_1 [write 1572864~524288] v2131'6765 uv6765 ack = 0) v7 ==== 210+0+0 (1460526422 0 0) 0x7f2b2ab54b00 con 0x7f325d9b1480
2016-09-06 13:33:42.120316 7f2da8a61700  1 -- 10.5.13.118:0/3746027875 --> 10.5.13.139:6849/2276476 -- osd_op(client.843860.0:1319 161.2cb8fd90 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142_myobjects403 [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e2131) v7 -- ?+0 0x7f2b6635d280 con 0x7f3685d239e0
2016-09-06 13:33:42.120691 7f2827322700  1 -- 10.5.13.118:0/3746027875 <== osd.10 10.5.13.118:6878/1587375 24 ==== osd_op_reply(1276 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.pkRsGh_bGGlrEHAnz-9X4-YxcX5CTKA_6 [write 3670016~524288] v2131'7818 uv7818 ondisk = 0) v7 ==== 210+0+0 (273823120 0 0) 0x7f2b6635d280 con 0x7f2b29705510
2016-09-06 13:33:42.120984 7f2b1ab2b700  1 -- 10.5.13.118:0/1172345512 <== osd.94 10.5.13.138:6821/3258657 12 ==== osd_op_reply(1149 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.hOW1aa4QwOh_MJiSmZzTeRZvY0lOMXJ_1 [write 1572864~524288] v2131'6765 uv6765 ondisk = 0) v7 ==== 210+0+0 (1153793564 0 0) 0x7f2b29b27e10 con 0x7f325d9b1480
2016-09-06 13:33:42.121197 7f2d8f22e700  1 -- 10.5.13.118:0/1172345512 --> 10.5.13.139:6973/2278155 -- osd_op(client.843847.0:1166 161.319ebf15 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.hOW1aa4QwOh_MJiSmZzTeRZvY0lOMXJ_3 [write 1048576~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b6635d280 con 0x7f3145539e40
2016-09-06 13:33:42.121772 7f2da625c700  1 ====== starting new request req=0x7f2da6256710 =====
----------------------------------------------------------------------------------


Below are the logs of another PUT request that was successful.


----------------------------------------------------------------------------------

2016-09-06 13:33:42.121772 7f2da625c700  1 ====== starting new request req=0x7f2da6256710 =====
2016-09-06 13:33:42.121796 7f2da625c700  2 req 11269:0.000024::PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199::initializing for trans_id = tx000000000000000002c05-0057ce9b86-ce01d-default
2016-09-06 13:33:42.121803 7f2da625c700 10 host=10.5.13.118
2016-09-06 13:33:42.121830 7f2da625c700 10 handler=22RGWHandler_REST_Obj_S3
2016-09-06 13:33:42.121835 7f2da625c700  2 req 11269:0.000062:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199::getting op 1
2016-09-06 13:33:42.121839 7f2da625c700 10 op=21RGWPutObj_ObjStore_S3
2016-09-06 13:33:42.121841 7f2da625c700  2 req 11269:0.000069:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:authorizing
2016-09-06 13:33:42.121869 7f2da625c700 10 get_canon_resource(): dest=/6rgw-8client-s3-32m-t1024-r7-4/myobjects199
2016-09-06 13:33:42.121874 7f2da625c700 10 auth_hdr:
PUT

application/octet-stream
Tue, 06 Sep 2016 10:32:58 GMT
/6rgw-8client-s3-32m-t1024-r7-4/myobjects199
2016-09-06 13:33:42.121915 7f2da625c700  2 req 11269:0.000142:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:normalizing buckets and tenants
2016-09-06 13:33:42.121922 7f2da625c700 10 s->object=myobjects199 s->bucket=6rgw-8client-s3-32m-t1024-r7-4
2016-09-06 13:33:42.121925 7f2da625c700  2 req 11269:0.000153:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:init permissions
2016-09-06 13:33:42.121952 7f2da625c700  2 req 11269:0.000179:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:recalculating target
2016-09-06 13:33:42.121958 7f2da625c700  2 req 11269:0.000186:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:reading permissions
2016-09-06 13:33:42.121961 7f2da625c700  2 req 11269:0.000188:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:init op
2016-09-06 13:33:42.121963 7f2da625c700  2 req 11269:0.000191:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:verifying op mask
2016-09-06 13:33:42.121968 7f2da625c700  2 req 11269:0.000194:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:verifying op permissions
2016-09-06 13:33:42.121971 7f2da625c700  5 Searching permissions for uid=user1 mask=50
2016-09-06 13:33:42.121973 7f2da625c700  5 Found permission: 15
2016-09-06 13:33:42.121975 7f2da625c700  5 Searching permissions for group=1 mask=50
2016-09-06 13:33:42.121976 7f2da625c700  5 Permissions for group not found
2016-09-06 13:33:42.121978 7f2da625c700  5 Searching permissions for group=2 mask=50
2016-09-06 13:33:42.121980 7f2da625c700  5 Permissions for group not found
2016-09-06 13:33:42.121981 7f2da625c700  5 Getting permissions id=user1 owner=user1 perm=2
2016-09-06 13:33:42.121983 7f2da625c700 10  uid=user1 requested perm (type)=2, policy perm=2, user_perm_mask=2, acl perm=2
2016-09-06 13:33:42.121985 7f2da625c700  2 req 11269:0.000213:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:verifying op params
2016-09-06 13:33:42.121988 7f2da625c700  2 req 11269:0.000216:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:pre-executing
2016-09-06 13:33:42.122009 7f2da625c700  2 req 11269:0.000237:s3:PUT /6rgw-8client-s3-32m-t1024-r7-4/myobjects199:put_obj:executing
2016-09-06 13:33:42.126340 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1234 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [writefull 0~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325e3caee0 con 0x7f2b5c8888d0
2016-09-06 13:33:42.128281 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1235 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [write 524288~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b29608d10 con 0x7f2b5c8888d0
2016-09-06 13:33:42.129987 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1236 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [write 1048576~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b29b1c120 con 0x7f2b5c8888d0
2016-09-06 13:33:42.131831 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1237 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [write 1572864~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325f1d84d0 con 0x7f2b5c8888d0
2016-09-06 13:33:42.133902 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1238 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [write 2097152~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325d119520 con 0x7f2b5c8888d0
2016-09-06 13:33:42.134679 7f33daffd700  1 -- 10.5.13.118:0/1172345512 --> 10.5.13.138:6855/3258630 -- ping magic: 0 v1 -- ?+0 0x7f2b5e1d47a0 con 0x7f2b645c0710
2016-09-06 13:33:42.134728 7f329d7fa700  1 -- 10.5.13.118:0/3572538537 --> 10.5.13.132:6927/2296623 -- ping magic: 0 v1 -- ?+0 0x7f2b645d3820 con 0x7f3687f83c30
2016-09-06 13:33:42.134742 7f329d7fa700  1 -- 10.5.13.118:0/3572538537 --> 10.5.13.139:6908/2278048 -- ping magic: 0 v1 -- ?+0 0x7f2b649c26a0 con 0x7f36874440f0
2016-09-06 13:33:42.134815 7f346f7fe700  1 -- 10.5.13.118:0/1609214345 --> 10.5.13.138:6979/3258661 -- ping magic: 0 v1 -- ?+0 0x7f2b64b57600 con 0x7f325de8f750
2016-09-06 13:33:42.134829 7f346f7fe700  1 -- 10.5.13.118:0/1609214345 --> 10.5.13.138:6894/3258709 -- ping magic: 0 v1 -- ?+0 0x7f2b654e6d80 con 0x7f325cc71be0
2016-09-06 13:33:42.134931 7f35d67fc700  1 -- 10.5.13.118:0/2714467722 --> 10.5.13.139:6839/2277865 -- ping magic: 0 v1 -- ?+0 0x7f31459fe6c0 con 0x7f3145b551d0
2016-09-06 13:33:42.134946 7f35d67fc700  1 -- 10.5.13.118:0/2714467722 --> 10.5.13.136:6812/1470817 -- ping magic: 0 v1 -- ?+0 0x7f2b66d03460 con 0x7f314546ada0
2016-09-06 13:33:42.134976 7f34bdffb700  1 -- 10.5.13.118:0/2650545525 --> 10.5.13.135:6832/1062084 -- ping magic: 0 v1 -- ?+0 0x7f325f2d2d50 con 0x7f325efed1c0
2016-09-06 13:33:42.134988 7f34bdffb700  1 -- 10.5.13.118:0/2650545525 --> 10.5.13.139:6949/2278107 -- ping magic: 0 v1 -- ?+0 0x7f325e4d7590 con 0x7f2b284c4f30
2016-09-06 13:33:42.135134 7f366aff5700  1 -- 10.5.13.118:0/3973283976 --> 10.5.13.139:6839/2277865 -- ping magic: 0 v1 -- ?+0 0x7f2b649c26a0 con 0x7f2b5c668de0
2016-09-06 13:33:42.135146 7f366aff5700  1 -- 10.5.13.118:0/3973283976 --> 10.5.13.139:6891/2278057 -- ping magic: 0 v1 -- ?+0 0x7f2b64b57600 con 0x7f32d40ddd60
2016-09-06 13:33:42.135153 7f366aff5700  1 -- 10.5.13.118:0/3973283976 --> 10.5.13.135:6813/1062079 -- ping magic: 0 v1 -- ?+0 0x7f2b66d03460 con 0x7f3685c60170
2016-09-06 13:33:42.135622 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1239 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [write 2621440~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b28a90b20 con 0x7f2b5c8888d0
2016-09-06 13:33:42.136109 7f3520ff9700  1 -- 10.5.13.118:0/2944301223 --> 10.5.13.135:6924/1063606 -- ping magic: 0 v1 -- ?+0 0x7f2b29676800 con 0x7f3144d8eb70
2016-09-06 13:33:42.136123 7f3520ff9700  1 -- 10.5.13.118:0/2944301223 --> 10.5.13.139:6891/2278057 -- ping magic: 0 v1 -- ?+0 0x7f325f2b05c0 con 0x7f2b640de0b0
2016-09-06 13:33:42.136128 7f3520ff9700  1 -- 10.5.13.118:0/2944301223 --> 10.5.13.138:6979/3258661 -- ping magic: 0 v1 -- ?+0 0x7f2b29a6c6c0 con 0x7f325cbce410
2016-09-06 13:33:42.137489 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1240 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [write 3145728~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325ddc6720 con 0x7f2b5c8888d0
2016-09-06 13:33:42.139334 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.132:6860/2295337 -- osd_op(client.843865.0:1241 161.ac986a58 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_1 [write 3670016~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325edc6670 con 0x7f2b5c8888d0
2016-09-06 13:33:42.141202 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1242 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [writefull 0~0] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b29930e80 con 0x7f3145e42bd0
2016-09-06 13:33:42.141301 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1243 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [writefull 0~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325e3caee0 con 0x7f3145e42bd0
2016-09-06 13:33:42.143169 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1244 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [write 524288~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b29608d10 con 0x7f3145e42bd0
2016-09-06 13:33:42.144720 7f2b9625c700  1 -- 10.5.13.118:0/1609214345 <== osd.99 10.5.13.138:6979/3258661 5 ==== osd_op_reply(971 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832769.114_myobjects434 [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v7 ==== 222+0+0 (1729489408 0 0) 0x7f2b29f39940 con 0x7f325de8f750
2016-09-06 13:33:42.144828 7f2e13b37700 10 setting object write_tag=fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.843838.11034
2016-09-06 13:33:42.144899 7f2e13b37700 10 cache get: name=default.rgw.data.root+.bucket.meta.6rgw-8client-s3-32m-t1024-r7-3:fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832769.114 : hit (requested=22, cached=23)
2016-09-06 13:33:42.144910 7f2e13b37700 10 cache get: name=default.rgw.data.root+.bucket.meta.6rgw-8client-s3-32m-t1024-r7-3:fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832769.114 : hit (requested=17, cached=23)
2016-09-06 13:33:42.144959 7f2e13b37700  1 -- 10.5.13.118:0/1609214345 --> 10.5.13.136:6821/1470134 -- osd_op(client.843838.0:1051 160.9e201afc .dir.fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832769.114.115 [call rgw.bucket_prepare_op] snapc 0=[] ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3684ef3130 con 0x7f3147579f10
2016-09-06 13:33:42.145044 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1245 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [write 1048576~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b29b1c120 con 0x7f3145e42bd0
2016-09-06 13:33:42.146946 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1246 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [write 1572864~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325f1d84d0 con 0x7f3145e42bd0
2016-09-06 13:33:42.148704 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1247 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [write 2097152~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325d119520 con 0x7f3145e42bd0
2016-09-06 13:33:42.150730 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1248 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [write 2621440~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f2b28a90b20 con 0x7f3145e42bd0
2016-09-06 13:33:42.152060 7f2772fea700  1 -- 10.5.13.118:0/2195534099 <== osd.49 10.5.13.138:6952/3258610 11 ==== osd_op_reply(1142 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832810.102__shadow_.YKMwT1JnaofaSMnxPU4thW8OMI7q5jF_1 [write 2097152~524288] v2131'7409 uv7409 ondisk = 0) v7 ==== 210+0+0 (2326243920 0 0) 0x7f36873e8590 con 0x7f368510f700
2016-09-06 13:33:42.152580 7f2e2a364700  1 -- 10.5.13.118:0/2195534099 --> 10.5.13.135:6821/1062092 -- osd_op(client.843857.0:1195 161.3de2ab7d fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832810.102__shadow_.YKMwT1JnaofaSMnxPU4thW8OMI7q5jF_3 [write 1572864~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3684ef3130 con 0x7f3144ce6420
2016-09-06 13:33:42.152737 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1249 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [write 3145728~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325ddc6720 con 0x7f3145e42bd0
2016-09-06 13:33:42.154584 7f2da625c700  1 -- 10.5.13.118:0/3433962517 --> 10.5.13.138:6889/3258686 -- osd_op(client.843865.0:1250 161.19f87a65 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.sNb9aNiCCvvN4k3cR0L-uPc87wclTV-_2 [write 3670016~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f325edc6670 con 0x7f3145e42bd0
2016-09-06 13:33:42.158197 7f27212d2700  1 -- 10.5.13.118:0/3352802102 <== osd.80 10.5.13.118:6939/1588750 13 ==== osd_op_reply(1099 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_1 [write 3145728~524288] v2131'7332 uv7332 ack = 0) v7 ==== 210+0+0 (1437624127 0 0) 0x7f2b2a66b440 con 0x7f325fd81100
2016-09-06 13:33:42.158247 7f27212d2700  1 -- 10.5.13.118:0/3352802102 <== osd.80 10.5.13.118:6939/1588750 14 ==== osd_op_reply(1099 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_1 [write 3145728~524288] v2131'7332 uv7332 ondisk = 0) v7 ==== 210+0+0 (1182319733 0 0) 0x7f2b285afff0 con 0x7f325fd81100
2016-09-06 13:33:42.158569 7f27212d2700  1 -- 10.5.13.118:0/3352802102 <== osd.80 10.5.13.118:6939/1588750 15 ==== osd_op_reply(1100 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_1 [write 3670016~524288] v2131'7333 uv7333 ack = 0) v7 ==== 210+0+0 (120570908 0 0) 0x7f325f4e6730 con 0x7f325fd81100
2016-09-06 13:33:42.158602 7f27212d2700  1 -- 10.5.13.118:0/3352802102 <== osd.80 10.5.13.118:6939/1588750 16 ==== osd_op_reply(1100 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_1 [write 3670016~524288] v2131'7333 uv7333 ondisk = 0) v7 ==== 210+0+0 (350709590 0 0) 0x7f2b29a643e0 con 0x7f325fd81100
2016-09-06 13:33:42.159087 7f2d94a39700  1 -- 10.5.13.118:0/3352802102 --> 10.5.13.132:6827/2295028 -- osd_op(client.843836.0:1129 161.f3a40015 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_3 [write 2621440~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3686b0dd40 con 0x7f325cce7470
2016-09-06 13:33:42.160941 7f2d94a39700  1 -- 10.5.13.118:0/3352802102 --> 10.5.13.132:6827/2295028 -- osd_op(client.843836.0:1130 161.f3a40015 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_3 [write 3145728~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3687e77a30 con 0x7f325cce7470
2016-09-06 13:33:42.162928 7f2d94a39700  1 -- 10.5.13.118:0/3352802102 --> 10.5.13.132:6827/2295028 -- osd_op(client.843836.0:1131 161.f3a40015 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_3 [write 3670016~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3687e77a30 con 0x7f325cce7470
2016-09-06 13:33:42.163632 7f2909536700  1 -- 10.5.13.118:0/1599966606 <== osd.155 10.5.13.118:6947/1588836 13 ==== osd_op_reply(1143 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832772.142__shadow_.WuVlc9WT_M54Px-Gc11g30gvegV26c1_6 [write 3670016~524288] v2131'6997 uv6997 ondisk = 0) v7 ==== 210+0+0 (3418689606 0 0) 0x7f2b6635d280 con 0x7f3687f352d0
2016-09-06 13:33:42.164883 7f2d94a39700  1 -- 10.5.13.118:0/3352802102 --> 10.5.13.135:6821/1062092 -- osd_op(client.843836.0:1132 161.d2cc9965 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_4 [writefull 0~0] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3687e77a30 con 0x7f325d4ea7e0
2016-09-06 13:33:42.164946 7f2d94a39700  1 -- 10.5.13.118:0/3352802102 --> 10.5.13.135:6821/1062092 -- osd_op(client.843836.0:1133 161.d2cc9965 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_4 [writefull 0~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3687e77a30 con 0x7f325d4ea7e0
2016-09-06 13:33:42.166958 7f2d94a39700  1 -- 10.5.13.118:0/3352802102 --> 10.5.13.135:6821/1062092 -- osd_op(client.843836.0:1134 161.d2cc9965 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_4 [write 524288~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3685815ca0 con 0x7f325d4ea7e0
2016-09-06 13:33:42.168849 7f2d94a39700  1 -- 10.5.13.118:0/3352802102 --> 10.5.13.135:6821/1062092 -- osd_op(client.843836.0:1135 161.d2cc9965 fdc7c472-a17a-4ed2-8ca6-a7be58b44cbe.832791.148__shadow_.m26cIq0xqQg1okjnIfI8gw63JZP2BsB_4 [write 1048576~524288] snapc 0=[] ack+ondisk+write+known_if_redirected e2131) v7 -- ?+0 0x7f3686b0dd40 con 0x7f325d4ea7e0

Comment 7 karan singh 2016-09-06 14:41:55 UTC
Yes Matt, it's the same scenario.

Comment 12 Kyle Bader 2017-04-05 19:25:27 UTC
I'm hitting this with my Presto workloads as well.

https://gist.github.com/mmgaggle/4f7862fdace69d61e9674b159f226ba3

Comment 13 Harish NV Rao 2017-07-28 12:04:56 UTC
@Matt, will this be fixed in 3.0? Please let us know.

Comment 14 Harish NV Rao 2017-08-08 12:59:15 UTC
(In reply to Harish NV Rao from comment #13)
> @Matt, will this be fixed in 3.0? Please let us know.

@Matt, a gentle reminder.

Comment 15 Matt Benjamin (redhat) 2017-08-08 13:22:45 UTC
(In reply to Harish NV Rao from comment #13)
> @Matt, will this be fixed in 3.0? Please let us know.

I don't think there is a specific RGW issue this tracks; rather, this log message is one of the two places where RGW detects that a client has timed out.

I'll check with Kyle on whether we should close this, or keep it open as a tracker for perf work, which is what we have been doing.

Matt
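
As an illustration of the client-timeout case described above, an aggressive client-side read timeout will abandon an in-flight GET, and RGW then sees -5 (EIO) when it keeps trying to write the response. A minimal sketch, assuming a boto3 client and placeholder endpoint/credentials:

----------------------------------------------------------------------------------
# Sketch only: a very short read timeout makes the SDK drop the connection
# mid-transfer, which is one way the RGW-side -5 (EIO) errors can show up.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    config=Config(read_timeout=1, retries={"max_attempts": 0}),
)

try:
    # A large object that takes longer than 1 second to stream will time out
    # on the client side while RGW is still sending data.
    body = s3.get_object(Bucket="mycontainers90", Key="myobjects431")["Body"].read()
except Exception as exc:
    print("client gave up:", exc)
----------------------------------------------------------------------------------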

Comment 16 Kyle Bader 2017-08-16 18:10:12 UTC
It's simply a log message for when RGW detects that the client has closed the connection before the request has finished. This could be due to a timeout in the application, say COSBench, or at the proxy in front of RGW, like HAProxy. There are also tools that send range requests larger than what they actually want and close the connection once they have the part they really need (a subset of the range request) - I'm looking at you, Presto.
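
For illustration, the "oversized range, early close" pattern might look like the following sketch (plain HTTP with the requests library; the URL and sizes are placeholders, and clients such as Presto do the equivalent internally):

----------------------------------------------------------------------------------
# Sketch of a client that asks for a large byte range but stops reading early
# and closes the connection; RGW logs the resulting write failure as -5 (EIO).
import requests

url = "http://rgw.example.com:8080/mycontainers90/myobjects431"  # placeholder

resp = requests.get(
    url,
    headers={"Range": "bytes=0-33554431"},   # ask for 32 MiB ...
    stream=True,
)
wanted = resp.raw.read(1024 * 1024)          # ... but only consume 1 MiB
resp.close()                                  # drop the connection mid-transfer
print("read %d bytes, then closed" % len(wanted))
----------------------------------------------------------------------------------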

This can be closed.