Bug 1383728

Summary: [RHCS 2] RGW goes into loop causing 100% CPU utilization
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ken Dreyer (Red Hat) <kdreyer>
Component: RGW
Assignee: Marcus Watts <mwatts>
Status: CLOSED ERRATA
QA Contact: shilpa <smanjara>
Severity: high
Docs Contact:
Priority: unspecified
Version: 2.1
CC: cbodley, ceph-eng-bugs, ceph-qe-bugs, hnallurv, jbautist, kbader, kdreyer, linuxkidd, mbenjamin, mhernon, mwatts, owasserm, smanjara, sweil, tmuthami, tserlin
Target Milestone: rc
Target Release: 2.1
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: RHEL: ceph-10.2.3-9.el7cp; Ubuntu: ceph_10.2.3-10redhat1xenial
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1380196
Environment:
Last Closed: 2016-11-22 19:32:23 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  recreate test case using curl
  recreate test case using python-swiftclient
  Create subuser for swift, and recreate test case using curl
  recreate test case using python-swiftclient
  running swift tests w/o teuthology
  running s3-tests without teuthology
(All attachment flags: none.)

Description Ken Dreyer (Red Hat) 2016-10-11 15:31:04 UTC
+++ This bug was initially created as a clone of Bug #1380196 +++

Description of problem:

  Virtualized RGW host goes into loop causing 100% CPU utilization.

Version-Release number of selected component (if applicable):

  ceph-radosgw-0.94.5-12.el7cp.x86_64
  RHEL 7.2 servers running OpenStack Kilo with Ceph Storage Backend
===========================================================================

The customer reports that they set up 3 virtualized RGW nodes several months ago, and during testing noticed that all 3 are running at 100% CPU about 95% of the time, even though they do not think the nodes are being heavily used. We requested RGW debug logs; they are attached to case 01705883 as 'client.radosgw.log.gz'.

The customer also provided the following details when asked for a specific timeframe to focus on:

"Between 10:10 & 10:20 AM . And it was not using ALL CPU's, but it was using more than 50% (which should have been the limit) and if we had left it alone it would have eventually used all the CPU's. Below is the partial output from sar for that timeframe ...

                CPU     %user     %nice   %system   %iowait    %steal     %idle
09:50:01 AM     all      0.07      0.00      0.04      0.00      0.00     99.89
10:00:01 AM     all      0.08      0.00      0.06      0.00      0.00     99.86
10:10:01 AM     all      0.31      0.00      0.22      0.00      0.00     99.48
10:20:01 AM     all     29.40      0.00     30.32      0.05      0.04     40.20
10:30:01 AM     all      9.51      0.00     13.63      0.01      0.01     76.84"

==================================================================================== 

Log analysis shows the 'log entries per second' rate ramping up quickly starting at the 10:09:41 mark in the debug log, and the process appears to enter a loop around that time.

Comment 3 Marcus Watts 2016-10-13 02:33:45 UTC
Created attachment 1209899 [details]
recreate test case using curl

Comment 4 Marcus Watts 2016-10-13 02:34:34 UTC
Created attachment 1209900 [details]
recreate test case using python-swiftclient
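
The attachment contents are not reproduced here. For orientation only, a minimal python-swiftclient sketch of the kind of reproduction loop such a test case typically contains might look like the following; the endpoint, credentials, and container/object names are placeholders, not values taken from the attachment:

  # Hypothetical reproduction sketch -- endpoint and credentials are placeholders.
  import swiftclient.client

  conn = swiftclient.client.Connection(
      authurl='http://rgw.example.com:7480/auth/1.0',  # assumed RGW swift auth endpoint
      user='testuser:swift',                           # assumed swift subuser
      key='SWIFT_SECRET_KEY')

  conn.put_container('loop-test')
  for i in range(1000):
      name = 'obj-%d' % i
      conn.put_object('loop-test', name, contents=b'x' * 4096)
      conn.get_object('loop-test', name)  # read back while watching radosgw CPU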

Comment 13 Marcus Watts 2016-10-15 07:57:51 UTC
Created attachment 1210744 [details]
Create subuser for swift, and recreate test case using curl
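
For context, a swift subuser is normally created with radosgw-admin, e.g. something along the lines of "radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --key-type=swift --access=full --gen-secret" (the user name here is a placeholder). The attached recreation uses curl; a hedged Python equivalent of that request flow, assuming the default swift auth entry point on RGW, is:

  # Hypothetical equivalent of the curl-based recreation -- host, user, and key
  # are placeholders, not values from the attachment.
  import requests

  # Swift v1-style auth against RGW.
  auth = requests.get('http://rgw.example.com:7480/auth/1.0',
                      headers={'X-Auth-User': 'testuser:swift',
                               'X-Auth-Key': 'SWIFT_SECRET_KEY'})
  token = auth.headers['X-Auth-Token']
  storage_url = auth.headers['X-Storage-Url']

  # Create a container and upload/download an object while watching radosgw CPU.
  requests.put(storage_url + '/curl-test', headers={'X-Auth-Token': token})
  requests.put(storage_url + '/curl-test/obj1',
               headers={'X-Auth-Token': token}, data=b'x' * 4096)
  requests.get(storage_url + '/curl-test/obj1', headers={'X-Auth-Token': token})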

Comment 14 Marcus Watts 2016-10-15 07:59:08 UTC
Created attachment 1210746 [details]
recreate test case using python-swiftclient

Comment 20 Marcus Watts 2016-10-20 07:09:07 UTC
Created attachment 1212371 [details]
running swift tests w/o teuthology

Comment 21 Marcus Watts 2016-10-20 07:09:46 UTC
Created attachment 1212372 [details]
running s3-tests without teuthology
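
The attachment covers running the full upstream s3-tests suite outside teuthology; as a quicker standalone spot check of the same S3 path, a boto-based round-trip against RGW can be used. This is only a sketch, not the suite itself, and the endpoint and credentials are placeholders:

  # Hypothetical standalone S3 round-trip (not the full s3-tests suite).
  import boto
  import boto.s3.connection

  conn = boto.connect_s3(
      aws_access_key_id='ACCESS_KEY',
      aws_secret_access_key='SECRET_KEY',
      host='rgw.example.com', port=7480, is_secure=False,
      calling_format=boto.s3.connection.OrdinaryCallingFormat())

  bucket = conn.create_bucket('s3-loop-test')
  key = bucket.new_key('obj1')
  key.set_contents_from_string('x' * 4096)
  data = key.get_contents_as_string()  # read back while watching radosgw CPU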

Comment 25 shilpa 2016-10-28 11:58:08 UTC
Used the large object upload script to test. radosgw CPU utilization did not go above 6% while uploading/downloading the objects.
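
The verification script itself is not attached. A hedged sketch of how radosgw CPU can be watched while the upload runs, using psutil (the process name and sampling interval are assumptions, not taken from the actual test):

  # Hypothetical CPU watcher for the verification step.
  import psutil

  rgw = next(p for p in psutil.process_iter(['name']) if p.info['name'] == 'radosgw')
  for _ in range(60):                      # sample for roughly a minute during the upload
      pct = rgw.cpu_percent(interval=1.0)  # per-process CPU over the last second
      print('radosgw CPU: %.1f%%' % pct)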

Comment 27 errata-xmlrpc 2016-11-22 19:32:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2815.html