Bug 1850594

Summary: Beast Logs: The logs are very thin and do not contain much information in the Beast HTTP frontend.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Avi Mor <avmor>
Component: RGW
Assignee: Mark Kogan <mkogan>
Status: CLOSED DUPLICATE
QA Contact: Tejas <tchandra>
Severity: low
Docs Contact:
Priority: unspecified
Version: 4.1
CC: aavraham, cbodley, ceph-eng-bugs, kbader, mbenjamin, nchilaka, sweil
Target Milestone: rc
Target Release: 5.*
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-07-06 14:14:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Avi Mor 2020-06-24 14:02:33 UTC
Description of problem:
The logs emitted by the Beast HTTP frontend are very thin: at the default log level they record only the start and end of each request, with no client IP, HTTP method, object path, or user agent (compare the CivetWeb output under "Expected results" below).

Version-Release number of selected component (if applicable):

RHCS 4.1

How reproducible:

Always. Observe the RGW log output with:

tail -f /var/log/ceph/ceph-rgw*

Steps to Reproduce:

1. Upload objects to S3.
2. cd /var/log/ceph/
3. tail -f /var/log/ceph/ceph-rgw*

Actual results:

2020-06-24 13:14:12.747 7f9930681700  1 ====== starting new request req=0x7f9a6b1f28a0 =====
2020-06-24 13:14:12.778 7f997b717700  1 ====== req done req=0x7f9a6b1f28a0 op status=0 http_status=200 latency=0.0310002s ======
2020-06-24 13:14:12.789 7f9973f08700  1 ====== starting new request req=0x7f9a6b1f28a0 =====
2020-06-24 13:14:12.819 7f99ec7f9700  1 ====== req done req=0x7f9a6b1f28a0 op status=0 http_status=200 latency=0.0300002s ======
2020-06-24 13:14:12.830 7f9963ee8700  1 ====== starting new request req=0x7f9a6b1f28a0 =====
2020-06-24 13:14:12.854 7f9a2506a700  1 ====== req done req=0x7f9a6b1f28a0 op status=0 http_status=200 latency=0.0240002s ======
2020-06-24 13:14:12.863 7f995eede700  1 ====== starting new request req=0x7f9a6b1f28a0 =====
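The Beast lines above expose only four fields: the request pointer, the op status, the HTTP status, and the latency. A minimal parsing sketch (the regex and field names are my own, inferred from the sample lines above) makes it concrete how little can be extracted:

```python
import re

# Matches the "req done" lines emitted by the Beast frontend at log level 1.
# Only four fields are present: request pointer, op status, HTTP status, latency.
REQ_DONE = re.compile(
    r"req done req=(?P<req>0x[0-9a-f]+) "
    r"op status=(?P<op_status>-?\d+) "
    r"http_status=(?P<http_status>\d+) "
    r"latency=(?P<latency>[\d.]+)s"
)

def parse_req_done(line):
    """Return the extractable fields of a Beast 'req done' line, or None."""
    m = REQ_DONE.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    fields["latency"] = float(fields["latency"])
    return fields

line = ("2020-06-24 13:14:12.778 7f997b717700  1 ====== req done "
        "req=0x7f9a6b1f28a0 op status=0 http_status=200 latency=0.0310002s ======")
print(parse_req_done(line))
```

Note there is nothing here to answer "which client PUT which object" -- exactly the information the CivetWeb access-log line below provides.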

Expected results:

Per-request access-log entries comparable to those the CivetWeb frontend produces, e.g.:


2020-06-24 13:58:23.004 7f9dab89d700  1 ====== starting new request req=0x7f9dab895170 =====
2020-06-24 13:58:23.038 7f9dab89d700  1 ====== req done req=0x7f9dab895170 op status=0 http_status=200 latency=0.0340004s ======
2020-06-24 13:58:23.038 7f9dab89d700  1 civetweb: 0x556116784000: 192.168.44.1 - - [24/Jun/2020:13:58:20 +0000] "PUT /platinum/62e4b4d4-9b58-4812-ba24-886da1539b42 HTTP/1.1" 200 323 - Boto3/1.13.6 Python/3.8.2 Linux/3.10.0-1062.el7.x86_64 Botocore/1.16.6
2020-06-24 13:58:23.056 7f9dab89d700  1 ====== starting new request req=0x7f9dab895170 =====
2020-06-24 13:58:23.100 7f9dab89d700  1 ====== req done req=0x7f9dab895170 op status=0 http_status=200 latency=0.0440005s ======
2020-06-24 13:58:23.100 7f9dab89d700  1 civetweb: 0x556116784000: 192.168.44.1 - - [24/Jun/2020:13:58:20 +0000] "PUT /platinum/b37729e2-7849-49de-ac5f-65bdeb149ffc HTTP/1.1" 200 323 - Boto3/1.13.6 Python/3.8.2 Linux/3.10.0-1062.el7.x86_64 Botocore/1.16.6


Additional info:
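One general way to get more per-request detail in the meantime is to raise the RGW debug subsystem level. This is a generic Ceph tuning sketch, not a workaround stated in this bug, and high levels are very verbose, so it is suited only to temporary diagnosis:

```ini
# ceph.conf on the RGW host -- illustrative sketch, not from this report.
# The section name below is a placeholder for the actual RGW instance name.
[client.rgw.myhost]
    ; 20/20 = log level / in-memory level; very verbose, use temporarily.
    debug rgw = 20/20
```

The same setting can usually be changed at runtime through the daemon's admin socket with `ceph daemon <rgw-daemon> config set debug_rgw 20`, avoiding a restart.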

Comment 3 Avi Avraham 2020-07-06 13:40:24 UTC
That's great news, thanks for the update.

Comment 4 Matt Benjamin (redhat) 2020-07-06 14:14:52 UTC

*** This bug has been marked as a duplicate of bug 1845086 ***

Comment 5 Red Hat Bugzilla 2023-09-14 06:02:47 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days