Bug 1669838 - [RFE] Including some rgw bits in mgr-restful plugin
Summary: [RFE] Including some rgw bits in mgr-restful plugin
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Mgr Plugins
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 3.3
Assignee: Boris Ranto
QA Contact: Madhavi Kasturi
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1726135
 
Reported: 2019-01-27 13:14 UTC by Servesha
Modified: 2023-03-24 14:31 UTC
CC List: 8 users

Fixed In Version: RHEL: ceph-12.2.12-11.el7cp Ubuntu: ceph_12.2.12-11redhat1
Doc Type: Enhancement
Doc Text:
.The RESTful plug-in now exposes performance counters
The RESTful plug-in for the Ceph Manager (`ceph-mgr`) now exposes performance counters that include a number of Ceph Object Gateway metrics. To query the performance counters through the REST API provided by the RESTful plug-in, access the `/perf` endpoint.
Clone Of:
Environment:
Last Closed: 2019-08-21 15:10:25 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2019:2538 0 None None None 2019-08-21 15:10:42 UTC

Description Servesha 2019-01-27 13:14:15 UTC
Description of problem: 'ceph health' information cannot be seen in the mgr restful module.


Version-Release number of selected component (if applicable): RHCS version 3.2


How reproducible: always


Steps to Reproduce:
1. Enable and configure the mgr restful module.
2. Try to access the mgr restful module in a browser using the URL https://<ceph-mgr>:8003/config/cluster.


Actual results: 'ceph health' information is not available in the mgr restful module.


Expected results: 'ceph health' information should be part of the cluster information exposed by the module.


Additional info: 'ceph health' is not visible in the mgr restful module, but it is in ceph-rest-api. ceph-rest-api is deprecated in RHCS version 3.2.

Comment 1 Boris Ranto 2019-01-28 17:46:31 UTC
Hi Servesha,

the ceph-rest-api was superseded by the /request endpoint in the restful module. You can get more information about the endpoint by reading the upstream documentation:

http://docs.ceph.com/docs/master/mgr/restful/#the-request-endpoint

or reading through this thread on ceph-devel:

https://marc.info/?l=ceph-devel&m=154773125114644&w=2

Basically, what you need to do is this:

[root@node2 ~]# python
Python 2.7.5 (default, Jul 13 2018, 13:06:57) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> _auth=('admin', '...')
>>> result = requests.post('https://node2:8003/request?wait=1', json={'prefix': 'health'}, auth=_auth, verify=False)
>>> print result.json()['finished'][0]['outb']
u'HEALTH_OK\n'
>>> 

Following the mailing thread on ceph-devel should give you more insight on how this works and what is going on when you use the endpoint.
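
For reference, the same call as a standalone Python 3 script (a minimal sketch; the host, port, and API key are placeholders matching the session above):

import requests

# Placeholder credentials; the API key comes from 'ceph restful create-key admin'.
auth = ('admin', '<api-key>')

# Run 'ceph health' through the /request endpoint; wait=1 blocks until the
# command has finished.
result = requests.post('https://node2:8003/request?wait=1',
                       json={'prefix': 'health'},
                       auth=auth, verify=False)

# The finished command's stdout is returned in 'outb'.
print(result.json()['finished'][0]['outb'])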

Comment 2 Servesha 2019-03-05 09:53:28 UTC
Hello,

The customer wants the mgr-restful-module to have exactly the same functions as the deprecated ceph-rest-api.
The following rgw-related items are not included in the mgr-restful-module:

rgw successful requests
rgw failed requests
rgw performance details (read/write IOPS, latency, throughput, etc.)
rgw active connections
rgw active thread count

In the mgr module we can see information about a particular request using the URLs https://10.74.255.33:8003/request and https://10.74.255.33:8003/request/<id>, but we are unable to see rgw failed requests, rgw successful requests, rgw active thread count, connections, and rgw performance details.

Information about other rgw-related settings can be seen using https://10.74.255.33:8003/config/cluster.

We can see monitor and OSD information separately using https://10.74.255.33:8003/mon and https://10.74.255.33:8003/osd respectively, as in the sketch below. Is there any way in the mgr module to get the rgw information listed above?
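
For reference, these GET endpoints can also be queried from a script (a minimal sketch, assuming the same host, port, and admin credentials as in the URLs above):

import requests

auth = ('admin', '<api-key>')  # placeholder credentials

# Read-only GET endpoints exposed by the restful module.
mons = requests.get('https://10.74.255.33:8003/mon', auth=auth, verify=False)
osds = requests.get('https://10.74.255.33:8003/osd', auth=auth, verify=False)

print(mons.json())  # monitor information
print(osds.json())  # OSD information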

The customer also mentioned that the crushmap structure is available in ceph-rest-api. We can see CRUSH map information using https://10.74.255.33:8003/crush/rule, but it only gives output like this:

[
    {
        "max_size": 10,
        "min_size": 1,
        "osd_count": 6,
        "rule_id": 0,
        "rule_name": "replicated_rule",
        "ruleset": 0,
        "steps": [
            {"item": -1, "item_name": "default", "op": "take"},
            {"num": 0, "op": "chooseleaf_firstn", "type": "host"},
            {"op": "emit"}
        ],
        "type": 1
    }
]

Can we get the whole CRUSH map structure instead of only this much information?

The functions which the customer mentioned are present in ceph-rest-api:
Df - viewing cluster-wide consumption (used, available, overall)
Pg stats - viewing the PG stats (state, number, etc.)
Features - monitoring clients connected to the Ceph cluster
Health - monitoring the cluster's health
Mon_status - viewing monitor status (quorum, clock skew, daemon status, etc.)
Nodes ls - viewing nodes participating in the cluster
Device classes - viewing device classes
Crush dump - the crushmap structure
Osd df - viewing per-disk consumption (used, available, overall)
Osd down - viewing down OSDs
Osd perf - viewing performance details on OSDs (apply latency, commit latency, request latency, read/write IOPS, throughput, etc.)
Osd perf histogram - the same
Osd tree - the OSD tree structure
Pool ls - viewing pools
Pool stats - performance details for pools (read/write IOPS, latency, throughput, etc.)
Daemons - mon/mgr/osd/rgw daemon status

My question is: do I have to raise an RFE to bring some features from ceph-rest-api into the mgr-restful-module? There seems to be a gap between ceph-rest-api and the mgr module. Please let me know.

Thank you!

Best regards,
Servesha

Comment 3 Servesha 2019-03-05 09:55:22 UTC
Hello, 

I used the doc below as a reference for the mgr module.

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_management_api/questions-and-answers

Thanks

Best regards,
Servesha

Comment 4 Servesha 2019-04-11 08:45:49 UTC
Hello,

I had a call with the customer. There are some major gaps between ceph-rest-api and the mgr module which we have to fill. The customer has already upgraded to RHCS 3.2 but is still on ceph-rest-api, because these gaps prevent moving to the mgr module. ceph-rest-api is deprecated in RHCS 3.2.

We cannot really force the customer to use the mgr module, and we can't provide a support exception either. Their requirement can't be fulfilled; the mgr module doesn't expose as much information as ceph-rest-api.


I have already described in detail what needs to be added to the mgr module. [Please refer to Comment #2]

Could we please have the same functionality as was present in ceph-rest-api?

Kind Regards,
Servesha

Comment 5 Boris Ranto 2019-04-11 11:18:37 UTC
(In reply to Servesha from comment #2)
> Hello,
> 

Hey,

> The customer wants the mgr-restful-module to have exactly the same
> functions as the deprecated ceph-rest-api.
> The following rgw-related items are not included in the mgr-restful-module:
> 
> rgw successful requests
> rgw failed requests
> rgw performance details (read/write IOPS, latency, throughput, etc.)
> rgw active connections
> rgw active thread count
> 
> In the mgr module we can see information about a particular request using
> the URLs https://10.74.255.33:8003/request and
> https://10.74.255.33:8003/request/<id>, but we are unable to see rgw
> failed requests, rgw successful requests, rgw active thread count,
> connections, and rgw performance details.
> 
> Information about other rgw-related settings can be seen using
> https://10.74.255.33:8003/config/cluster.
> 
> We can see monitor and OSD information separately using
> https://10.74.255.33:8003/mon and https://10.74.255.33:8003/osd
> respectively. Is there any way in the mgr module to get the rgw
> information listed above?
> 
> 


You are right, they are not available in the restful module. The Prometheus exporter module should provide these (and more), though. This could be an RFE. However, please note that depending on what the customer is trying to achieve, they might be better served by the Prometheus exporter module (and a Prometheus server) than by the restful module.


> The customer also mentioned that the crushmap structure is available in
> ceph-rest-api. We can see CRUSH map information using
> https://10.74.255.33:8003/crush/rule, but it only gives output like this:
> 
> [
>     {
>         "max_size": 10,
>         "min_size": 1,
>         "osd_count": 6,
>         "rule_id": 0,
>         "rule_name": "replicated_rule",
>         "ruleset": 0,
>         "steps": [
>             {"item": -1, "item_name": "default", "op": "take"},
>             {"num": 0, "op": "chooseleaf_firstn", "type": "host"},
>             {"op": "emit"}
>         ],
>         "type": 1
>     }
> ]
> 
> Can we get the whole CRUSH map structure instead of only this much
> information?
> 

Is this what they were looking for?

[root@node2 ~]# python
Python 2.7.5 (default, Jul 13 2018, 13:06:57) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> _auth=('admin', '...')
>>> result = requests.post('https://node2:8003/request?wait=1', json={'prefix': 'osd crush dump'}, auth=_auth, verify=False)
>>> print result.json()['finished'][0]['outb']
...

If not, then there might be other prefixes like 'osd crush rule dump' or 'osd crush rule ls' that they are looking for.
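
A quick way to try those candidate prefixes (a minimal sketch, using the same placeholder credentials and endpoint as the session above):

import requests

auth = ('admin', '...')  # placeholder credentials

# Run each candidate command through the /request endpoint and print its output.
for prefix in ('osd crush dump', 'osd crush rule dump', 'osd crush rule ls'):
    result = requests.post('https://node2:8003/request?wait=1',
                           json={'prefix': prefix}, auth=auth, verify=False)
    print(prefix, '=>', result.json()['finished'][0]['outb'])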

> The functions which the customer mentioned are present in ceph-rest-api:
> Df - viewing cluster-wide consumption (used, available, overall)
> Pg stats - viewing the PG stats (state, number, etc.)
> Features - monitoring clients connected to the Ceph cluster
> Health - monitoring the cluster's health
> Mon_status - viewing monitor status (quorum, clock skew, daemon status,
> etc.)
> Nodes ls - viewing nodes participating in the cluster
> Device classes - viewing device classes
> Crush dump - the crushmap structure
> Osd df - viewing per-disk consumption (used, available, overall)
> Osd down - viewing down OSDs
> Osd perf - viewing performance details on OSDs (apply latency, commit
> latency, request latency, read/write IOPS, throughput, etc.)
> Osd perf histogram - the same
> Osd tree - the OSD tree structure
> Pool ls - viewing pools
> Pool stats - performance details for pools (read/write IOPS, latency,
> throughput, etc.)
> Daemons - mon/mgr/osd/rgw daemon status
> 


These are also available in the restful module, right? (Although they might only be available through the /request endpoint.)


> My question is do I have to raise an RFE to have some features from
> ceph-rest-api to mgr-restful-module. It seems there is some gap between
> ceph-rest-api and mgr-module. Please let me know.
> 


You can raise an RFE, especially for the RGW bits.


> Thank you!
> 
> Best regards,
> Servesha

Comment 6 Servesha 2019-04-12 08:38:28 UTC
Hello Boris,

>> These are also available in the restful module, right? (Although they might only be available through the /request endpoint.)

- Yeah, I'm sure some features are available in the mgr restful plugin, like 'ceph health', but not everything is. For example: osd perf histogram, pool stats, viewing down OSDs, ceph osd tree, and pg stats are not there in mgr restful. If there are ways to get that information in mgr restful, please let me know.

===

In the doc I could not see prefixes such as 'osd crush rule dump' or 'osd crush rule ls'. The doc only mentions 'crush rule'. Please refer to the doc link [1].

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_management_api/questions-and-answers#how-can-i-view-crush-rules

===

For the rgw bits, I have raised an RFE - https://bugzilla.redhat.com/show_bug.cgi?id=1699233

===

Best Regards,
Servesha

Comment 7 Boris Ranto 2019-04-29 14:33:51 UTC
What endpoint/command does the customer use to get the rgw data mentioned above? I have been looking into the ceph-rest-api source code and there is nothing special regarding rgw handling. That suggests the information the customer is looking for might already be covered by the restful module's /request endpoint. We just need to figure out what command (prefix) they need to run.

Alternatively, if it really is not covered by the current API, we may want to expose all the perf counters in the restful module. That should cover the customer's request.

Comment 8 Servesha 2019-04-30 05:42:05 UTC
Hello Boris,

- You're right, rgw-related requests are not present in ceph-rest-api. What the newer API covers is that we can see information about a particular request using https://10.74.255.33:8003/request and https://10.74.255.33:8003/request/<id>.

- But we are unable to see rgw failed requests, rgw successful requests, rgw active thread count, connections, and rgw performance details.

- I think we should cover things like rgw failed requests, rgw successful requests, rgw active thread count, connections, and rgw performance details. That would fully cover the customer's request regarding rgw information.

Best Regards,
Servesha

Comment 9 Boris Ranto 2019-04-30 07:10:15 UTC
Hi Servesha,

this time I was asking how they access the rgw-related requests using the deprecated ceph-rest-api module/app. I don't see this data anywhere in ceph-rest-api -- at least not directly. They could be using some more advanced/hidden method to get that data, though, and if they are, I would like to know which one and how they are getting that information from the deprecated ceph-rest-api. Or is this really a new feature request, i.e. having this new data exposed in the restful ceph-mgr module even though it wasn't exposed in the deprecated ceph-rest-api?

From what I can see, if they could access the data before, I could give them steps on how to do it with the current restful module. If they couldn't access it before, I could probably make the perf counters available through the new restful module (if upstream is OK with that). That should cover at least some (if not all) of the information that you mentioned above.

The new rgw data would then look like this:

https://pastebin.com/UTAZ4Wdp

Regards,
Boris

Comment 10 Servesha 2019-04-30 07:48:30 UTC
Hello Boris,

They are not accessing rgw-related data using the deprecated ceph-rest-api, since it's not available there. Yes, this is a new feature request, i.e. having this new data exposed in the restful ceph-mgr module even though it wasn't exposed in the deprecated ceph-rest-api.

They cannot access rgw-related information using either module, so they requested the RFE.

https://pastebin.com/UTAZ4Wdp 

The data mentioned in the pastebin covers information for all rgw requests, right? Could we add filters and get data for 1. rgw failed requests, 2. rgw successful requests, etc.?


Best Regards,
Servesha

Comment 11 Boris Ranto 2019-04-30 08:24:56 UTC
This exposes the perf counters as described e.g. here:

http://docs.ceph.com/docs/master/dev/perf_counters/

This is the data that Ceph collects internally about its daemons. What it means is that this is per daemon -- each rgw, osd, etc. daemon has its own set of perf counters. My current scratch implementation does support filtering by daemon through regexps with the '/perf?daemon=<regexp>' syntax. The resulting data is simple JSON, so any option should be easily accessible programmatically. This should also let you find the number of active rgw daemons simply by querying the '/perf?daemon=rgw.*' endpoint and counting the number of returned daemons, as in the sketch below.
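
For illustration, querying the filtered endpoint could look like this (a minimal sketch against the scratch implementation described above; host, port, and credentials are placeholders, and the exact shape of the returned JSON is an assumption):

import requests

auth = ('admin', '<api-key>')  # placeholder credentials

# Fetch perf counters for all rgw daemons; the daemon parameter is a regexp.
result = requests.get('https://node2:8003/perf?daemon=rgw.*',
                      auth=auth, verify=False)
perf = result.json()

# Assuming the response is a JSON object keyed by daemon name, counting the
# keys gives the number of active rgw daemons.
print('active rgw daemons:', len(perf))
print(sorted(perf))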

Honestly, I am not sure about per-counter filters, as they would make things fairly slow if you were to gather several pieces of information. It would be better to get the information about all the daemons that you care about and then store and access it as necessary.

Exposing any additional information would require setting up a /rgw endpoint to contain it, and that would be subject to further research on the feasibility of exposing such information.

Comment 12 Boris Ranto 2019-04-30 10:28:53 UTC
We should improve the documentation of the /request endpoint for our next release to better cover how the restful module supersedes the deprecated ceph-rest-api.

Comment 13 Servesha 2019-05-01 07:01:05 UTC
Hello Boris,

Regarding comment #11 - agreed. It'd be better to get information about all daemons and then access/store what we need.

Also, regarding comment #12 - yes, we will have to improve the documentation in order to eliminate confusion about the gaps between the modules.


We can also include the perf counters mentioned in the PR (once it's merged):
https://github.com/ceph/ceph/pull/27885

Best Regards,
Servesha

Comment 14 Servesha 2019-05-30 10:27:49 UTC
Hello Boris,

May I know the status of this bug?

Comment 15 Boris Ranto 2019-06-04 16:45:31 UTC
The current status is that the upstream PR was merged, and the upstream backport for Nautilus is currently pending QA. I have backported this downstream for 3.3, so that is where you should be able to see the perf counters being exposed by the restful module.

Comment 28 errata-xmlrpc 2019-08-21 15:10:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2538

