Bug 1460762 - jobs API call returns: Invalid JSON received.
Status: NEW
Product: Red Hat Storage Console
Classification: Red Hat
Component: API
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assigned To: Anup Nivargi
Depends On:
Reported: 2017-06-12 12:13 EDT by Filip Balák
Modified: 2017-06-23 06:21 EDT
CC: 5 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
logs from hosts (2.91 MB, application/zip)
2017-06-12 12:13 EDT, Filip Balák

External Trackers
Tracker ID Priority Status Summary Last Updated
Github /Tendrl/api/issues/173 None None None 2017-06-12 12:22 EDT

Description Filip Balák 2017-06-12 12:13:06 EDT
Created attachment 1287072 [details]
logs from hosts

Description of problem:
I tried to import a Gluster cluster but the job failed. After the failure I checked the `hostname/api/1.0/jobs` API call, but the response is: `{"errors":{"message":"Invalid JSON received."}}`. This error message persists. The original import job was in progress and nearly finished. Unfortunately, I do not have a backup of the messages from before the API started responding with this error.

Version-Release number of selected component (if applicable):

How reproducible:
About 20% of the time; very hard to reproduce.

Steps to Reproduce:
1. Create import gluster cluster job.
2. Hope it fails.
3. If it fails, check the response of a GET request to `hostname/api/1.0/jobs`.
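
For reference, a short Python check can distinguish the error envelope seen in this bug from a normal job listing. This is only an illustrative sketch: the response body below is the literal payload reported above, and the `is_error_response` helper is a hypothetical name, not part of the Tendrl API.

```python
import json

# Example response body observed after a failed import job
# (from GET hostname/api/1.0/jobs; the hostname is site-specific).
body = '{"errors":{"message":"Invalid JSON received."}}'

def is_error_response(raw):
    """Return the error message if the jobs API returned its error
    envelope, or None if the body looks like a normal job listing."""
    try:
        data = json.loads(raw)
    except ValueError:
        return "response is not JSON at all"
    if isinstance(data, dict) and "errors" in data:
        return data["errors"].get("message")
    return None

print(is_error_response(body))  # → Invalid JSON received.
```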

Actual results:
After a failed import job, the `hostname/api/1.0/jobs` API call returns: `{"errors":{"message":"Invalid JSON received."}}`

Expected results:
The `hostname/api/1.0/jobs` API call should return a valid response.

Additional info:
Comment 3 Nishanth Thomas 2017-06-20 00:11:36 EDT
Is this behaviour consistent, or have you only seen it once?
Comment 4 Filip Balák 2017-06-20 03:18:05 EDT
I have seen this multiple times, usually when some job fails. Another reproducer is in BZ 1462807.
Comment 5 Nishanth Thomas 2017-06-20 04:14:27 EDT
Once it fails, does it keep failing forever? Is there any workaround to get back to normal?
Comment 6 Filip Balák 2017-06-20 04:33:15 EDT
Yes, it remains in the failing state. I am not sure whether there is a workaround. I tried to delete the job that I thought broke it from the etcd /queue, but that did not fix the issue.
However, the API calls /jobs/:job_id: and /jobs/:job_id:/{messages|status|...} work.
