Bug 880559

Summary: RHEVM-CLI: Update prompt status upon "Connection failure"
Product: Red Hat Enterprise Virtualization Manager Reporter: Ilia Meerovich <iliam>
Component: ovirt-engine-cli    Assignee: Michael Pasternak <mpastern>
Status: CLOSED ERRATA QA Contact: Ilia Meerovich <iliam>
Severity: low Docs Contact:
Priority: unspecified    
Version: 3.2.0    CC: aburden, bazulay, cboyle, dyasny, ecohen, hateya, iheim, oramraz, Rhev-m-bugs, sgrinber, ykaul
Target Milestone: ---   
Target Release: 3.2.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: infra
Fixed In Version: SF4 Doc Type: Bug Fix
Doc Text:
If a connection failed, an incorrect error message was displayed: 'unknown error: [ERROR]::Connection failure, [Errno 111] Connection refused'. This has been changed so that the correct status is displayed in the error message.
Story Points: ---
Clone Of: Environment:
Last Closed: 2013-06-10 20:28:46 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Infra RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 915537    

Description Ilia Meerovich 2012-11-27 10:28:46 UTC
Please take a look:

[imeerovi@imeerovi ART]$ /usr/bin/rhevm-shell -c -l https://demo-art-dut.qa.lab.tlv.redhat.com:443/api/ -u admin@internal -I
Password: 


 ==========================================
 >>> connected to RHEVM manager 3.1.0.0 <<<
 ==========================================


        
 ++++++++++++++++++++++++++++++++++++++++++
 
           Welcome to RHEVM shell
 
 ++++++++++++++++++++++++++++++++++++++++++
        
    
[RHEVM shell (connected)]# list clusters --query "name=RestCluster*" --max=-1 --case_sensitive=true

id         : a005ce0e-2d76-11e2-a9b2-001a4a236101
name       : RestCluster1

id         : b3f8b124-2d76-11e2-a46f-001a4a236101
name       : RestCluster4

id         : bf4a1c70-2d76-11e2-9782-001a4a236101
name       : RestCluster5

id         : c0ef5af4-2d76-11e2-a22e-001a4a236101
name       : RestCluster6

id         : c3bb6a2a-2d76-11e2-80cc-001a4a236101
name       : RestCluster7
...
(reboot of rhevm machine)
...
[RHEVM shell (connected)]# 
[RHEVM shell (connected)]# 
[RHEVM shell (connected)]# 
[RHEVM shell (connected)]# 
[RHEVM shell (connected)]# 
[RHEVM shell (connected)]# 
[RHEVM shell (connected)]# 
[RHEVM shell (connected)]# list clusters --query "name=RestCluster*" --max -1 --case_sensitive true

unknown error: [ERROR]::Connection failure, [Errno 111] Connection refused

[RHEVM shell (connected)]#
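
The defect in the transcript above is that the prompt still claims "(connected)" after [Errno 111] Connection refused. A minimal sketch of the desired behaviour (this is hypothetical illustration code, not the actual rhevm-cli source; the Shell class and refused helper are invented for the example): catch socket-level failures around each request and downgrade the prompt state.

```python
# Hypothetical sketch, not the real rhevm-cli implementation: a shell that
# flips its prompt to "(disconnected)" when a request fails at the socket
# level (connection refused/reset), instead of printing "unknown error".
import errno
import socket

class Shell:
    def __init__(self):
        self.connected = True

    @property
    def prompt(self):
        state = "connected" if self.connected else "disconnected"
        return "[RHEVM shell (%s)]# " % state

    def run(self, request):
        """Invoke a callable that performs an API request; on a
        connection-level error, update the prompt state before re-raising."""
        try:
            return request()
        except socket.error as e:
            if e.errno in (errno.ECONNREFUSED, errno.ECONNRESET):
                self.connected = False  # prompt now shows (disconnected)
            raise

def refused():
    # Simulates the engine being down: nothing is listening on the port.
    raise socket.error(errno.ECONNREFUSED, "Connection refused")

shell = Shell()
try:
    shell.run(refused)
except socket.error:
    pass
print(shell.prompt)  # → [RHEVM shell (disconnected)]#
```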

Comment 2 Haim 2012-11-28 08:30:44 UTC
(In reply to comment #1)
> there is nothing "high" about this bug, it's not blocking/breaking any
> tests/functionality, it's only "status" in the prompt (btw it works
> perfectly well with the backend api, which is stateless)

Michael - it looks bad and it's very bad UX. High is not only for functional bugs; certainly, low is not the case here.
If you really insist, we can ask PM for their opinion here.

Comment 3 Michael Pasternak 2012-12-13 16:54:59 UTC
http://gerrit.ovirt.org/10038

Comment 4 Ilia Meerovich 2013-02-05 09:34:31 UTC
[RHEVM shell (connected)]# update cluster Default --data_center-id 36e3e8ae-48f9-11e2-85c4-001a4a169715 --scheduling_policy-policy 'power_saving' --scheduling_policy-thresholds-high 60 --scheduling_policy-thresholds-duration 240 --scheduling_policy-thresholds-low -1 --cpu-id 'AMD Opteron G3'

error: 
status: 503
reason: Service Temporarily Unavailable
detail: 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Temporarily Unavailable</title>
</head><body>
<h1>Service Temporarily Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
<hr>
<address>Apache/2.2.22 (Red Hat Enterprise Web Server) Server at leonid_rhevm.qa.lab.tlv.redhat.com Port 443</address>
</body></html>


[RHEVM shell (connected)]# 

[imeerovi@imeerovi ART]$ rpm -q rhevm-cli
rhevm-cli-3.2.0.3-1.el6ev.noarch

Comment 5 Michael Pasternak 2013-02-05 09:50:19 UTC
(In reply to comment #4)
> [RHEVM shell (connected)]# update cluster Default --data_center-id
> 36e3e8ae-48f9-11e2-85c4-001a4a169715 --scheduling_policy-policy
> 'power_saving' --scheduling_policy-thresholds-high 60
> --scheduling_policy-thresholds-duration 240
> --scheduling_policy-thresholds-low -1 --cpu-id 'AMD Opteron G3'
> 
> error: 
> status: 503
> reason: Service Temporarily Unavailable
> detail: 
> <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>503 Service Temporarily Unavailable</title>
> </head><body>
> <h1>Service Temporarily Unavailable</h1>
> <p>The server is temporarily unable to service your
> request due to maintenance downtime or capacity
> problems. Please try again later.</p>
> <hr>
> <address>Apache/2.2.22 (Red Hat Enterprise Web Server) Server at
> leonid_rhevm.qa.lab.tlv.redhat.com Port 443</address>
> </body></html>
> 
> 
> [RHEVM shell (connected)]# 
> 
> [imeerovi@imeerovi ART]$ rpm -q rhevm-cli
> rhevm-cli-3.2.0.3-1.el6ev.noarch

this is *not correct*: [503::Service Temporarily Unavailable] is not a disconnected state, it only means that the server is busy and can't answer right now.

to verify this bug you should make the rest-api unavailable, either by switching off the JBoss container, by un-deploying the rest-api application, or by making the API unreachable in some other way.
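
The distinction in the comment above can be illustrated with a small sketch (illustrative only; the classify function is invented for this example): any HTTP status, even 503, proves a server accepted the TCP connection and answered, so the shell is still "connected"; only socket-level errors mean nothing is reachable.

```python
# Illustrative sketch (the function name is hypothetical): map a request
# outcome to the prompt state a shell should show. An HTTP 503 comes from a
# server that answered (here Apache fronting a down engine), so it is NOT a
# disconnected state; ECONNREFUSED/ECONNRESET happen below HTTP entirely.
import errno
import socket

def classify(outcome):
    """Return the prompt state for an HTTP status code or a socket error."""
    if isinstance(outcome, socket.error):
        if outcome.errno in (errno.ECONNREFUSED, errno.ECONNRESET):
            return "disconnected"  # TCP-level failure: no one is listening
        return "connected"
    # An integer HTTP status, 503 included, means the server replied.
    return "connected"

print(classify(503))                                          # → connected
print(classify(socket.error(errno.ECONNREFUSED, "refused")))  # → disconnected
```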

Comment 6 Ilia Meerovich 2013-02-05 09:57:34 UTC
I did /etc/init.d/ovirt-engine stop, then ovirt-engine start, and you can see the result

Comment 7 Michael Pasternak 2013-02-05 10:13:56 UTC
(In reply to comment #6)
> i did  /etc/init.d/ovirt-engine stop, than i did ovirt-engine start and you
> see the result

you ran your test before the service had fully gone down, i.e. it was in a
shutdown state; this is why you saw the JBoss reply [503::Service Temporarily
Unavailable], which is a valid state from the shell's PoV.

please rerun your test, waiting a few moments after you shut down the
ovirt-engine process.

Comment 8 Ilia Meerovich 2013-02-07 12:23:05 UTC
[RHEVM shell (connected)]# list clusters 
send: 'GET /api/clusters;case_sensitive=True HTTP/1.1\r\nHost: leonid_rhevm.qa.lab.tlv.redhat.com\r\nAccept-Encoding: identity\r\nFilter: False\r\nPrefer: persistent-auth\r\ncookie: JSESSIONID=3lY7jmzWUmndHZpIQiQfM8+1\r\nContent-type: application/xml\r\nAuthorization: Basic YWRtaW5AaW50ZXJuYWw6MTIzNDU2\r\n\r\n'
reply: 'HTTP/1.1 503 Service Temporarily Unavailable\r\n'
header: Date: Thu, 07 Feb 2013 12:21:13 GMT
header: Content-Length: 447
header: Connection: close
header: Content-Type: text/html; charset=iso-8859-1

error: 
status: 503
reason: Service Temporarily Unavailable
detail: 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Temporarily Unavailable</title>
</head><body>
<h1>Service Temporarily Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
<hr>
<address>Apache/2.2.22 (Red Hat Enterprise Web Server) Server at leonid_rhevm.qa.lab.tlv.redhat.com Port 443</address>
</body></html>


[RHEVM shell (connected)]# list clusters 
send: 'GET /api/clusters;case_sensitive=True HTTP/1.1\r\nHost: leonid_rhevm.qa.lab.tlv.redhat.com\r\nAccept-Encoding: identity\r\nFilter: False\r\nPrefer: persistent-auth\r\ncookie: JSESSIONID=3lY7jmzWUmndHZpIQiQfM8+1\r\nContent-type: application/xml\r\nAuthorization: Basic YWRtaW5AaW50ZXJuYWw6MTIzNDU2\r\n\r\n'
reply: 'HTTP/1.1 503 Service Temporarily Unavailable\r\n'
header: Date: Thu, 07 Feb 2013 12:21:13 GMT
header: Content-Length: 447
header: Connection: close
header: Content-Type: text/html; charset=iso-8859-1

error: 
status: 503
reason: Service Temporarily Unavailable
detail: 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Temporarily Unavailable</title>
</head><body>
<h1>Service Temporarily Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
<hr>
<address>Apache/2.2.22 (Red Hat Enterprise Web Server) Server at leonid_rhevm.qa.lab.tlv.redhat.com Port 443</address>
</body></html>


[RHEVM shell (connected)]# list clusters 

error: [Errno 104] Connection reset by peer

[RHEVM shell (disconnected)]# list clusters 

error: [Errno 111] Connection refused

[RHEVM shell (disconnected)]# list clusters 

error: [Errno 111] Connection refused

[RHEVM shell (disconnected)]# list clusters 

error: [Errno 111] Connection refused

[RHEVM shell (disconnected)]# list clusters 
send: 'GET /api/clusters;case_sensitive=True HTTP/1.1\r\nHost: leonid_rhevm.qa.lab.tlv.redhat.com\r\nAccept-Encoding: identity\r\nFilter: False\r\nPrefer: persistent-auth\r\ncookie: JSESSIONID=3lY7jmzWUmndHZpIQiQfM8+1\r\nContent-type: application/xml\r\nAuthorization: Basic YWRtaW5AaW50ZXJuYWw6MTIzNDU2\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Date: Thu, 07 Feb 2013 12:26:24 GMT
header: Pragma: No-cache
header: Cache-Control: no-cache
header: Expires: Thu, 01 Jan 1970 02:00:00 IST
header: Set-Cookie: JSESSIONID=BCTunyHZbZ6ABs5lBpumo6cJ; Path=/api; Secure
header: Content-Type: application/xml
header: Content-Length: 1354
header: Connection: close
body:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<clusters>
    <cluster href="/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95" id="99408929-82cf-4dc7-a532-9d998063fa95">
        <name>Default</name>
        <description>The default server cluster</description>
        <link href="/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks" rel="networks"/>
        <link href="/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/permissions" rel="permissions"/>
        <link href="/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes" rel="glustervolumes"/>
        <cpu id="AMD Opteron G3"/>
        <data_center href="/api/datacenters/36e3e8ae-48f9-11e2-85c4-001a4a169715" id="36e3e8ae-48f9-11e2-85c4-001a4a169715"/>
        <memory_policy>
            <overcommit percent="100"/>
            <transparent_hugepages>
                <enabled>true</enabled>
            </transparent_hugepages>
        </memory_policy>
        <scheduling_policy>
            <policy>power_saving</policy>
            <thresholds low="1" high="60" duration="240"/>
        </scheduling_policy>
        <version major="3" minor="1"/>
        <error_handling>
            <on_error>migrate</on_error>
        </error_handling>
        <virt_service>true</virt_service>
        <gluster_service>false</gluster_service>
    </cluster>
</clusters>


id         : 99408929-82cf-4dc7-a532-9d998063fa95
name       : Default
description: The default server cluster

Comment 9 Andrew Burden 2013-05-09 02:58:02 UTC
This bug is currently attached to errata RHBA-2013:14365-22. If this change is not to be documented in the text for this errata please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.
* Consequence: What happens when the bug presents.
* Fix: What was done to fix the bug.
* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:
https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes

Thanks in advance,
Andrew

Comment 10 errata-xmlrpc 2013-06-10 20:28:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0890.html