Bug 1293597 - [rhos-4] Tenant cannot start or stop his instances by the Horizon portal
Status: CLOSED WONTFIX
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-django-horizon
4.0
Unspecified Unspecified
unspecified Severity medium
: ---
: 8.0 (Liberty)
Assigned To: Matthias Runge
Ido Ovadia
: ZStream
Depends On:
Blocks:
Reported: 2015-12-22 05:35 EST by Pablo Caruana
Modified: 2016-04-11 04:43 EDT (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-04-11 04:43:34 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Pablo Caruana 2015-12-22 05:35:59 EST
Description of problem:
Users of one particular tenant cannot start or stop instances through the Horizon portal. Only users of that tenant are affected, and the same operations succeed when using the python client.

We observed this directly over screen sharing and confirmed that starting and stopping instances of other tenants, located in the same availability zones, all works fine.

We have double-checked that the affected instances can be started and stopped through the nova API from the clients.

The Horizon portal shows no error messages when we apply changes to the affected instances. We have checked with several users of this tenant and the behaviour is the same.

Version-Release number of selected component (if applicable):


How reproducible:

Try to stop or start an instance from the dashboard as a user of this particular tenant.
Comment 10 Pablo Caruana 2016-02-23 22:51:38 EST
Not completely sure if this was shared before, but let's document it here.


The customer has been running the tests with the owner user of the original instances, and the start/stop tasks on newly created instances executed correctly.

They continued testing and finally identified the error. Their conclusion is that the problem lies in the Horizon portal:

The affected project contains at least 26 deployed instances, so Horizon shows the instance view in 2 pages (first 20 instances, second 6 instances). Starting/stopping instances shown on the first page executes correctly. Starting/stopping instances shown on the second page is not executed.

Additional verification tests:

1- Created a new instance; it appeared first in the list, so it is located on the first page of the instances view in Horizon.
2- Because of this new instance, one of the old instances moved to the second page of the instances view.
3- Launched a stop operation on the old instance that moved to the second page; the task was not executed.
4- Removed the new instance, so the old instance moved back to the first page of the instances view.
5- Launched the start/stop task on that original instance again; the task executed correctly.

Summary:
All instances located on the first page of the instances view can be started/stopped correctly through Horizon.
Start/stop tasks executed on instances located on the other pages of the instances view are not executed.
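The pattern in this summary reduces to a simple paging calculation (a minimal sketch, assuming the page size of 20 described above; the helper name is ours, not Horizon's):

```python
PAGE_SIZE = 20  # rows per page in the instances view, per the report

def page_of(index, page_size=PAGE_SIZE):
    """1-based page on which the row at `index` is rendered."""
    return index // page_size + 1

# 26 instances occupy indexes 0-25: the last six land on page 2.
assert page_of(19) == 1   # last row of page 1
assert page_of(20) == 2   # first row of page 2

# Creating one new instance at the top of the list shifts every old row
# down by one, pushing the former row 19 onto page 2 -- which is exactly
# when its start/stop actions stop working, and removing the new
# instance moves it back to page 1, where the actions work again.
assert page_of(19 + 1) == 2
```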
Comment 15 Itxaka 2016-03-04 05:02:31 EST
The issue seems to come from an incorrect URL being used as the form's POST target.

On horizon/templates/horizon/common/_data_table.html:

<form action="{{ table.get_absolute_url }}" method="POST">


Here get_absolute_url returns... the absolute URL, with no query parameters:

return self.request.get_full_path().partition('?')[0]
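The effect of that partition call is easy to demonstrate; the `marker` parameter name and the instance ID below are illustrative:

```python
def get_absolute_url(full_path):
    # Mirrors the Horizon line quoted above: everything from the '?'
    # onwards is discarded.
    return full_path.partition('?')[0]

# A hypothetical page-2 request for the instances table:
page2 = "/project/instances/?marker=4f3b2a"
assert get_absolute_url(page2) == "/project/instances/"
```

Because the marker is stripped, the POST is handled as if it came from the first page, so actions on rows that only exist on later pages are silently dropped.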


So the solution would be to apply the fix from upstream at https://review.openstack.org/#/c/77837/.
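In essence, any fix has to let the pagination marker survive the POST by keeping the query string in the form action; a rough sketch of the intended behaviour (the actual upstream patch may structure this differently):

```python
def form_action_fixed(full_path):
    # Keep the full path, query string included, so the pagination
    # marker round-trips with the form POST.
    return full_path

page2 = "/project/instances/?marker=4f3b2a"
assert form_action_fixed(page2) == page2                    # marker preserved
assert form_action_fixed(page2) != page2.partition('?')[0]  # unlike today
```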
Comment 22 Mike McCune 2016-03-28 19:09:17 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune@redhat.com with any questions.
