Bug 987494 - [RHSC] Can not remove host when host is in Maintenance
Summary: [RHSC] Can not remove host when host is in Maintenance
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Timothy Asir
QA Contact: Matt Mahoney
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-23 13:51 UTC by Matt Mahoney
Modified: 2023-09-14 01:48 UTC
CC: 11 users

Fixed In Version: cb11
Doc Type: Bug Fix
Doc Text:
Previously, an error message was displayed while removing hosts in maintenance mode and the removal was not allowed. Now, with this update, the removal of hosts in maintenance mode is allowed.
Clone Of:
Environment:
Last Closed: 2014-02-25 07:33:36 UTC
Embargoed:


Attachments (Terms of Use)
Screen Shot (23.32 KB, image/png)
2013-07-23 14:05 UTC, Matt Mahoney
Log file (1.60 MB, application/x-tar)
2013-08-13 13:52 UTC, Matt Mahoney


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:0208 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #2 2014-02-25 12:20:30 UTC

Description Matt Mahoney 2013-07-23 13:51:16 UTC
Description of problem:
While attempting to remove a host that is in Maintenance, the following error occurs: 

"Cannot remove Host. Related operation is already in progress."

Note: This does not happen on every remove attempt, and subsequent attempts to remove are typically successful.

Version-Release number of selected component (if applicable):
bb6

How reproducible:


Steps to Reproduce:
1. Add a host and wait for it to reach the Up status.
2. Move the host to Maintenance.
3. Remove the host.

Actual results:
Unable to remove a host that is in Maintenance; the error "Cannot remove Host. Related operation is already in progress." is displayed.

Expected results:
Once the host is in the Maintenance state, it should be possible to remove it.

Additional info:

Comment 1 Matt Mahoney 2013-07-23 14:05:48 UTC
Created attachment 777334 [details]
Screen Shot

Comment 3 Matt Mahoney 2013-08-13 13:52:23 UTC
Created attachment 786170 [details]
Log file

Bug encountered during automated overnight run.

Comment 5 Dusmant 2013-10-25 10:25:42 UTC
This cannot be fixed permanently. We can only increase the polling interval, which would greatly reduce the probability of hitting the lock, but the GUI would then not refresh for the duration of that interval. We will try increasing it from 5 seconds to 30 seconds and evaluate.

Comment 6 Sahina Bose 2013-11-13 06:09:01 UTC
I'm not sure that increasing the polling interval is the right way to fix this. It would cause other issues, such as volume status, new volumes, and new bricks not being synced to the engine immediately.

Can the automation tests be changed to introduce a delay between two consecutive operations?
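The workaround suggested here can be sketched as a retry wrapper around the remove call: if the engine reports that a related operation (such as the periodic gluster sync) is in progress, wait and try again. This is a minimal illustration only; `remove_host` is a hypothetical callable standing in for the real REST/SDK remove operation, not an actual oVirt API.

```python
import time

def remove_host_with_retry(remove_host, attempts=5, delay=30.0):
    """Retry a host-remove call that can fail transiently while the
    engine's periodic gluster sync holds a lock on the host.

    `remove_host` is a hypothetical callable; it is assumed to raise
    RuntimeError on the "Related operation is already in progress" error.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return remove_host()
        except RuntimeError as err:
            last_error = err
            time.sleep(delay)  # wait out the sync/polling window
    raise last_error
```

With the default 30-second delay this matches the polling interval discussed in comment 5, so at least one attempt should land outside the sync window.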

Comment 7 Timothy Asir 2013-11-29 12:27:53 UTC
Sent patch to upstream: http://gerrit.ovirt.org/#/c/21865

Comment 8 Timothy Asir 2013-11-29 12:42:12 UTC
Added a proper locking mechanism for backend (gluster command) execution to handle this situation.

Comment 9 Matt Mahoney 2013-12-18 21:50:08 UTC
Verified in cb12.

Comment 10 Shalaka 2014-01-06 11:40:30 UTC
Please review the edited DocText.

Comment 12 errata-xmlrpc 2014-02-25 07:33:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

Comment 13 Red Hat Bugzilla 2023-09-14 01:48:12 UTC
The needinfo request(s) on this closed bug have been removed, as they had been unresolved for 1000 days.
