Bug 987494 - [RHSC] Can not remove host when host is in Maintenance [NEEDINFO]
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Release: RHGS 2.1.2
Assigned To: Timothy Asir
QA Contact: Matt Mahoney
Keywords: ZStream
Depends On:
Reported: 2013-07-23 09:51 EDT by Matt Mahoney
Modified: 2015-05-13 12:28 EDT
CC List: 11 users

See Also:
Fixed In Version: cb11
Doc Type: Bug Fix
Doc Text:
Previously, attempting to remove a host in maintenance mode failed with an error message and the removal was not allowed. With this update, hosts in maintenance mode can be removed.
Story Points: ---
Clone Of:
Last Closed: 2014-02-25 02:33:36 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
sharne: needinfo? (tjeyasin)

Attachments
Screen Shot (23.32 KB, image/png)
2013-07-23 10:05 EDT, Matt Mahoney
no flags
Log file (1.60 MB, application/x-tar)
2013-08-13 09:52 EDT, Matt Mahoney
no flags

Description Matt Mahoney 2013-07-23 09:51:16 EDT
Description of problem:
While attempting to remove a host that is in Maintenance, the following error occurs: 

"Cannot remove Host. Related operation is already in progress."

Note: This does not happen on every remove attempt, and subsequent attempts to remove are typically successful.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Add host and move it to Up status.
2. Move host to Maintenance
3. Remove host

Actual results:
Unable to remove host that is in maintenance.

Expected results:
Once the host is in the Maintenance state, it should be possible to remove it.

Additional info:
Comment 1 Matt Mahoney 2013-07-23 10:05:48 EDT
Created attachment 777334 [details]
Screen Shot
Comment 3 Matt Mahoney 2013-08-13 09:52:23 EDT
Created attachment 786170 [details]
Log file

Bug encountered during automated overnight run.
Comment 5 Dusmant 2013-10-25 06:25:42 EDT
This cannot be permanently fixed. We can only increase the polling interval, which greatly reduces the probability of hitting the lock, but at the cost of the GUI not refreshing during that interval. We will try increasing it from 5 seconds to 30 seconds and see.
Comment 6 Sahina Bose 2013-11-13 01:09:01 EST
I'm not sure increasing the polling interval is the right way to fix this. It would cause other issues, such as volume status, new volumes, and new bricks not being synced to the engine immediately.

Can the automation tests be changed to introduce a delay between the two operations?
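The workaround suggested above can be sketched as a small retry helper for the automation tests. All names here are hypothetical and not from the RHSC test suite; this only illustrates retrying after the intermittent lock error described in the bug.

```python
import time

def retry_on_lock(operation, attempts=3, delay=2.0):
    """Retry an operation that fails while a related engine
    operation is still in progress; re-raise anything else.
    (Hypothetical helper, not RHSC code.)"""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except RuntimeError as err:
            if "already in progress" not in str(err):
                raise
            last_error = err
            time.sleep(delay)
    raise last_error

# Simulate the reported behavior: the first removal attempt hits the
# lock, and the subsequent attempt succeeds.
calls = {"count": 0}
def remove_host():
    calls["count"] += 1
    if calls["count"] == 1:
        raise RuntimeError(
            "Cannot remove Host. Related operation is already in progress.")
    return "removed"

print(retry_on_lock(remove_host, delay=0))  # removed
```

This only masks the race in the tests; the underlying engine-side locking still needs the fix discussed below in the thread.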
Comment 7 Timothy Asir 2013-11-29 07:27:53 EST
Sent patch to upstream: http://gerrit.ovirt.org/#/c/21865
Comment 8 Timothy Asir 2013-11-29 07:42:12 EST
Added proper locking mechanism for backend (gluster command) execution to handle this situation.
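A minimal sketch of the serialization this comment describes: a single lock around backend (gluster) command execution, so a periodic sync poll and a host removal wait for each other instead of one failing. All names are illustrative assumptions; the actual fix is the gerrit change linked in comment 7.

```python
import threading

class GlusterCommandRunner:
    """Hypothetical runner serializing gluster command execution."""
    def __init__(self):
        self._lock = threading.Lock()
        self.completed = []

    def run(self, command):
        # Block until any in-flight gluster command finishes, rather
        # than failing with "Related operation is already in progress".
        with self._lock:
            self.completed.append(command)
            return "ok: " + command

runner = GlusterCommandRunner()
# A sync poll and a host removal issued concurrently both complete.
threads = [threading.Thread(target=runner.run, args=(cmd,))
           for cmd in ("gluster peer status", "gluster peer detach host1")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(runner.completed))
```

The design trade-off matches the discussion above: blocking on a lock avoids the spurious error without lengthening the polling interval.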
Comment 9 Matt Mahoney 2013-12-18 16:50:08 EST
Verified in cb12.
Comment 10 Shalaka 2014-01-06 06:40:30 EST
Please review the edited DocText.
Comment 12 errata-xmlrpc 2014-02-25 02:33:36 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

