Bug 1043047 - [RFE] Storage domain gets activated even if one of the host can't access it
Summary: [RFE] Storage domain gets activated even if one of the host can't access it
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.2.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nobody
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-12-13 20:25 UTC by Pratik Pravin Bandarkar
Modified: 2019-11-14 06:23 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-04 08:31:49 UTC
oVirt Team: Storage
Target Upstream Version:
sherold: Triaged+


Attachments: none


Links
System: Red Hat Bugzilla    ID: 1422508    Private: 0    Priority: urgent    Status: CLOSED
Summary: acquire lease Operations which depends on acquire lease, fails with error 'Sanlock resource not acquired', since the acq...
Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1422508

Description Pratik Pravin Bandarkar 2013-12-13 20:25:00 UTC
Description of problem:
If one of the hosts has no connection to the storage and we try adding a storage domain, the storage domain still gets added successfully. The host without a connection to the storage then moves into the Non-Operational state.


RHEV should check whether all hosts in the data center have access to the storage domain before adding it. If any host does not have access, RHEV should raise an error while adding the storage itself, for example: "host <hostname> does not have access to the storage".

Or 

the storage domain can be added, but activation of the new storage domain should fail if one of the hosts fails to access it. The hypervisor should not go Non-Operational while trying to activate a new storage domain.
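
A pre-flight check along the lines of the first proposal could be scripted outside the engine. This is only a sketch, assuming an NFS storage domain and password-less SSH from the machine running the script to each hypervisor; the host names, storage server, and export path below are placeholders, not details from this bug:

#!/usr/bin/env python3
# Sketch: verify that every hypervisor in the data center can see an
# NFS export before the storage domain is added. All names below are
# placeholders.
import subprocess

HOSTS = ["hypervisor1.example.com", "hypervisor2.example.com"]  # placeholders
NFS_SERVER = "storage.example.com"                              # placeholder
NFS_EXPORT = "/exports/data"                                    # placeholder

def host_sees_export(host):
    # Ask the hypervisor itself to list the server's exports; if the
    # storage is unreachable or unmapped, showmount fails or the
    # export is missing from its output.
    try:
        result = subprocess.run(
            ["ssh", host, "showmount", "-e", NFS_SERVER],
            capture_output=True, text=True, timeout=30,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0 and NFS_EXPORT in result.stdout

blocked = [h for h in HOSTS if not host_sees_export(h)]
if blocked:
    for host in blocked:
        # Mirrors the error message proposed above.
        print("host %s does not have access to the storage" % host)
else:
    print("all hosts can access the storage; safe to add the domain")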



Version-Release number of selected component (if applicable):
RHEV 3.2

How reproducible:
100%

Steps to Reproduce:
1. Add two hosts.
2. Unmap one of the hosts from the storage (one way to simulate this is sketched after this list).
3. Try creating a new storage domain.
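
One way to "unmap" a host for step 2 without touching the actual SAN/NFS mapping is to block traffic to the storage server on a single hypervisor. A sketch, to be run as root on that hypervisor; the address is a placeholder:

#!/usr/bin/env python3
# Sketch: simulate losing access to the storage on one host by dropping
# outbound traffic to the storage server. Run as root; the address is a
# placeholder.
import subprocess

STORAGE_IP = "192.0.2.10"  # placeholder (TEST-NET address)

subprocess.run(
    ["iptables", "-A", "OUTPUT", "-d", STORAGE_IP, "-j", "DROP"],
    check=True,
)
# Undo afterwards with:
#   iptables -D OUTPUT -d 192.0.2.10 -j DROP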


Actual results:
If one of the hosts has no connection to the storage and we try adding a storage domain, the storage domain still gets added successfully. The host without a connection to the storage then moves into the Non-Operational state.

Expected results:
I think that in 3.2 the storage domain can be added, but activation of the new storage domain should fail if one of the hosts fails to access it. The hypervisor should not go Non-Operational while trying to activate a new storage domain.

Or

RHEV should check whether all hosts in the data center have access to the storage domain before adding it. If any host does not have access, RHEV should raise an error while adding the storage itself, for example: "host <hostname> does not have access to the storage".
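
Either way, the observed behavior can be verified from the API after activating the new domain. A minimal sketch using the oVirt/RHEV Python SDK v4 (ovirtsdk4, applicable to the 4.x versions discussed later in this bug) to list hosts that went Non-Operational; the engine URL, credentials, and CA file are placeholders:

#!/usr/bin/env python3
# Sketch: after activating a new storage domain, report any hosts that
# went Non-Operational. Engine URL, credentials, and CA file are
# placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder
    username="admin@internal",
    password="password",                                # placeholder
    ca_file="ca.pem",                                   # placeholder
)
try:
    hosts_service = connection.system_service().hosts_service()
    for host in hosts_service.list():
        if host.status == types.HostStatus.NON_OPERATIONAL:
            print("host %s is Non-Operational" % host.name)
finally:
    connection.close()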


Additional info:

Comment 4 Liron Aravot 2014-03-02 13:50:10 UTC
Allon, how do you want to handle this? We could possibly make it a 3.5 RFE; it is basically the same issue as "prepare for maintenance" (here it would be "prepare for activation" or something like that), if we do want to proceed with it.

Comment 5 Allon Mureinik 2014-03-11 10:03:39 UTC
(In reply to Liron Aravot from comment #4)
> Allon, how do you want to handle this? We could possibly make it a 3.5
> RFE; it is basically the same issue as "prepare for maintenance" (here it
> would be "prepare for activation" or something like that), if we do want
> to proceed with it.

We need to see how this fits into our 3.5 capacity, but it is definitely an RFE that should be tackled.

Comment 10 Klaas Demter 2017-11-15 08:59:31 UTC
Could this bring down many hypervisors, or even whole clusters, if they can't access the storage domain? I only had the problem with one hypervisor, like the original reporter.

Comment 12 Klaas Demter 2018-05-04 06:46:16 UTC
This is fixed in 4.1 and can be closed.

Comment 13 Franta Kust 2019-05-16 13:03:28 UTC
BZ<2>Jira Resync

