Bug 1105513 - Issue / strange behavior with GlusterFS nodes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-webadmin
Version: 3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.5.0
Assignee: Sahina Bose
QA Contact: Pavel Stehlik
URL:
Whiteboard: gluster
Depends On: 1116585
Blocks:
 
Reported: 2014-06-06 09:50 UTC by Rene Koch
Modified: 2019-05-20 11:11 UTC
CC: 10 users

Fixed In Version: ovirt-engine-3.5.0_beta
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-17 12:37:08 UTC
oVirt Team: Gluster


Attachments


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 28434 master MERGED engine: Do not access storage domain for gluster only hosts Never
oVirt gerrit 31448 master MERGED engine: Report domain monitoring for virt nodes only Never
oVirt gerrit 31703 ovirt-engine-3.5 MERGED engine: Report domain monitoring for virt nodes only Never

Description Rene Koch 2014-06-06 09:50:31 UTC
Description of problem:
One of the glusterfs hosts isn't able to mount one of the storage domains attached to the oVirt data center, and therefore the status of this host changes every 5 minutes. The real issue here is that the glusterfs hosts shouldn't mount the storage domains at all, as they are in a cluster with virtualization disabled.

These are the messages which appear in oVirt webadmin:
2014-Jun-04, 13:30 State was set to Up for host gluster02-rz08.
2014-Jun-04, 13:26 Detected change in status of brick x.x.x.x:/export/brick01/vol01 of volume vol01 from DOWN to UP.
2014-Jun-04, 13:25 Host xxx cannot access one of the Storage Domains attached to the Data Center Default. Setting Host state to Non-Operational.


Version-Release number of selected component (if applicable):
oVirt 3.4.1 with CentOS 6.5 (latest updates)


How reproducible:
Always. If you create a datacenter with 2 clusters (1 virtualization only and 1 gluster only), all hosts in the gluster-only cluster will try to mount the storage domains.


Steps to Reproduce:
1. Create a new datacenter (or use default one)
2. Create 2 clusters (1 virtualization only, 1 gluster only)
3. Add hosts to gluster cluster


Actual results:
All storage hosts in the gluster cluster try to mount the storage domains of the datacenter. This works during installation, but in our setup it failed after rebooting 1 host. The issue for me is not the mount failure itself (the volume is fine) but that the storage nodes mount the storage domains in the first place.


Expected results:
The cluster properties should be checked to determine whether its hosts are responsible for virtualization; if they are not, the storage domains of the datacenter shouldn't be mounted on them.
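The expected check can be sketched as follows. This is a minimal illustrative model, not the actual oVirt engine code: the class and field names (`Cluster`, `virtService`, `glusterService`, `shouldMonitorStorageDomains`) are assumptions standing in for the engine's cluster service flags.

```java
// Hypothetical sketch: storage-domain monitoring should be gated on the
// cluster's virt service, so gluster-only hosts never mount the domains.
public class StorageDomainMonitoring {

    // Simplified stand-in for a cluster's service configuration.
    static class Cluster {
        final boolean virtService;    // cluster runs VMs
        final boolean glusterService; // cluster provides gluster storage

        Cluster(boolean virtService, boolean glusterService) {
            this.virtService = virtService;
            this.glusterService = glusterService;
        }
    }

    // Only hosts in a virt-enabled cluster need to access storage domains;
    // a gluster-only cluster (virtService == false) is skipped entirely.
    static boolean shouldMonitorStorageDomains(Cluster cluster) {
        return cluster.virtService;
    }
}
```

With this gate in place, a host in a gluster-only cluster would never be moved to Non-Operational for failing to access a storage domain, which is the behavior the gerrit patches above implement.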


Additional info:

Comment 1 Sahina Bose 2014-08-13 11:07:35 UTC
Moving it to Assigned, as I still see the issue:
2014-Aug-13, 04:30
Host 10.70.x.x cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center DC34. Setting Host state to Non-Operational.

Comment 2 Sandro Bonazzola 2014-10-17 12:37:08 UTC
oVirt 3.5 has been released and should include the fix for this issue.

