Bug 1327121
| Field | Value |
|---|---|
| Summary | VDSM reports storage domain as 'either partially accessible or entirely inaccessible' |
| Product | [oVirt] vdsm |
| Component | Core |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | unspecified |
| Version | 4.17.23.1 |
| Target Milestone | --- |
| Target Release | --- |
| Hardware | x86_64 |
| OS | Linux |
| Reporter | RamaKasturi <knarra> |
| Assignee | Dan Kenigsberg <danken> |
| QA Contact | RamaKasturi <knarra> |
| CC | bugs, knarra, nlevinki, rhs-bugs, rhsc-qe-bugs, sabose, sasundar, sbonazzo, shtripat, stirabos |
| Flags | rule-engine: planning_ack? rule-engine: devel_ack? rule-engine: testing_ack? |
| Doc Type | Bug Fix |
| Story Points | --- |
| Clone Of | 1327102 |
| | 1361547 (view as bug list) |
| Environment | RHEV RHGS HCI, RHEL 7.2 |
| Last Closed | 2016-08-04 07:04:14 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| Category | --- |
| oVirt Team | Gluster |
| Cloudforms Team | --- |
| Bug Depends On | 1327102, 1327516 |
| Bug Blocks | 1258386, 1361547 |
Description (RamaKasturi, 2016-04-14 10:04:09 UTC)
Update from Simone:

On 04/12/2016 07:38 PM, Simone Tiraboschi wrote:
> Hi,
> in my opinion the issue is here:
> we call getStorageDomainInfo on the hosted-engine storage domain
> ('1c1ce771-e9e9-4a78-ae28-2006442e6cd6') but for some reason it fails
> within VDSM ("Domain is either partially accessible or entirely
> inaccessible"), hence the error accessing it.
> Now the issue is understanding why VDSM reports it as 'either
> partially accessible or entirely inaccessible'.
Is there any impact due to this error? Is `hosted-engine --vm-status` giving an error?

There is no impact due to this error, but it will give a false impression to the user. `hosted-engine --vm-status` does not give any error; it works fine.

Moving to gluster, since this seems like an HCI-specific issue. If you can reproduce this on non-HCI, please open a different bug with steps to reproduce.

I think this simply happens because, in order to avoid the SPOF issue, we try to mount the hosted-engine gluster volume from localhost:/volume. The issue is that localhost obviously resolves differently on different hosts, resulting in 'either partially accessible or entirely inaccessible' if just one of the VDSM hosts is unable to talk to the gluster daemon running locally.

So using localhost for gluster, instead of resolving the single-point-of-failure issue at the gluster entry point, creates an "every point of failure", where a single host unable to locally access gluster flags the storage domain as 'either partially accessible or entirely inaccessible'.

(In reply to Simone Tiraboschi from comment #5)
> I think that this simply happens because, in order to avoid the SPOF issue,
> we try to mount the hosted-engine gluster volume from localhost:/volume
> The issue is that obviously localhost differently resolves on different
> hosts resulting in 'either partially accessible or entirely inaccessible' if
> just one of the VDSM hosts is not able to talk with the gluster daemon
> locally running.
>
> So using localhost fro gluster, instead of resolving the single point of
> failure issue on the gluster entry point, create an every point of failure
> where a single host unable to locally access gluster flags the storage
> domain as 'either partially accessible or entirely inaccessible'.
Simone, this error was seen when the HE storage domain was mounted using one of the servers: not localhost:/engine but server1:/engine.

With 3.6.7 and the backup-volfile-server support for the HE storage domain, we have not been able to reproduce this. Kasturi, can you check if you see this in your setup?

(In reply to Sahina Bose from comment #7)
> With 3.6.7 and the backup-volfile-server support for HE storage domain, have
> not been able to reproduce this. Kasturi, can you check if you see this in
> your setup?

I'd like to CLOSE-WONTFIX if it is not reproducible. Please promptly reproduce or close. 3.6 has gone EOL; please re-target this bug to a 4.0 release.

I do not see this issue happening with 3.6.7 / 3.6.8. Will reopen in case this issue is seen again.

Based on Comment 10.
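Editor's note: the backup-volfile-server mechanism discussed in comment 7 lets the client fall back to other servers for fetching the volfile, avoiding both the single entry-point SPOF and the localhost approach. A minimal sketch of such a mount; server1:/engine comes from this bug, while server2, server3, and the mount point are hypothetical names for illustration:

```shell
# Sketch: mount the hosted-engine gluster volume so that volfile fetch
# does not depend on a single server. backup-volfile-servers is a
# standard GlusterFS native-client mount option; server2/server3 and
# the mount point below are illustrative, not from this setup.
mount -t glusterfs \
    -o backup-volfile-servers=server2:server3 \
    server1:/engine /rhev/data-center/mnt/glusterSD/server1:_engine
```

With this option, if server1 is unreachable at mount time the client tries server2 and then server3 for the volfile, so no single host's glusterd failure makes the domain look inaccessible.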