Bug 882671
Summary: PRD32 - if a host doesn't see any storage domains, it should move to non-operational - even if it is the last host up

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Barak <bazulay> |
| Component: | ovirt-engine | Assignee: | mkublin <mkublin> |
| Status: | CLOSED NOTABUG | QA Contact: | vvyazmin <vvyazmin> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.1.0 | CC: | acathrow, bazulay, dyasny, hateya, iheim, jturner, lpeer, Rhev-m-bugs, sgordon, sgrinber, yeylon, ykaul |
| Target Milestone: | --- | | |
| Target Release: | 3.3.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | infra | | |
| Fixed In Version: | sf3 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-05-27 11:35:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Infra | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 912054 | | |
| Bug Blocks: | 915537 | | |
| Attachments: | | | |
Description

Barak, 2012-12-02 15:24:22 UTC

This was actually fixed upstream by commit http://gerrit.ovirt.org/#/c/9566/. The above commit also solves Bug 844350; hence moving to POST. I am not closing this as a duplicate because I would like both issues to be tested.

I do not exactly understand the requirements. What I did:

1. If a host cannot connect to the pool, it will not be moved to status UP, even if it is the only host in the pool.
2. If we performed a failover and deactivated the last storage domain, the pool should be moved to Maintenance. While the pool is in Maintenance status, no failover process should run.

This has been the intention in the code for at least the last two years (the code is written this way, as can be seen in DeactivateStorageDomainCommand). If it is not working this way, it is a bug.

Setting docs_scoped- here; as per comment #6, it seems that what is actually happening here is that we are fixing a regression introduced when we added the autorecovery paths. I don't think there is any documentation impact here.

Failed on RHEVM 3.2 - SF05 environment (FC and iSCSI):

RHEVM: rhevm-3.2.0-6.el6ev.noarch
VDSM: vdsm-4.10.2-5.0.el6ev.x86_64
LIBVIRT: libvirt-0.10.2-17.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

Steps to Reproduce:
1. Create an FC or iSCSI DC environment with two hosts.
2. Create multiple SDs.
3. From the first host, block the connection to all SDs.
4. From the second host, block the connection to all SDs except one.
5. From the second host, block the connection to the last SD.

The second host stays in status UP.

Created attachment 693959 [details]
## Logs vdsm, rhevm
Created attachment 693960 [details]
## Logs vdsm, rhevm (iSCSI)
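The "block connection" steps in the reproduction above can be simulated by dropping the host's outbound traffic to the storage targets with iptables. A minimal sketch, not taken from the bug itself: the storage domain IPs are hypothetical placeholders for your actual iSCSI targets, and with `DRYRUN=1` (the default here) the commands are only printed, since actually applying the rules requires root on the host.

```shell
# Hypothetical IPs standing in for the storage domain targets;
# substitute the real SD addresses from your setup.
SD_IPS="10.35.1.10 10.35.1.11"

# With DRYRUN=1 we only print each iptables command instead of
# running it, so the sketch can be inspected without root.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

for ip in $SD_IPS; do
    # Drop all outbound traffic from this host to one storage target.
    run iptables -A OUTPUT -d "$ip" -j DROP
done
```

To restore connectivity afterwards, repeat the loop with `-D` in place of `-A` to delete the matching rules.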
As I wrote, I did not understand the requirements. What I did:

1. If a host cannot connect to the pool, it will not be moved to status UP, even if it is the only host in the pool.
2. If we performed a failover and deactivated the last storage domain, the pool should be moved to Maintenance. While the pool is in Maintenance status, no failover process should run.

This has been the intention in the code for at least the last two years (the code is written this way, as can be seen in DeactivateStorageDomainCommand). If it is not working this way, it is a bug. I did not solve the case described here, and I had no intention to do so. In fact, only when all domains are Inactive can I move the host to Non-Operational.

Barak, isn't this bug irrelevant in light of the changes Liron made in the lifecycle?

I agree with comment #13; this bug is no longer relevant, as we chose a different path to solve the various issues with the host life-cycle. Hence moving this bug to CLOSED NOTABUG.