Bug 1496222 - [RFE] Host will not logout from all additional targets when aborting an add domain operation
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Idan Shaby
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-26 17:03 UTC by Raz Tamir
Modified: 2022-03-10 17:10 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-06-11 09:02:42 UTC
oVirt Team: Storage
Embargoed:
sbonazzo: ovirt-4.4-


Attachments
engine log (347.17 KB, application/x-gzip)
2017-09-26 17:03 UTC, Raz Tamir


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-45075 0 None None None 2022-03-10 17:10:39 UTC

Description Raz Tamir 2017-09-26 17:03:40 UTC
Created attachment 1331213 [details]
engine log

Description of problem:

If I log into new targets while adding a new domain but do not finish the process, those targets are not logged out when the host is put into maintenance.
If I do complete adding the new storage domain, all targets that the host logged into are cleared when the host moves to maintenance.


Version-Release number of selected component (if applicable):
ovirt-engine-4.2.0-0.0.master.20170924221426.git196b802.el7.centos

How reproducible:
100%

Steps to Reproduce:
1. Open the new domain dialog and log into a few targets
2. Cancel the operation
3. Verify using '# iscsiadm -m session' that the new targets were added
4. Put the host into maintenance and verify that the new targets from step 3 are still there
5. With a new host, repeat step 1 and finish adding the new storage domain
6. Put the host into maintenance and verify that the targets were cleared
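The session check in steps 3, 4, and 6 can be scripted. The sketch below parses sample `iscsiadm -m session` output to list the targets a host is currently logged into; the IQNs and portals are made up for illustration, not taken from this bug's environment.

```shell
#!/bin/sh
# Sample output in the format printed by `iscsiadm -m session`
# (transport, sid, portal, target IQN); the values below are invented.
sessions='tcp: [1] 10.35.1.10:3260,1 iqn.2017-09.com.example:target-old (non-flash)
tcp: [2] 10.35.1.11:3260,1 iqn.2017-09.com.example:target-new1 (non-flash)
tcp: [3] 10.35.1.11:3260,1 iqn.2017-09.com.example:target-new2 (non-flash)'

# On a real host you would use:  sessions=$(iscsiadm -m session)
# Field 4 of each line is the target IQN.
printf '%s\n' "$sessions" | awk '{print $4}'
```

Comparing this list before and after moving the host to maintenance shows whether the leftover targets from a canceled add-domain operation were logged out.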

Actual results:
Explained above

Expected results:
Targets logged into during the canceled add-domain operation should be logged out when the host is put into maintenance, the same as targets from a completed add-domain operation.

Additional info:

Comment 1 Red Hat Bugzilla Rules Engine 2017-09-27 10:24:49 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 2 Allon Mureinik 2017-09-27 10:27:28 UTC
I don't think this is a regression.
It's obviously not the best behavior, to say the least, but I think it's always been like that.

On what version did you see this working as you expect?

Comment 3 Raz Tamir 2017-09-27 11:00:37 UTC
(In reply to Allon Mureinik from comment #2)
> I don't think this is a regression.
> It's obviously not the best behavior, to say the least, but I think it's
> always been like that.
> 
> On what version did you see this working as you expect?
rhevm-4.1.6.2-0.1.el7

Comment 4 Allon Mureinik 2017-09-28 11:42:57 UTC
(In reply to Raz Tamir from comment #3)
> (In reply to Allon Mureinik from comment #2)
> > I don't think this is a regression.
> > It's obviously not the best behavior, to say the least, but I think it's
> > always been like that.
> > 
> > On what version did you see this working as you expect?
> rhevm-4.1.6.2-0.1.el7
Logs of such a run, please?

Comment 5 Raz Tamir 2017-09-28 13:54:58 UTC
It seems my environment was in a bad state when I opened this bug - this is not a regression.

In 4.1 this works the same way.
Also, bug #1496206, which is in the same area, is not a regression.

Comment 6 Idan Shaby 2018-05-30 11:47:46 UTC
The only real solution for this bug is providing a proper way to manage storage server connections, i.e., the relationships between hosts, connections and LUNs.
This is actually an RFE, not a bug. It has a workaround (manually logging out from the unwanted targets) and thus should, IMHO, be closed/deferred. Tal?
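The workaround mentioned above (logging out from the unwanted targets by hand) would look roughly like this. The target IQN and portal are placeholders, not values from this bug; substitute those reported by `iscsiadm -m session` on the affected host. `DRY_RUN=echo` makes the sketch print the commands instead of executing them, so it can be inspected on a machine without iscsiadm installed.

```shell
#!/bin/sh
# Manually log out from a leftover target and delete its node record.
# TARGET and PORTAL are placeholders for the values on the real host.
TARGET='iqn.2017-09.com.example:leftover-target'
PORTAL='10.35.1.11:3260'

# Set DRY_RUN to empty to actually run iscsiadm; defaults to echo.
DRY_RUN=${DRY_RUN:-echo}

$DRY_RUN iscsiadm -m node -T "$TARGET" -p "$PORTAL" --logout
$DRY_RUN iscsiadm -m node -T "$TARGET" -p "$PORTAL" -o delete
```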

Comment 7 Tal Nisan 2018-06-11 09:02:42 UTC
The solution here seems too complicated and the impact is quite low; I don't think it's worth the effort.

