Bug 1332881 - Storage isn't properly deleted
Summary: Storage isn't properly deleted
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.0
Hardware: All
OS: All
Priority: unspecified
Severity: low
Target Milestone: ovirt-4.0.0-rc
Assignee: Daniel Erez
QA Contact: Ilanit Stein
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-04 09:25 UTC by Juan Hernández
Modified: 2019-10-10 12:06 UTC
CC: 20 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1328087
Environment:
Last Closed: 2016-05-25 11:55:50 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:



Comment 1 Allon Mureinik 2016-05-05 07:38:38 UTC
Juan, what's the action item (AI) here?

Comment 2 Juan Hernández 2016-05-05 08:10:36 UTC
According to Felix and Sergio, they deleted an NFS storage domain using the GUI. Then they tried to add it again (same server, same path, etc.), and the system refused, saying it was already configured. That is what induced them to delete it from the database directly. So the first action item is to check whether this can really happen; I wasn't able to reproduce it in my environment.
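Before touching the database directly, one could at least confirm whether the engine still records a storage connection for the removed domain. A minimal read-only diagnostic sketch, assuming the default PostgreSQL database name `engine` and the `storage_server_connections` table from the oVirt engine schema (verify both against your engine version; this is an illustration, not a supported procedure):

```shell
# Read-only diagnostic (assumptions: engine DB is the default "engine"
# PostgreSQL database; table/column names are from the oVirt engine schema).
sudo -u postgres psql engine -c \
  "SELECT id, connection, storage_type FROM storage_server_connections;"
```

If a row for the deleted NFS export is still present, that leftover connection would explain the "already configured" error when re-adding the domain.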

Comment 3 Allon Mureinik 2016-05-05 10:18:45 UTC
Daniel - please take a look at this. I can't think of a "real" way to reproduce this, at least not in new engine versions, but please keep me honest.

Comment 4 Yaniv Lavi 2016-05-09 10:59:32 UTC
oVirt 4.0 Alpha has been released, moving to oVirt 4.0 Beta target.

Comment 8 Daniel Erez 2016-05-25 11:55:50 UTC
(In reply to Allon Mureinik from comment #3)
> Daniel - please take a look at this. I can't think of a "real" way to
> reproduce this, at least not in new engine versions, but please keep me
> honest.

It could be a failure to clean up the storage connection; we had a few similar issues in previous versions, and they should already be resolved in newer engine versions. I couldn't reproduce the issue on the latest build; please reopen if it reproduces again.
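For anyone still hitting this on an affected version, a safer workaround than editing the database by hand is to remove the stale connection through the engine REST API, which exposes a `storageconnections` collection. A sketch, where the host, credentials, and connection id are all placeholders:

```shell
# List storage connections to find the stale one (host and credentials
# below are placeholders; adjust for your environment).
curl -s -k -u admin@internal:password \
  https://engine.example.com/ovirt-engine/api/storageconnections

# Then delete the stale connection by its id:
curl -s -k -u admin@internal:password -X DELETE \
  https://engine.example.com/ovirt-engine/api/storageconnections/<connection-id>
```

Deleting through the API keeps the engine's internal state consistent, which direct database manipulation does not guarantee.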

