Bug 1539361

Summary: Reinitialize data-center will generate multiple OVF_STORE disks when deactivating single master storage domain
Product: [oVirt] ovirt-engine Reporter: Michael Burman <mburman>
Component: BLL.Storage    Assignee: Eyal Shenitzky <eshenitz>
Status: CLOSED CURRENTRELEASE QA Contact: Elad <ebenahar>
Severity: high Docs Contact:
Priority: unspecified    
Version: 4.2.1.2    CC: amureini, bugs, ebenahar, eshenitz, lveyde, mburman, tnisan
Target Milestone: ovirt-4.2.2    Flags: rule-engine: ovirt-4.2+
rule-engine: blocker+
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ovirt-engine-4.2.2 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-03-29 10:55:33 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Storage RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Log (flags: none)
new engine log (flags: none)

Description Michael Burman 2018-01-28 09:12:04 UTC
Created attachment 1387134 [details]
Log

Description of problem:
Multiple 'Registering Disk 'OVF_STORE' has been initiated.' messages are spamming the audit log in the UI during the register-domain flow.

The event log in the UI is flooded with these 'Registering Disk 'OVF_STORE'' messages on the DR flow.

This appears to be a regression.

Version-Release number of selected component (if applicable):
4.2.1.3-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Have a data domain with a few VMs and a template
2. Detach domain
3. Attach domain

Actual results:
The event log in the UI gets spammed with multiple messages

Expected results:
There should not be multiple duplicate events in the UI event log.

Comment 1 Allon Mureinik 2018-01-28 21:02:57 UTC
Eyal, can this be related to your recent work?

Comment 2 Red Hat Bugzilla Rules Engine 2018-01-28 21:03:00 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 3 Eyal Shenitzky 2018-01-29 05:29:54 UTC
No, I haven't touched anything in that area.

According to the description, this is a DR flow.

Comment 4 Eyal Shenitzky 2018-01-29 06:48:05 UTC
Michael, 

Did you notice how many OVF_STORE disks were in the storage domain prior to the deactivation?

Moreover, I failed to reproduce the bug.
I tried to deactivate a domain with VMs, templates, and thin VMs created from the templates.

Did you reproduce it with the steps written above?

Comment 5 Michael Burman 2018-01-29 07:21:01 UTC
I have a lot of OVF_STORE disks; the question is how and why, and I have no idea.

It does seem to reproduce for me, since I have plenty of such disks, which looks like a bug in its own right. But I can't understand how I got into this situation in the first place.
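
A minimal sketch (not part of the original report) for counting the OVF_STORE disks of a given storage domain, assuming the oVirt Python SDK (ovirt-engine-sdk4); the engine URL, credentials, and the 'data_A' domain name below are placeholders:

    import ovirtsdk4 as sdk

    # Placeholder connection details; adjust for the actual environment.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,  # lab setups only; use ca_file in production
    )

    sds_service = connection.system_service().storage_domains_service()
    # 'data_A' is a hypothetical storage domain name.
    sd = sds_service.list(search='name=data_A')[0]
    disks = sds_service.storage_domain_service(sd.id).disks_service().list()
    ovf_disks = [d for d in disks if d.alias == 'OVF_STORE']
    print('OVF_STORE disks on %s: %d' % (sd.name, len(ovf_disks)))

    connection.close()

A healthy storage domain is expected to carry 2 OVF_STORE disks, so any higher count indicates the situation described here.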

Comment 6 Eyal Shenitzky 2018-01-29 08:23:26 UTC
So it seems there really were a lot of OVF_STORE disks on the storage domain in that environment, which means this bug is not the root cause of them.

Please try to understand how your environment ended up with more than 2 OVF_STORE disks per storage domain and update the bug accordingly.
If you can't find the issue or manage to reproduce it, please close this bug.

Comment 7 Michael Burman 2018-01-29 08:31:33 UTC
Yes, there are too many OVF_STORE disks (which is a bug in its own right), but I have no idea what caused it. One assumption is that it was caused by sub-version templates that were removed from one of the templates and caused this mess.

It also seems that it's not possible to import a template if its sub-version template is missing, and I'm not sure whether this is the expected behaviour.
I will try to get to the bottom of that.

Comment 8 Michael Burman 2018-01-29 09:20:58 UTC
I managed to reproduce the bug, and now I have 4 OVF_STORE disks.

To reproduce:
1) Create VM
2) Create template from it
3) Create another template plus a sub-version template based on the template from step 2
4) Remove the sub-version template
5) Detach the data domain
6) From here on, every detach of the domain adds 2 new OVF_STORE disks to the domain.

Now I have reached a situation where I have more than 2 OVF_STORE disks.
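
A minimal sketch of steps 2-4 above via the oVirt Python SDK (ovirt-engine-sdk4); the VM name 'vm1' and template names are placeholders, and step 1 (creating the VM) is assumed already done:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,  # lab setups only
    )
    system = connection.system_service()
    templates_service = system.templates_service()

    # Step 1 is assumed done: a VM named 'vm1' (placeholder) already exists.
    vm = system.vms_service().list(search='name=vm1')[0]

    # Step 2: base template from the VM.
    base = templates_service.add(
        types.Template(name='tmpl_base', vm=types.Vm(id=vm.id))
    )

    # Step 3: a sub-version template of the same base template
    # (sub-versions share the base template's name).
    # In a real run, wait for the base template to reach OK status first.
    sub = templates_service.add(
        types.Template(
            name='tmpl_base',
            vm=types.Vm(id=vm.id),
            version=types.TemplateVersion(
                base_template=types.Template(id=base.id),
                version_name='v2',
            ),
        )
    )

    # Step 4: remove only the sub-version template.
    templates_service.template_service(sub.id).remove()

    connection.close()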

Comment 9 Michael Burman 2018-01-29 09:21:29 UTC
Created attachment 1387666 [details]
new engine log

Comment 10 Eyal Shenitzky 2018-01-31 09:14:01 UTC
Allon, can you please revisit this bug? It is completely different from the initially reported bug.

Comment 11 Allon Mureinik 2018-01-31 14:37:20 UTC
(In reply to Eyal Shenitzky from comment #10)
> Allon, can you please revisit this bug? It is completely different from the
> initially reported bug.

This, in fact, sounds worse than the previously reported bug.
I don't quite understand what you're asking me here.

Comment 12 Eyal Shenitzky 2018-01-31 14:47:57 UTC
Got what I needed; I just wanted to know whether it is still targeted to the same version.

Comment 13 Eyal Shenitzky 2018-02-01 10:20:05 UTC
Managed to reproduce with tighter steps:

1) Create a data-center with a host
2) Add storage domain [A] to the data-center from step 1 (initializes the data-center)
3) Deactivate storage domain A -> 2 OVF_STORE disks were added
4) Detach storage domain A
5) Attach storage domain A back to the data-center from step 1 (reinitializes the data-center)
6) Deactivate storage domain A -> 2 more OVF_STORE disks were added


This issue reproduces only when the data-center is reinitialized by a storage domain.
The template is not related to the bug.
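
For reference, a minimal sketch of the deactivate/detach/attach cycle from the steps above via the oVirt Python SDK (ovirt-engine-sdk4), with an OVF_STORE count check after each deactivation; the data-center name 'dc1' and domain name 'data_A' are placeholders:

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,  # lab setups only
    )
    system = connection.system_service()

    # 'dc1' and 'data_A' are hypothetical names.
    dc = system.data_centers_service().list(search='name=dc1')[0]
    sd = system.storage_domains_service().list(search='name=data_A')[0]

    attached_sds = system.data_centers_service().data_center_service(dc.id).storage_domains_service()
    attached_sd = attached_sds.storage_domain_service(sd.id)

    def count_ovf_store():
        disks = system.storage_domains_service().storage_domain_service(sd.id).disks_service().list()
        return sum(1 for d in disks if d.alias == 'OVF_STORE')

    # Deactivate the domain (step 3) and recount the OVF_STORE disks.
    attached_sd.deactivate()
    time.sleep(60)  # crude wait for the domain to reach Maintenance
    print('OVF_STORE disks after first deactivation:', count_ovf_store())

    # Detach the domain from the data-center (step 4).
    attached_sd.remove()

    # Attach it back and activate it, reinitializing the data-center (step 5).
    attached_sds.add(types.StorageDomain(id=sd.id))
    attached_sd.activate()

    # A second deactivation (step 6) adds 2 more OVF_STORE disks on affected versions.
    attached_sd.deactivate()
    time.sleep(60)
    print('OVF_STORE disks after second deactivation:', count_ovf_store())

    connection.close()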

Comment 14 Eyal Shenitzky 2018-02-01 13:56:40 UTC
This bug has existed for a long time; removing the regression and blocker flags.

Comment 15 Elad 2018-02-25 16:35:55 UTC
Following the steps from comment #13, no additional OVF_STORE disks are created after the second SD deactivation.


Used:
rhvm-4.2.2.1-0.1.el7.noarch
vdsm-4.20.19-1.el7ev.x86_64

Comment 16 Sandro Bonazzola 2018-03-29 10:55:33 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.