Bug 1587885 - [downstream clone - 4.2.4] [RFE] Need a way to track how many logical volumes consumed in a storage domain and alert when it gets full
Summary: [downstream clone - 4.2.4] [RFE] Need a way to track how many logical volumes...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.2.3
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.4
Target Release: 4.2.4
Assignee: Tal Nisan
QA Contact: Avihai
URL:
Whiteboard:
Depends On: 1580128
Blocks:
 
Reported: 2018-06-06 08:55 UTC by RHV bug bot
Modified: 2022-07-09 09:56 UTC
CC: 12 users

Fixed In Version: ovirt-engine-4.2.4.4
Doc Type: If docs needed, set a value
Doc Text:
The storage domain's General sub-tab in the Administration Portal now shows the number of images on the storage domain under the label "Images". On a block storage domain, this corresponds to the number of logical volumes.
Clone Of: 1580128
Environment:
Last Closed: 2018-06-27 10:02:42 UTC
oVirt Team: Virt
Target Upstream Version:
lsvaty: testing_plan_complete-


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-36848 0 None None None 2021-09-09 14:35:45 UTC
Red Hat Product Errata RHSA-2018:2071 0 None None None 2018-06-27 10:03:32 UTC
oVirt gerrit 91959 0 master MERGED webadmin: Show the number of images in the storage domain general tab 2021-01-01 04:58:47 UTC
oVirt gerrit 91983 0 ovirt-engine-4.2 MERGED webadmin: Show the number of images in the storage domain general tab 2021-01-01 04:58:50 UTC

Description RHV bug bot 2018-06-06 08:55:54 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1580128 +++
======================================================================

We need a clear indicator of how full a storage domain is, and alerts when a storage domain becomes too full (e.g. at 90% of the maximum number of LVs).

The actual limit on the number of logical volumes is currently 1947. This limit comes from the size of the leases volume (2 GiB), which currently is never extended. If we add support for extending this volume when needed, we can support more logical volumes.
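As a back-of-envelope check on that figure (a sketch: the 1 MiB per-lease slot size is an assumption for illustration, not stated above):

```python
# Rough arithmetic behind the ~1947-volume limit, assuming each volume
# lease occupies a 1 MiB slot in the 2 GiB leases volume (the per-lease
# slot size is an assumption for illustration).
LEASES_VOLUME_BYTES = 2 * 1024**3  # 2 GiB leases volume, never extended
LEASE_SLOT_BYTES = 1024**2         # assumed 1 MiB per lease slot

total_slots = LEASES_VOLUME_BYTES // LEASE_SLOT_BYTES
print(total_slots)  # 2048; after reserved slots this lands near the 1947 limit
```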

We also need a supported way to look up how many LVs are in use, either via a script that is part of the engine and maintained by engine developers, or by adding this information to the UI.

Here is more detail.

Every snapshot consumes:
- one logical volume for every disk included in the snapshot
- one logical volume for the memory snapshot, if memory was included
  (the memory volume may be created on another storage domain)
- one logical volume for configuration data (not sure it is still used)
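The breakdown above can be sketched as a quick count (an illustrative helper, not part of the engine):

```python
def lvs_per_snapshot(num_disks, memory_included, config_volume=True):
    """Rough LV count one snapshot consumes, per the breakdown above.

    Note: the memory volume may be created on another storage domain,
    so it may not count against this domain's limit.
    """
    lvs = num_disks       # one LV per disk included in the snapshot
    if memory_included:
        lvs += 1          # memory snapshot volume
    if config_volume:
        lvs += 1          # configuration-data volume
    return lvs

# A VM with 3 disks, snapshotted with memory: 3 + 1 + 1 = 5 LVs
print(lvs_per_snapshot(3, memory_included=True))  # 5
```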

The limit of 1000 volumes is just an arbitrary number we invented.

We could find the number of LVs in use by cooking up a database query, but accessing the database directly is a bad idea. The query will break when we change the schema, and we must keep the flexibility to change the schema whenever we like.
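A supported lookup would go through the public REST API rather than the database. A minimal sketch (the host, credentials, and fetch step are illustrative assumptions; only the XML counting is exercised here):

```python
# Hypothetical sketch: count disks on a storage domain via the oVirt REST
# API instead of querying the database directly.
import xml.etree.ElementTree as ET

def count_disks(disks_xml: str) -> int:
    """Count <disk> elements in a storage domain's disks listing."""
    return len(ET.fromstring(disks_xml).findall("disk"))

# Fetching the listing would look roughly like (host and credentials
# below are placeholders, not real endpoints from this bug):
#   GET https://engine.example.com/ovirt-engine/api/storagedomains/<id>/disks
#   with HTTP basic auth for an admin user, Accept: application/xml

sample = '<disks><disk id="d1"/><disk id="d2"/></disks>'
print(count_disks(sample))  # 2
```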

(Originally by Greg Scott)

Comment 13 RHV bug bot 2018-06-06 08:56:42 UTC
Created attachment 1447858 [details]
Screenshot

(Originally by Tal Nisan)

Comment 14 RHV bug bot 2018-06-06 08:56:45 UTC
Added the number of images for each data domain as seen in the attachment

(Originally by Tal Nisan)

Comment 15 Yaniv Kaul 2018-06-07 10:11:42 UTC
This was targeted to 4.2.5 (and the title says so as well), did it land in 4.2.4 eventually?

Comment 16 Tal Nisan 2018-06-07 15:08:22 UTC
It will in the next 4.2.4 build.

Comment 18 Avihai 2018-06-19 12:52:43 UTC
Verified on 4.2.4.4.

As discussed between Greg and Tal, the current implementation shows the number of active disks (including memory snapshot disks, OVF, and memory metadata) plus active snapshot disks in the storage domain.

Looks good on both block and file storage domains, and matches the DB query:
"SELECT count(*) FROM image_storage_domain_map WHERE storage_domain_id = %domain_guid%"

Comment 20 errata-xmlrpc 2018-06-27 10:02:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2071

Comment 21 Franta Kust 2019-05-16 13:06:13 UTC
BZ<2>Jira Resync

