+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1580128 +++
======================================================================

We need a clear indicator of how full a storage domain is, and alerts when a storage domain becomes too full (e.g. 90% of the maximum number of LVs). The actual limit on the number of logical volumes is currently 1947. This limit comes from the size of the leases volume (2GiB), which currently is never extended. If we add support for extending this volume when needed, we can support more logical volumes.

We also need a supported way to look up how many LVs are in use, either via a script which is part of the engine and maintained by engine developers, or by adding this info to the UI.

Here is more detail. Every snapshot consumes:
- one logical volume for every disk included in the snapshot
- one logical volume for the memory snapshot, if memory snapshot was enabled (but the memory volume may be created on another storage domain)
- one logical volume for configuration data (not sure it is still used)

The limit of 1000 volumes is just an arbitrary number we invented. We could find the number of LVs in use by cooking up a database query, but accessing the database directly is a bad idea: the query will break when we change the schema, and we must keep the flexibility to change the schema whenever we like.

(Originally by Greg Scott)
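As a rough illustration of the indicator and 90% alert requested above, here is a minimal sketch in Python. The 1947-LV limit and the 90% threshold come from the description; how the in-use count is obtained (engine query, REST API, or lvs on a host) is left open, and the helper name is hypothetical.

    MAX_LVS = 1947          # current limit, set by the 2GiB leases volume
    ALERT_THRESHOLD = 0.90  # alert when the domain is 90% full

    def domain_fullness(lvs_in_use):
        """Return (fraction_used, alert) for a block storage domain."""
        fraction = lvs_in_use / MAX_LVS
        return fraction, fraction >= ALERT_THRESHOLD

    used, alert = domain_fullness(1800)
    print("storage domain is {:.0%} full{}".format(
        used, " - ALERT" if alert else ""))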
Created attachment 1447858 [details] Screenshot (Originally by Tal Nisan)
Added the number of images for each data domain as seen in the attachment (Originally by Tal Nisan)
This was targeted to 4.2.5 (and the title says so as well); did it land in 4.2.4 eventually?
It will in the next 4.2.4 build
Verified on 4.2.4.4. As discussed between Greg and Tal, the current implementation shows the number of active disks (including memory snapshot disks, OVF, and memory metadata) plus active snapshot disks in the storage domain. Looks good on both block and file storage domains, and matches the DB query: "SELECT count(*) FROM image_storage_domain_map WHERE storage_domain_id = %domain_guid%"
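For reference, a hedged sketch of running that verification query from Python with psycopg2. The connection details (database name "engine", user, password, host) and the domain GUID are assumptions that vary per setup, and as the original report notes, querying the engine database directly is not a supported interface and may break when the schema changes.

    import psycopg2

    DOMAIN_GUID = "00000000-0000-0000-0000-000000000000"  # hypothetical domain id

    # Connection parameters are illustrative only.
    conn = psycopg2.connect(dbname="engine", user="engine",
                            password="secret", host="localhost")
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT count(*) FROM image_storage_domain_map"
            " WHERE storage_domain_id = %s",
            (DOMAIN_GUID,))
        print("images in domain:", cur.fetchone()[0])
    conn.close()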
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2071