Bug 1000980 - No error shows when the allocation of new lvm volume set as 0
Summary: No error shows when the allocation of new lvm volume set as 0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: python-virtinst
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Giuseppe Scrivano
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 873976 1021037 (view as bug list)
Depends On: 1093980
Blocks: 1021037 1021789 1024339
 
Reported: 2013-08-26 09:00 UTC by tingting zheng
Modified: 2016-04-26 15:40 UTC
CC List: 10 users

Fixed In Version: python-virtinst-0.600.0-20.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1021789 (view as bug list)
Environment:
Last Closed: 2014-10-14 06:23:32 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2014:1444: python-virtinst bug fix and enhancement update (priority normal, status SHIPPED_LIVE, last updated 2014-10-14 01:05:50 UTC)

Description tingting zheng 2013-08-26 09:00:50 UTC
No error is shown when the allocation of a new LVM volume is set to 0.

Version:
virt-manager-0.9.0-19.el6.x86_64
libvirt-0.10.2-23.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create an LVM pool.
2. In the LVM pool, create a new volume: click "New Volume", set a name, set "Max Capacity" to 1000 MB, and set "Allocation" to a number smaller than 1000, e.g. 10.
3. Click "Finish"; an error is shown: "Sparse logical volumes are not supported, allocation must be equal to capacity".
4. In step 2, if "Allocation" is instead set to 0 or a random value, e.g. "sdll", no error is shown.

Actual results:
As described above.

Expected results:
The same error as in step 3 should also be shown in step 4.

Additional info:
Setting a random value, e.g. "sdll", for the volume allocation on other pool types also shows no error.
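A plausible mechanism for the silent acceptance above (an assumption, not verified against the RHEL 6 virt-manager source): a lenient numeric entry widget can coerce unparseable text to a default of 0, and an allocation of 0 is then treated as "unset" rather than as an error. In miniature, with a hypothetical `spinbutton_value` helper:

```python
def spinbutton_value(text, fallback=0.0):
    """Hypothetical model of a lenient numeric entry widget:
    unparseable text silently degrades to the fallback value."""
    try:
        return float(text)
    except ValueError:
        return fallback

# Both inputs from step 4 collapse to 0.0, which downstream code can
# interpret as "no allocation requested" and therefore never flag.
assert spinbutton_value("0") == 0.0
assert spinbutton_value("sdll") == 0.0
assert spinbutton_value("10") == 10.0
```

Under this model, a fix needs an explicit validity check on the raw input before the value is handed to libvirt, rather than relying on the widget to reject bad text.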

Comment 2 Martin Kletzander 2013-08-28 15:15:20 UTC
Allocation 0 means that there is no setting for allocation.  This might be confusing for users, but we need to keep the default value working.  Can we change the default allocation to the same number as the capacity in the case of a logical pool?  Maybe disable the widget to disallow changing it.  In any case, there should also be a check for whether the value is a number.
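The check suggested here could be sketched as a standalone function (illustrative only; `validate_volume` and its signature are hypothetical, not the actual virt-manager code):

```python
def validate_volume(alloc_text, cap_mb, pool_type):
    """Validate the new-volume dialog's allocation field (hypothetical
    sketch).  Returns the allocation in MB, or raises ValueError with a
    user-visible message."""
    # Reject non-numeric input such as "sdll" outright.
    try:
        alloc_mb = float(alloc_text)
    except (TypeError, ValueError):
        raise ValueError("Allocation must be a number")

    if alloc_mb < 0 or alloc_mb > cap_mb:
        raise ValueError("Allocation must be between 0 and the capacity")

    # Logical (LVM) pools do not support sparse volumes, so an allocation
    # of 0 (the "unset" default) must be rejected for them as well.
    if pool_type == "logical" and alloc_mb != cap_mb:
        raise ValueError("Sparse logical volumes are not supported, "
                         "allocation must be equal to capacity")
    return alloc_mb
```

With a check like this, both the "0" and the "sdll" input from step 4 of the report would produce an error instead of being silently accepted, while valid input on non-logical pools still passes.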

Comment 4 hyao@redhat.com 2013-10-22 06:03:47 UTC
Reproduced the bug with the following packages: 

# rpm -qa libvirt virt-manager
virt-manager-0.9.0-19.el6.x86_64
libvirt-0.10.2-29.el6.x86_64

Created a new volume in the LVM pool.
Setting "Allocation" to 0 or a random value, e.g. "sdll", shows no error.
Setting a random value, e.g. "sdll", for the volume allocation on other pool types shows no error either.

Comment 6 Giuseppe Scrivano 2013-12-19 11:34:43 UTC
*** Bug 873976 has been marked as a duplicate of this bug. ***

Comment 8 Giuseppe Scrivano 2014-04-30 07:24:44 UTC
*** Bug 1021037 has been marked as a duplicate of this bug. ***

Comment 9 zhoujunqin 2014-05-06 02:59:00 UTC
I tried to verify it with the latest packages:
virt-manager-0.9.0-20.el6.x86_64
python-virtinst-0.600.0-20.el6.noarch
libvirt-0.10.2-34.el6.x86_64
but could not get a clear warning/error message.
Steps:
1. Create an LVM pool.
2. Launch virt-manager: # virt-manager
3. In the LVM pool, create a new volume: click "New Volume", set a name, set "Max Capacity" to 1000 MB, and set "Allocation" to a number smaller than 1000, e.g. 10.
4. Click "Finish"; got the following error:
Uncaught error validating input: Gtk.Container.add() argument 1 must be gtk.Widget, not bool 

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/createvol.py", line 211, in finish
    if not self.validate():
  File "/usr/share/virt-manager/virtManager/createvol.py", line 268, in validate
    return self.val_err(_("Volume Parameter Error"), str(e))
  File "/usr/share/virt-manager/virtManager/createvol.py", line 280, in val_err
    ret = self.err.val_err(info, details, async=not modal)
  File "/usr/share/virt-manager/virtManager/error.py", line 116, in val_err
    self._simple_dialog(dtype, buttons, text1, text2, title, async)
  File "/usr/share/virt-manager/virtManager/error.py", line 110, in _simple_dialog
    sync=not async)
  File "/usr/share/virt-manager/virtManager/error.py", line 40, in _launch_dialog
    dialog.get_content_area().add(widget)
TypeError: Gtk.Container.add() argument 1 must be gtk.Widget, not bool

This is the same issue as new bug 1093980, so I will try to verify again once bug 1093980 is fixed.
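Reading the traceback, a boolean has ended up in the slot where Gtk.Container.add() expects a widget, i.e. somewhere along the val_err call chain a flag was passed where the dialog content widget belonged (an inference from the trace, not confirmed against bug 1093980). The failure pattern in miniature, with hypothetical stand-ins for the Gtk classes:

```python
# Minimal stand-ins for the Gtk classes; the real type check is
# enforced by PyGTK's C bindings, modelled here in pure Python.
class Widget:
    pass

class Container:
    def add(self, child):
        if not isinstance(child, Widget):
            raise TypeError("add() argument 1 must be Widget, not %s"
                            % type(child).__name__)

def launch_dialog(dialog, widget, sync=True):
    """Hypothetical analogue of _launch_dialog: the second positional
    argument must be the content widget."""
    dialog.add(widget)

dialog = Container()
# Correct call: a widget travels in the widget slot.
launch_dialog(dialog, Widget(), sync=False)

# Buggy call: a bare boolean lands in the widget slot, reproducing
# "TypeError: ... argument 1 must be Widget, not bool".
try:
    launch_dialog(dialog, False)
except TypeError as e:
    assert "bool" in str(e)
```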

Comment 10 Cui Lei 2014-05-06 03:05:54 UTC
Added depends on 1093980

Comment 11 zhoujunqin 2014-05-07 03:42:22 UTC
Retried verification with the latest builds:
virt-manager-0.9.0-21.el6.x86_64
python-virtinst-0.600.0-21.el6.noarch

Steps:
1. Create an LVM pool.
2. Launch virt-manager: # virt-manager
3. In the LVM pool, create a new volume: click "New Volume", set a name, set "Max Capacity" to 1000 MB, and set "Allocation" to a number smaller than 1000, e.g. 10.
4. Click "Finish"; got the error:
"Volume Parameter Error
Sparse logical volumes are not supported, allocation must be equal to capacity"

5. In step 3, if "Allocation" is set to 0 or a random value, e.g. "sdll", the same error as in step 4 is shown after clicking "Finish", as expected.

A problem was found in step 3:
When clicking "New Volume", the default "Allocation" value is "0", not the same number as "Max Capacity" (e.g. 1000),
but on RHEL 7 the default allocation is the same number as the capacity in the case of a logical pool.
Martin Kletzander also raised this issue in comment 2.
So Giuseppe Scrivano, can you help have a look? Thanks.

Comment 12 Giuseppe Scrivano 2014-05-23 07:08:00 UTC
from a quick check, virt-manager shows the same behavior on RHEL-7.  Future versions of virt-manager will look very different anyway as it is changed upstream, so I don't think we should pay particular attention to this detail.

Comment 13 zhoujunqin 2014-05-23 08:06:21 UTC
(In reply to Giuseppe Scrivano from comment #12)
> from a quick check, virt-manager shows the same behavior on RHEL-7.  Future
> versions of virt-manager will look very different anyway as it is changed
> upstream, so I don't think we should pay particular attention to this detail.

Thanks for your help, Giuseppe Scrivano. Moving to VERIFIED.

Comment 14 errata-xmlrpc 2014-10-14 06:23:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1444.html

