Bug 604680 - resizing existing PV does weird things
Summary: resizing existing PV does weird things
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: anaconda
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ales Kozumplik
QA Contact: Release Test Team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-06-16 13:38 UTC by Ales Kozumplik
Modified: 2014-09-30 23:39 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-06-23 08:37:52 UTC
Target Upstream Version:
Embargoed:


Attachments
storage.log (167.38 KB, text/plain), 2010-06-16 13:39 UTC, Ales Kozumplik
anaconda.log (8.21 KB, text/plain), 2010-06-16 13:39 UTC, Ales Kozumplik

Description Ales Kozumplik 2010-06-16 13:38:46 UTC
RHEL6.0-20100615.n.1

Steps to Reproduce:
1. Have a machine where RHEL 6 is already installed with the default LVM partitioning (not sure if this is necessary).
2. Run the installer again and choose a fresh install.
3. Choose to replace the existing Linux system(s) and to review and modify the partitioning layout.
4. On the partitioning screen, delete the volume group.
5. Edit the LVM physical volume; 'fill to maximum..' is checked. Change this to 'fill all space..' and enter 5000.
6. Add another partition with 'fill to maximum'.
  
Actual results:
The new partition takes all the free space. The original LVM physical volume is just 38 MB and is now sda3 (it was sda2).

Expected results:
sda2 (the LVM physical volume) stays at 5000 MB, and the new sda3 partition takes the rest.

Additional info:
This is VMware with an 8 GB disk.

Comment 1 Ales Kozumplik 2010-06-16 13:39:17 UTC
Created attachment 424448 [details]
storage.log

Comment 2 Ales Kozumplik 2010-06-16 13:39:54 UTC
Created attachment 424449 [details]
anaconda.log

Comment 4 RHEL Program Management 2010-06-16 13:53:00 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 5 David Lehman 2010-06-18 19:36:07 UTC
diff --git a/storage/partitioning.py b/storage/partitioning.py
index 3e85419..16b4c5a 100644
--- a/storage/partitioning.py
+++ b/storage/partitioning.py
@@ -73,7 +73,7 @@ def _createFreeSpacePartitions(anaconda):
             fmt_args = {}
         part = anaconda.id.storage.newPartition(fmt_type=fmt_type,
                                                 fmt_args=fmt_args,
-                                                size=1,
+                                                size=500,
                                                 grow=True,
                                                 disks=[disk])
         anaconda.id.storage.createDevice(part)


This would be a good start. On rawhide, PartitionDevice has a default size of 500 MB, so there we do not specify any size in this code at all. At any rate, a size of 1 makes very little sense.
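
For reference, a hypothetical sketch of the rawhide-style call (reusing the names from the diff above), where size is omitted and PartitionDevice's 500 MB default applies:

part = anaconda.id.storage.newPartition(fmt_type=fmt_type,
                                        fmt_args=fmt_args,
                                        grow=True,
                                        disks=[disk])
anaconda.id.storage.createDevice(part)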

There are two obvious ways to handle the rates at which growable partition requests are grown. The first is to grow all partitions at the same rate regardless of their base size, which seems to be what you expect. The second, which is what we do, is to grow requests at a rate proportional to the base size of each request, i.e. a request of size 10 MB grows by 10 sectors for every 1 sector a request of size 1 MB grows. This lets users control the sizes of the partitions relative to each other. There is certainly some argument for either approach.
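
To illustrate, a minimal Python sketch of that proportional policy (hypothetical code, not the actual anaconda implementation): free space is divided among growable requests in proportion to their base sizes, which is also why a base size of 1 barely grows at all.

def grow_requests(free_sectors, requests):
    # requests: list of (name, base_size_in_sectors) tuples for the
    # growable partition requests; returns {name: final_size_in_sectors}.
    total_base = sum(base for _, base in requests)
    sizes = {}
    for name, base in requests:
        # Each request receives a share of the free space proportional
        # to its base size, so a request with base 10 grows 10x as fast
        # as a request with base 1.
        sizes[name] = base + free_sectors * base // total_base
    return sizes

# Example: a request with base 500 vs. one with base 1 (cf. size=1 in
# the old code), competing for 10000 free sectors:
print(grow_requests(10000, [("a", 500), ("b", 1)]))
# -> {'a': 10480, 'b': 20}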

Please don't assign this bug to me unless you want me to pull the default size patches from rawhide into rhel6.

Comment 6 Ales Kozumplik 2010-06-21 13:58:53 UTC
> Please don't assign this bug to me unless you want me to pull the default size
> patches from rawhide into rhel6.    

Okay then, I'll take a look when I have the time. In the meantime: to me it looks like the completely wrong sizes are just one part of the problem. The other is that the partitions get renumbered (sda2 becomes sda3, see the bug description). Or is that something we do on purpose?

Thanks.
Ales

Comment 7 David Lehman 2010-06-21 20:50:58 UTC
(In reply to comment #6)
> > Please don't assign this bug to me unless you want me to pull the default size
> > patches from rawhide into rhel6.    
> 
> Okay then, I'll take a look when I have the time. In the meantime: to me it
> looks like the completely wrong sizes are just one part of the problem. The
> other is that the partitions get renumbered (sda2 becomes sda3, see the bug
> description). Or is that something we do on purpose?

How is it wrong? The order of the partitions on disk does not matter. IOW, it's not on purpose, but it's also nothing to avoid.

Comment 8 Ales Kozumplik 2010-06-23 07:28:12 UTC
Happens on KVM too, so this is not VMware-specific.

Comment 9 Ales Kozumplik 2010-06-23 08:37:52 UTC
(In reply to comment #5)
> regardless, which seems to be what you expect. The second, which is what we do,
> is to grow requests at a rate that is proportional to the base size of each
> request, ie: a request of size 10MB will grow at a rate of 10 sectors for every
> 1 sector a request of size 1MB grows. This allows users to control the sizes of

I reread this and realized you are in fact right: this works as expected, and the implementation seems to be doing that too. If the partition renumbering isn't an issue either (comment 7), we can close this as not a bug.

