Bug 604680 - resizing existing PV does weird things
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: anaconda
Version: 6.0
Hardware: All
OS: Linux
Priority: low  Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Ales Kozumplik
QA Contact: Release Test Team
Depends On:
Blocks:
Reported: 2010-06-16 09:38 EDT by Ales Kozumplik
Modified: 2014-09-30 19:39 EDT (History)
1 user

Doc Type: Bug Fix
Last Closed: 2010-06-23 04:37:52 EDT


Attachments
storage.log (167.38 KB, text/plain), attached 2010-06-16 09:39 EDT by Ales Kozumplik
anaconda.log (8.21 KB, text/plain), attached 2010-06-16 09:39 EDT by Ales Kozumplik

Description Ales Kozumplik 2010-06-16 09:38:46 EDT
RHEL6.0-20100615.n.1

Steps to Reproduce:
1. Have a machine where RHEL 6 is already installed with the default LVM partitioning (not sure if this is necessary).
2. Run the installer again and choose a fresh install.
3. Choose to replace existing Linux systems, and to review and modify the partitioning layout.
4. On the partitioning screen, delete the volume group.
5. Edit the LVM physical volume; 'fill to maximum..' is checked. Change this to 'fill all space..' and enter 5000.
6. Add another partition with 'fill to maximum'.
  
Actual results:
The new partition takes all the free space. The original LVM PV is just 38 MB and is now sda3 (it was sda2).

Expected results:
sda2 (the LVM physical volume) stays at 5000 MB and the new sda3 partition takes the rest.

Additional info:
This is VMware with an 8 GB disk.
Comment 1 Ales Kozumplik 2010-06-16 09:39:17 EDT
Created attachment 424448 [details]
storage.log
Comment 2 Ales Kozumplik 2010-06-16 09:39:54 EDT
Created attachment 424449 [details]
anaconda.log
Comment 4 RHEL Product and Program Management 2010-06-16 09:53:00 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.
Comment 5 David Lehman 2010-06-18 15:36:07 EDT
diff --git a/storage/partitioning.py b/storage/partitioning.py
index 3e85419..16b4c5a 100644
--- a/storage/partitioning.py
+++ b/storage/partitioning.py
@@ -73,7 +73,7 @@ def _createFreeSpacePartitions(anaconda):
             fmt_args = {}
         part = anaconda.id.storage.newPartition(fmt_type=fmt_type,
                                                 fmt_args=fmt_args,
-                                                size=1,
+                                                size=500,
                                                 grow=True,
                                                 disks=[disk])
         anaconda.id.storage.createDevice(part)


This would be a good start. On rawhide, PartitionDevice has a default size of 500 MB, so we do not specify any size in the code above. At any rate, a size of 1 makes very little sense.

There are two obvious ways to handle the rates at which growable partition requests are grown. The first is to grow all partitions at the same rate regardless of base size, which seems to be what you expect. The second, which is what we do, is to grow requests at a rate proportional to the base size of each request, i.e. a request of base size 10 MB grows 10 sectors for every 1 sector a 1 MB request grows. This lets users control the sizes of the partitions relative to each other. There is certainly some argument for either approach.
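To make the proportional strategy concrete, here is a minimal standalone sketch (not anaconda's actual implementation; the function name and the MB/sector units are illustrative assumptions) that splits a pool of free sectors among growable requests in proportion to their base sizes:

```python
def grow_proportionally(base_sizes_mb, free_sectors):
    """Distribute free_sectors among growable requests in proportion
    to each request's base size: a 10 MB request receives 10x the
    growth of a 1 MB request."""
    total_base = sum(base_sizes_mb)
    grown = []
    allocated = 0
    for base in base_sizes_mb:
        share = free_sectors * base // total_base  # integer sectors
        grown.append(share)
        allocated += share
    # hand any integer-division remainder to the last request
    grown[-1] += free_sectors - allocated
    return grown

# A 10 MB request grows 10x faster than a 1 MB one:
print(grow_proportionally([10, 1], 2200))  # → [2000, 200]
```

Under this scheme the 5000 MB PV and a 1 MB (size=1) autopart request grow wildly unevenly, which matches the 38 MB result seen in the description.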

Please don't assign this bug to me unless you want me to pull the default size patches from rawhide into rhel6.
Comment 6 Ales Kozumplik 2010-06-21 09:58:53 EDT
> Please don't assign this bug to me unless you want me to pull the default size
> patches from rawhide into rhel6.    

Okay then, I'll take a look when I have the time. In the meantime: to me it looks like the completely wrong sizes are just one part of the problem. The other one is moving the volumes on the partitions (sda2 becomes sda3, see the bug description). Or is that something we do on purpose?

Thanks.
Ales
Comment 7 David Lehman 2010-06-21 16:50:58 EDT
(In reply to comment #6)
> > Please don't assign this bug to me unless you want me to pull the default size
> > patches from rawhide into rhel6.    
> 
> Okay then, I'll take a look when I have the time. In the meantime: to me it
> looks like the completely wrong sizes are just one part of the problem. The
> other one is moving the volumes on the partitions (sda2 becomes sda3, see the
> bug description). Or is that something we do on purpose?

How is it wrong? The order of the partitions on disk does not matter. IOW, it's not on purpose, but it's also nothing to avoid.
Comment 8 Ales Kozumplik 2010-06-23 03:28:12 EDT
Happens on KVM too, so this is not VMware-specific.
Comment 9 Ales Kozumplik 2010-06-23 04:37:52 EDT
(In reply to comment #5)
> regardless, which seems to be what you expect. The second, which is what we do,
> is to grow requests at a rate that is proportional to the base size of each
> request, ie: a request of size 10MB will grow at a rate of 10 sectors for every
> 1 sector a request of size 1MB grows. This allows users to control the sizes of

I reread this and realized you are in fact right: this works as expected, and the implementation seems to be doing exactly that. If the partition renames aren't an issue either (comment 7), we can close this as not a bug.
