Bug 604680
| Summary: | resizing existing PV does weird things | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Ales Kozumplik <akozumpl> |
| Component: | anaconda | Assignee: | Ales Kozumplik <akozumpl> |
| Status: | CLOSED NOTABUG | QA Contact: | Release Test Team <release-test-team-automation> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 6.0 | CC: | jzeleny |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2010-06-23 08:37:52 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | storage.log, anaconda.log | | |
Description
Ales Kozumplik 2010-06-16 13:38:46 UTC

Created attachment 424448: storage.log
Created attachment 424449: anaconda.log
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux major release. Product Management has requested further review of this request by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux major release. This request is not yet committed for inclusion.

Comment 5

```diff
diff --git a/storage/partitioning.py b/storage/partitioning.py
index 3e85419..16b4c5a 100644
--- a/storage/partitioning.py
+++ b/storage/partitioning.py
@@ -73,7 +73,7 @@ def _createFreeSpacePartitions(anaconda):
             fmt_args = {}
         part = anaconda.id.storage.newPartition(fmt_type=fmt_type,
                                                 fmt_args=fmt_args,
-                                                size=1,
+                                                size=500,
                                                 grow=True,
                                                 disks=[disk])
         anaconda.id.storage.createDevice(part)
```

This would be a good start. On rawhide, PartitionDevice has a default size of 500 MB, so we do not specify any size in the above code. At any rate, a size of 1 makes very little sense.

There are two obvious ways to handle the rates at which growable partition requests are grown. The first is to grow all partitions at the same rate regardless of their base sizes, which seems to be what you expect. The second, which is what we do, is to grow requests at a rate that is proportional to the base size of each request, i.e. a request of size 10 MB will grow at a rate of 10 sectors for every 1 sector a request of size 1 MB grows. This allows users to control the sizes of the partitions relative to each other. There is certainly some argument for either approach.

Please don't assign this bug to me unless you want me to pull the default size patches from rawhide into rhel6.

Comment 6

> Please don't assign this bug to me unless you want me to pull the default size
> patches from rawhide into rhel6.
Okay then, I'll take a look when I have the time. In the meantime: to me it looks like the completely wrong sizes are just one part of the problem. The other one is moving the volumes on the partitions (sda2 becomes sda3, see the bug description). Or is that something we do on purpose?
Thanks.
Ales
Comment 7

(In reply to comment #6)
> > Please don't assign this bug to me unless you want me to pull the default size
> > patches from rawhide into rhel6.
>
> Okay then, I'll take a look when I have the time. In the meantime: to me it
> looks like the completely wrong sizes are just one part of the problem. The
> other one is moving the volumes on the partitions (sda2 becomes sda3, see the
> bug description). Or is that something we do on purpose?

How is it wrong? The order of the partitions on disk does not matter. In other words, it's not done on purpose, but it's also nothing we try to avoid.

Happens on KVM too, so this is not VMware specific.

(In reply to comment #5)
> regardless, which seems to be what you expect. The second, which is what we do,
> is to grow requests at a rate that is proportional to the base size of each
> request, ie: a request of size 10MB will grow at a rate of 10 sectors for every
> 1 sector a request of size 1MB grows. This allows users to control the sizes of

I reread this and realized you are in fact right: this works as expected, and the implementation seems to be doing exactly that. If the partition renames aren't an issue either (comment 7), we can close this as not a bug.
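For reference, the proportional growth policy described in comment 5 can be sketched as follows. This is an illustrative model only, not anaconda's actual growth code; `grow_requests` is a hypothetical helper:

```python
# Sketch (assumption: NOT anaconda's real implementation) of the policy from
# comment 5: free space is handed out to growable requests in proportion to
# each request's base size, so a 10 MB request grows 10x as fast as a 1 MB one.

def grow_requests(base_sizes_mb, free_mb):
    """Return final sizes after distributing free_mb proportionally to base size."""
    total_base = sum(base_sizes_mb)
    return [base + free_mb * base / total_base for base in base_sizes_mb]

# Two requests with base sizes 1 MB and 10 MB sharing 989 MB of free space:
# the 10 MB request receives ten times the growth of the 1 MB request.
final = grow_requests([1, 10], 989)
```

Under this policy the base sizes act as weights, which is why a `size=1` request (the pre-patch default) ends up almost starved next to larger requests.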