Bug 892269

Summary: Anaconda 'free space' logic doesn't handle containers properly
Product: Fedora
Component: anaconda
Version: rawhide
Hardware: All
OS: All
Status: CLOSED DUPLICATE
Severity: urgent
Priority: unspecified
Keywords: CommonBugs
Reporter: Adam Williamson <awilliam>
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: anaconda-maint-list, bugzilla, cquike, duffy, g.kaviyarasu, jonathan, jpriddy, kparal, plarsen, robatino, satellitgo, sbueno, vanmeeuwen+fedora
Whiteboard: https://fedoraproject.org/wiki/Common_F18_bugs#lvm-free-space
Doc Type: Bug Fix
Type: Bug
Bug Blocks: 834090
Last Closed: 2013-02-01 22:05:53 UTC

Attachments:
mockup of 'free space' bar showing free space in VGs

Description Adam Williamson 2013-01-06 08:50:44 UTC
Ugh, that's an ugly summary, but hopefully I can explain better with the steps to reproduce:

1. Install F18 (RC1) using defaults (guided partitioning, LVM)
2. Boot back to the installer
3. Go to custom partitioning
4. Note that 'free space' is a very small number - 969.23kB in my test
5. Reduce the size of the existing root or home LV and hit Apply
6. Note that the displayed size of the LV changes, but the 'free space' calculation does not change - it is still 969.23kB
7. Create a new mount point for / for the new installation. It will be created as a new LV within the existing VG, but with a size of 969.23kB (or whatever the 'free space' is). anaconda will refuse to let you make it any bigger, even though there should be plenty of space in the VG which you just freed up by reducing the size of one of the existing LVs.
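To make the reported behaviour concrete, here is a minimal, purely illustrative Python sketch (none of these names are from anaconda's code) of the difference between counting only unpartitioned disk space and also counting free extents inside an existing VG:

# Hypothetical sketch, not anaconda's actual code: the behaviour described
# above looks as though only unpartitioned disk space is counted, so the
# extents freed by shrinking an LV inside a VG never show up.

def naive_free_space(unpartitioned_bytes, vgs):
    # Observed behaviour: container-internal free space is ignored.
    return unpartitioned_bytes

def container_aware_free_space(unpartitioned_bytes, vgs):
    # What a container-aware count would report instead.
    vg_free = {name: vg["size"] - sum(vg["lvs"].values()) for name, vg in vgs.items()}
    return {"unpartitioned": unpartitioned_bytes, "vg_free": vg_free}

vgs = {"fedora": {"size": 12 * 1024**3,           # 12GB VG from the report
                  "lvs": {"root": 6 * 1024**3}}}  # root shrunk from 12GB to 6GB
print(naive_free_space(969_230, vgs))             # ~969.23kB, as reported
print(container_aware_free_space(969_230, vgs))   # also shows ~6GB free inside the VG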

Affects at least TC4 and RC1, probably earlier.

Comment 1 Adam Williamson 2013-01-06 08:52:16 UTC
Proposing as a blocker: as I read it, this effectively means you can't install alongside an existing full-disk LVM install (of anything) unless you do the shrink outside of anaconda. If you have an existing VG taking up all your space, there's no way to shrink the VG itself in the anaconda UI, so since this bug exists, your only option is to blow it away.

Comment 2 Chris Murphy 2013-01-06 08:56:01 UTC
Confirmed for both single- and multiple-disk VGs. If all PEs in the VG are allocated to LVs, it's impossible to install F18 without first deleting one or more LVs, even if there is plenty of space that could be reclaimed by shrinking LVs.

Comment 3 Adam Williamson 2013-01-06 08:57:22 UTC
The funny thing is, I have a record that I did a test very like this two days ago, and it worked:

http://slashdot.org/comments.pl?sid=3358453&cid=42480831

but now I can't make it work for the life of me. I seem to be able to reproduce this problem every time, with TC3, TC4 and RC1. TC3 dates back to Dec 17th, so I must've been using at least TC3 for the test referenced in that comment.

Comment 4 Adam Williamson 2013-01-06 09:05:41 UTC
Aha. Got it.

So the final step (creating the new mount point) is the key one. If you specify a size at that step, it will work 'correctly'.

So I have a 12GB previous-install 'root' LV. If I resize that to 6GB, then create the new mount point, set its mount point as / and its size as 5000MB, that works: it is created as a new LV within the existing VG, size 5000MB.

But if you don't enter an explicit size for the new LV, it gets created only as big as the free space check thinks is available.

You only get one shot to make it big :) Once you create it with a given size, _that's as big as it can be_.

So if you left the size blank and got a 969kB LV, you can't make it any bigger. Game over. If you create it manually as size 5000MB, even though in theory you should still have 1GB of space, you can't then edit it to be 6000MB. Or 5500MB. Or 5001MB. It can only be as big as you created it.

If you edit it to be _smaller_, that size becomes the new maximum size. So if you edit it down to 4000MB, you cannot edit it back up to 5000MB. You can only ever edit it smaller, never larger.

It's very clearly the 'available space' check stuffing things up here, but at least it can be worked around. We might possibly get away with documenting this, I guess. Also nominating as NTH and adding the CommonBugs keyword.
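As a hedged illustration of the capping behaviour described in this comment, using the numbers above (12GB VG, root shrunk to 6GB, new 5000MB LV), the maximum editable size appears to be derived from the current size plus the near-zero global free-space figure. The function names below are hypothetical, not the spinbutton's real code:

# Hypothetical sketch of the size cap described above (not real anaconda code).

def observed_max_editable_size(current_lv_size_mb, global_free_space_mb):
    # Observed: the cap ignores free extents in the LV's own VG, so the
    # LV can only ever be edited smaller.
    return current_lv_size_mb + global_free_space_mb

def expected_max_editable_size(current_lv_size_mb, vg_free_space_mb):
    # Expected: the cap should grow as space is freed inside the VG.
    return current_lv_size_mb + vg_free_space_mb

print(observed_max_editable_size(5000, 0.969))   # ~5000MB: can't grow back to 6000MB
print(expected_max_editable_size(5000, 1000))    # 6000MB, using the roughly 1GB still free in the VG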

Comment 5 Adam Williamson 2013-01-07 16:23:00 UTC
*** Bug 892494 has been marked as a duplicate of this bug. ***

Comment 6 Adam Williamson 2013-01-07 16:25:20 UTC
As 892494 points out, the behaviour is the same if you delete an existing LV rather than shrinking one. That's not surprising, really - the bug basically boils down to 'bad handling of free space within a container'.

Comment 7 Adam Williamson 2013-01-07 18:38:13 UTC
Discussed at 2013-01-07 QA meeting acting as a blocker review meeting: http://meetbot.fedoraproject.org/fedora-meeting/2013-01-07/fedora-qa.2013-01-07-16.01.log.txt . Based on the severity of this bug we were minded to accept it as a blocker. However, dlehman says the only plausible fix at this point - without serious re-engineering that is inappropriate for this release phase - would be "basically don't cap specified max for new mountpoints based on free space". That's a technically simple change but it could potentially cause other problems elsewhere which we would not know about and would require more testing to expose: it might, overall, make things worse. It also doesn't really 'fix' this bug properly, it just gives you more of an opportunity to work around it (and also to shoot yourself in the foot). It would not solve the case where you don't enter a size for the new mount point when creating it.

Since this proposed 'fix' is so inadequate and no other fix appears possible at this point in time, we reluctantly decided to leave this behaviour alone and document it: i.e., reject the bug as a blocker. It is technically accepted as NTH, but we would not accept the fix described above, so please do not push such a change into the next anaconda build. We would consider a less risky and more useful fix, but please propose it for discussion before pushing it.

Comment 8 Adam Williamson 2013-01-24 06:45:02 UTC
Nominating this for F19 Final - seems like something we should have fixed for F19.

I'm also making this more general, now that we have the opportunity to 'fix it properly'. The underlying problem here is that the concept of a single pool of 'free space' that anaconda currently has just doesn't really work. I don't think we can sensibly consider 'free space inside a container' and 'unpartitioned space' as the same pool in any way; we need to keep track of these two things separately, and probably display them separately. I would think free space within all existing and to-be-created VGs should be tracked and displayed to the user separately from each other, and separately from the 'free, unpartitioned space' counter. Perhaps it should also be possible to decide which VG an LV will be part of from the same place you decide which disk a regular partition will be on.
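As a purely illustrative sketch of the separate-pool idea (a hypothetical structure, not a patch proposal), the bookkeeping could look something like this, with unpartitioned space and per-VG free space kept and rendered as distinct figures:

# Hypothetical Python sketch of tracking the two kinds of free space separately.

from dataclasses import dataclass, field

@dataclass
class FreeSpaceSummary:
    unpartitioned: int                                    # bytes outside any partition
    container_free: dict = field(default_factory=dict)    # VG name -> free bytes inside it

    def labels(self):
        # Strings a space bar or dropdown could show for each pool.
        yield f"{self.unpartitioned / 1024**3:.1f} GiB unpartitioned"
        for name, free in self.container_free.items():
            yield f"{free / 1024**3:.1f} GiB free in VG '{name}'"

summary = FreeSpaceSummary(unpartitioned=1 * 1024**2,
                           container_free={"fedora": 6 * 1024**3})
print(list(summary.labels()))   # shows the two pools as separate labels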

Pete knows what we do with btrfs; that's probably its own bug.

Comment 9 Adam Williamson 2013-01-24 06:45:25 UTC
Clearing WB fields.

Comment 10 Adam Williamson 2013-01-24 06:45:54 UTC
CCing Mo as this may have UI implications.

Comment 11 Adam Williamson 2013-01-24 07:33:43 UTC
Created attachment 686506 [details]
mockup of 'free space' bar showing free space in VGs

Here's a quick-n-dirty mockup of how this could look. No bonus points for spotting that the existence of an 'arch' VG is very unlikely in this case, or that I can't add up. :)

Comment 12 Chris Murphy 2013-01-24 08:12:56 UTC
"Available" is vague. For "Available Space" I suspect this is really "unallocated space" which are (useful) sectors not allocated in the partition table.

For a VG, this is curiously challenging. Does "available" mean unallocated PEs (i.e. space allocated to the VG, but not allocated to an LV)? Or does "available" mean unallocated PEs plus unused space inside LVs? Whichever way this is answered, there are troubling consequences.

If the former, since most of the time all VG space is allocated to the LVs in it, you could have a largely unused VG, yet it's reported as having 0 GB available space.

If the latter, what is supposed to happen with 30 GB "available" only in LVs, but no unallocated PEs, and the user decides to add an LV? This means one or more LVs must be shrunk. But how is this done for the user?
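A small worked example of the two readings, with illustrative numbers only:

# Illustrative numbers, not from any real system.
vg_size  = 100        # GB of PEs in the VG
lv_sizes = [60, 40]   # every extent is allocated to one of two LVs
lv_used  = [45, 25]   # filesystem usage inside those LVs

unallocated_pe = vg_size - sum(lv_sizes)                        # 0 GB (reading 1)
unused_in_lvs  = sum(s - u for s, u in zip(lv_sizes, lv_used))  # 30 GB of slack inside LVs
reading_two    = unallocated_pe + unused_in_lvs                 # 30 GB (reading 2)

print(unallocated_pe, reading_two)
# Reading 1 reports 0 GB even though 30 GB is recoverable; reading 2
# reports 30 GB, but realizing it requires shrinking one or more LVs.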

So, I'd say "available" here too isn't a great term. It probably should be "unallocated space in VG".

While I understand the presence of Total Space, I don't see how it helps me make any decisions and thus I find it superfluous.

Comment 13 Adam Williamson 2013-01-24 08:37:24 UTC
Chris: I agree it's vague, but the obvious objection to "Unallocated" is that it's not very intuitive. You are a disk geek and know what "Unallocated" means. Not everyone does.

For a VG, I was thinking unallocated PEs. We already provide a mechanism for reducing the size of existing LVs: select one, and change its size in the right-hand pane. Simple enough. The original bug here is that, once you do so, the 'available' / 'unallocated' space within the VG is not counted or shown to the user. I don't think we want to go around automatically shrinking LVs when the user tries to add one without sufficient unallocated space available: we're in *manual* partitioning, here.

I agree that 'unallocated' is more correct than 'available' and will have useful significance for advanced users. Given that this is the *manual* partitioning screen, we might argue it is a better choice. Of course, a word that is both widely understandable and more accurate than 'available' would be the *ideal* choice.

Comment 14 Chris Murphy 2013-01-24 19:03:48 UTC
The purpose of the term "unallocated" in the first label is not so much to be technically correct as to keep "Available Space" from implying "All Available Space".

If the first label is called "Available Space", I expect its value to be as large as or larger than the combined "Available Space" for all VGs, plus possibly space not in the VGs but likewise available. But that first label isn't "all available space"; it's something else: Available non-Pooled Space. I really don't think we want to call it that, though. You presumably want to convey that "Available Space" could be used for a conventional partition, an LVM or Btrfs pool, or even RAID (a variation on a conventional partition).

"Unallocated Space" or "Free Space" better conveys that it's not the same kind of thing as "available space". And in a sense, it's not yet available until it is allocated to a partition, an array, a VG, or a Btrfs volume.

Comment 15 Adam Williamson 2013-02-01 20:40:02 UTC
So we're discussing this bug at present. I think we're making progress, but it's slightly unclear as there are really three elements:

1) CODE BUG: the 'spinbutton' element used to edit the size of an LV within an existing VG is incorrectly capping the maximum possible size

2) CODE BUG: if you have lots of space inside an existing VG but very little unpartitioned space, and you create a new mount point without specifying a size, it gets created as an LV inside the existing VG but with its size set to the amount of unpartitioned space

3) UI: we should somehow inform the user of the space available inside existing containers - Mo doesn't like my mockup, but perhaps inside the 'Volume Group' dropdown we could indicate the size and available space of each volume group in grey next to its name?

Would it make things clearer if we separated these three into separate bugs?

Comment 16 Adam Williamson 2013-02-01 22:05:53 UTC
dlehman agrees we should split them up. This bug has gotten kind of long, so I think it might be best if we just close it and file each one as a new bug. I'll do that.

1) https://bugzilla.redhat.com/show_bug.cgi?id=906906 (editing LVs within an existing VG size bug)
2) https://bugzilla.redhat.com/show_bug.cgi?id=906908 (creating new LVs within existing VG size bug)
3) https://bugzilla.redhat.com/show_bug.cgi?id=906915 (UI doesn't provide info on space in containers)

Those three bugs should cover the separate things we were discussing under this heading, and hopefully reduce confusion. I'll close this one as a dupe of one of them.

*** This bug has been marked as a duplicate of bug 906906 ***

Comment 17 Peter Larsen 2013-02-08 02:24:03 UTC
The installer doesn't allow encryption to be set up for individual LVs either. While the checkbox is listed for each LV, the encryption covers the whole PV - not the individual LV. I have not found a workaround for this. If I manually set up partitions/filesystems/encryption and then launch the installer, I cannot get the installer to read the current setup and assign mount points.

Major issues:
1. Reading the existing setup. Even existing VG names are not listed.
2. Being able to simply assign mount points and optionally "reformat" the existing structure seems to be completely missing from the interface.
3. Encryption is only available at the PV level

Minor issues:
1. The password complexity check seems off. Even non-words and random letters seem to get only 1 or 2 bars of "complexity". Setting an encryption password does not allow you to continue if the password is deemed too simple - this should be corrected.
2. The maximum size of a new install's VG is calculated as the sum of all mount points. As a result, the VG does not allocate the whole disk, which would allow space to be added later. If you start with a 256GB HDD and do a default install, you end up with about 150GB of unallocated, unpartitioned space. The VG space should either be specified separately, or default to taking up all disk space. Having an advanced option where VG space could be managed would be best.
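A rough illustration of the sizing arithmetic described in minor issue 2, using assumed request sizes (the real defaults may differ):

# Assumed mount-point requests for illustration only; not the installer's real defaults.
disk_gb  = 256
requests = {"/boot": 0.5, "swap": 4, "/": 50, "/home": 50}   # GB

vg_as_sum_of_requests = sum(requests.values()) - requests["/boot"]   # /boot sits outside the VG
unpartitioned_leftover = disk_gb - sum(requests.values())

print(vg_as_sum_of_requests, unpartitioned_leftover)   # roughly 104 GB VG, ~150 GB left over
# The suggested alternative is to size the VG to the whole remaining disk and
# keep any unrequested space as free extents inside it.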

Comment 18 Máirín Duffy 2013-02-08 13:37:33 UTC
Hey Peter, let's open a separate bug on that, okay? I know David L. is aware of that issue and intends to fix it; I have it written out on my list of issues but no bug number. I'm going to copy-paste your comment here into a new bug so we can track it.

Comment 19 Máirín Duffy 2013-02-08 13:45:32 UTC
Peter, here's the new bug: https://bugzilla.redhat.com/show_bug.cgi?id=909228