Bug 1285063 - Resize of stacked thin-pool should be doing multiple commits
Status: NEW
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Assigned To: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
Reported: 2015-11-24 14:15 EST by Zdenek Kabelac
Modified: 2017-12-07 15:37 EST (History)
CC List: 13 users

Doc Type: Bug Fix
Last Closed: 2017-08-08 08:27:32 EDT
Type: Bug
rule-engine: lvm-technical-solution?
rule-engine: lvm-test-coverage?


Attachments: None
Description Zdenek Kabelac 2015-11-24 14:15:57 EST
Description of problem:

When we resize devices with a bigger stack (e.g. thin-pool -> tdata -> raid),
the lvm2 metadata and the real device state may get out of sync.

This upstream commit: https://www.redhat.com/archives/lvm-devel/2015-November/msg00144.html ensured the order is correct when using a plain LV as the data volume,
however we still have a problem with the preload of a bigger stack.

Outline of the problem with the current code behavior:

Example command: lvextend -l+10 vg/pool

1. Precommit metadata with: bigger raid + bigger _tdata + bigger thin-pool

2. Then it preloads and immediately resumes any subdevices of the thin-pool.
This means that for a short moment we already have a 'bigger raid' device,
but the metadata for that raid is not yet committed; if there is any failure here, we end up with a mismatch between the real volume size and the committed volume size.
This will get more complex when we add 'caching' as another layer.

To fix this we likely need to introduce chained commits and a new flag for an 'in-progress' resize operation. Such a resize may not be reversible (i.e. raid does not yet support size reduction) and might need to be 'finished' (e.g. after a power outage), so we need corresponding lvconvert --repair support.
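The failure window above, and how chained commits would close it, can be sketched with a toy model (plain Python with invented names; this is not lvm2 code, just an illustration of the ordering):

```python
# Toy model of the resize ordering problem. NOT lvm2 code; all names invented.

class Layer:
    def __init__(self, name, size):
        self.name = name
        self.real_size = size        # size of the active dm device
        self.committed_size = size   # size recorded in committed metadata

def resize_current(stack, new_size, crash_after_resume=False):
    """Current behavior: one commit at the very end."""
    # 1. precommit metadata for the whole stack (not yet committed)
    # 2. preload + resume bottom-up: the devices grow immediately
    for layer in stack:
        layer.real_size = new_size
    if crash_after_resume:
        return  # crash here: committed metadata still has the old sizes
    # 3. single commit for the whole stack
    for layer in stack:
        layer.committed_size = new_size

def resize_chained(stack, new_size, crash_after=None):
    """Sketch of chained commits: commit after each layer is resumed,
    with an 'in-progress' flag so a repair tool could finish the job."""
    in_progress = True
    for i, layer in enumerate(stack):
        layer.real_size = new_size
        layer.committed_size = new_size  # commit this layer before the next
        if crash_after == i:
            return in_progress  # layers <= i consistent, the rest untouched
    in_progress = False
    return in_progress

stack = [Layer("raid", 100), Layer("pool_tdata", 100), Layer("thin-pool", 100)]
resize_current(stack, 110, crash_after_resume=True)
# mismatch: the raid device is already 110 extents, metadata still says 100
assert stack[0].real_size == 110 and stack[0].committed_size == 100
```

With chained commits, a crash at any point leaves every already-committed layer consistent with its real size; only the not-yet-resized layers remain at the old size, which the 'in-progress' flag lets lvconvert --repair detect and finish.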

Valuable notes on dm constraints here:

1. A table load needs all used devices to already be resumed with proper sizes.
2. When extending a device we 'should' not need to suspend with flush.
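Constraint 1 is why the resume of the lower layer must come first: a table referencing a subdevice only loads once that subdevice is active at the size the table expects. A minimal toy illustration (invented names, not the real device-mapper API):

```python
# Toy illustration of constraint 1: a dm table referencing a subdevice
# can only be loaded once that subdevice is resumed at (at least) the
# size the table needs. Invented names, not the real device-mapper API.

class Dm:
    def __init__(self):
        self.active = {}  # name -> size of the live (resumed) table

    def resume(self, name, size):
        self.active[name] = size

    def load_table(self, name, deps):
        # deps: {subdevice_name: required_size}
        for dep, need in deps.items():
            have = self.active.get(dep)
            if have is None or have < need:
                raise RuntimeError(f"{dep} not resumed at size {need}")
        return True

dm = Dm()
dm.resume("raid", 1000)
try:
    dm.load_table("thin-pool", {"raid": 1100})  # raid not yet extended
except RuntimeError:
    pass                                        # load refused, as expected
dm.resume("raid", 1100)                         # extend + resume first...
assert dm.load_table("thin-pool", {"raid": 1100})  # ...then the load works
```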

Version-Release number of selected component (if applicable):
lvm2 2.02.135

Comment 1 Jan Kurik 2016-02-24 09:01:29 EST
This bug appears to have been reported against 'rawhide' during the Fedora 24 development cycle.
Changing version to '24'.

More information and reason for this action is here:
https://fedoraproject.org/wiki/Fedora_Program_Management/HouseKeeping/Fedora24#Rawhide_Rebase
Comment 2 Fedora End Of Life 2017-07-25 15:33:28 EDT
This message is a reminder that Fedora 24 is nearing its end of life.
Approximately 2 (two) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 24. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '24'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 24 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Comment 3 Fedora End Of Life 2017-08-08 08:27:32 EDT
Fedora 24 changed to end-of-life (EOL) status on 2017-08-08. Fedora 24 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.
Comment 4 Zdenek Kabelac 2017-08-14 07:20:12 EDT
Moving it to an upstream BZ.
Comment 5 Zdenek Kabelac 2017-12-07 15:37:37 EST
One of the ideas floating in my mind:

Currently, when we 'write' metadata during the resize process, we can mark certain LVs in the stack as 'non-suspendable'.

So e.g. when resizing a raid _tdata,
we mark the thin-pool as non-suspendable.

This will make sure that an operation using the proper top-level LOCKNAME will avoid suspending/touching nodes ABOVE those we need to work with.

A more complex solution might be to deduce the need to avoid suspend by looking at the existing state of the tables, but recovery of lvm2 metadata would likely be more mysterious on some error paths.
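The marking idea can be sketched as follows (a toy model with invented names, not lvm2 internals): while the raid _tdata is being resized, the thin-pool above it carries the non-suspendable mark, and any suspend of the LV tree skips the marked nodes.

```python
# Toy sketch of the 'non-suspendable' marking idea. Invented names,
# not lvm2 internals.

def suspend_tree(tree, non_suspendable):
    """Return the nodes that would actually be suspended, top-down,
    skipping any node carrying the non-suspendable mark."""
    return [lv for lv in tree if lv not in non_suspendable]

# the stack, top to bottom
tree = ["thin-pool", "pool_tdata", "raid"]

# while resizing the raid _tdata, the thin-pool is marked non-suspendable,
# so nodes above the one being worked on are left untouched
marks = {"thin-pool"}
assert suspend_tree(tree, marks) == ["pool_tdata", "raid"]
```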
