Bug 1420906 - [RFE] Raid synchronization should know to use 'SSD' with zeroing
Summary: [RFE] Raid synchronization should know to use 'SSD' with zeroing
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: LVM and device-mapper
Classification: Community
Component: lvm2
Version: 2.02.169
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: unspecified
Target Milestone: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-09 20:35 UTC by Zdenek Kabelac
Modified: 2019-01-18 15:33 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-18 15:33:36 UTC
Embargoed:
rule-engine: lvm-technical-solution?
rule-engine: lvm-test-coverage?



Description Zdenek Kabelac 2017-02-09 20:35:39 UTC
Description of problem:

When a 'raid' LV is made from SSDs with a working zeroing feature (whitelisted in the kernel, or whatever else that would mean), the raid 'sync' should avoid useless syncing and instead just 'TRIM' both legs and consider the raid in sync.

lvm2 should likely be able to recognize this state and create the raid/mirror in an automatically in-sync state.
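
Something close to this can already be approximated by hand (a sketch; the device names /dev/sda and /dev/sdb and the VG/LV names and sizes are hypothetical, and this is only safe when the devices guarantee zero-after-discard):

  # Discard the future legs so both read back as all zeroes
  # (check /sys/block/<dev>/queue/discard_zeroes_data first)
  blkdiscard /dev/sda
  blkdiscard /dev/sdb
  vgcreate vg_ssd /dev/sda /dev/sdb
  # --nosync skips the initial resynchronization; valid here because
  # both legs already read back identically (all zeroes)
  lvcreate --type raid1 -m 1 --nosync -L 10G -n lv_raid vg_ssd

The RFE is essentially asking lvm2 to detect this situation and apply the equivalent of --nosync automatically.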


Version-Release number of selected component (if applicable):
lvm 2.02.169

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Heinz Mauelshagen 2019-01-18 15:33:36 UTC
Trimming/discarding any storage device as requested doesn't make sense, for various reasons:

- discards can take (too) long, depending on the device's implementation
- the RAID device on top can be written to immediately after creation, so
  it's not trivial to allow writes in parallel with large and slow discard processing
- on upgrades from linear we need to assume the full content needs resynchronizing

Closing, because this is an optimization only for the specific case of raid1/10/4/5 creation
and initial resynchronization (raid6 requires it), which would be costly to implement.
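
As a side note, whether a device claims deterministic zeroes after discard can be checked through sysfs (a sketch, assuming a device named sda; newer kernels deprecated this attribute and hardwire it to 0):

  # 1 = discarded blocks are guaranteed to read back as zeroes
  cat /sys/block/sda/queue/discard_zeroes_data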

