Bug 1899571 - after pvmove-ing LV from PV1 that uses read-ahead X to PV2 that uses read-ahead Y - LV on new destination should deduce RA from PV2.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1778977
Blocks:
 
Reported: 2020-11-19 15:20 UTC by Zdenek Kabelac
Modified: 2022-05-19 07:25 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1778977
Environment:
Last Closed: 2022-05-19 07:25:27 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Zdenek Kabelac 2020-11-19 15:20:04 UTC
+++ This bug was initially created as a clone of Bug #1778977 +++

Description of problem:
This is the issue that was requested to be tested in
https://bugzilla.redhat.com/show_bug.cgi?id=1722860#c9

and that was shown not to work in
https://bugzilla.redhat.com/show_bug.cgi?id=1722860#c11


lvm2-2.02.186-2.el7.x86_64

# cat /sys/block/sd{a..b}/queue/read_ahead_kb
256
128
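
For reference, read_ahead_kb is in kilobytes while blockdev works in 512-byte sectors, so these values correspond to 512 and 256 sectors respectively. A hedged cross-check on the same devices (output assumed from the values above):

# blockdev --getra /dev/sda
512
# blockdev --getra /dev/sdb
256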

Not using any 'preset' RA value for the LV (see the second sentence in Comment 9):
# lvcreate -n mylv -L100M vg /dev/sda
  Logical volume "mylv" created.

# ls -la /dev/mapper/vg-mylv
lrwxrwxrwx. 1 root root 7 Oct 10 11:25 /dev/mapper/vg-mylv -> ../dm-2
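
The dm-2 name can also be confirmed from the kernel major/minor numbers that lvs reports; a hedged check, with the minor assumed from the symlink above and the usual device-mapper major of 253 (output illustrative):

# lvs -o lv_name,lv_kernel_major,lv_kernel_minor vg
  LV   KMaj KMin
  mylv  253    2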

# lvs -o lv_name,devices vg
  LV   Devices
  mylv /dev/sdb(0)

# cat /sys/block/dm-2/queue/read_ahead_kb
128
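
LVM itself can report both the requested and the kernel-visible read-ahead via the lv_read_ahead and lv_kernel_read_ahead fields; a hedged check, assuming the LV was left at the default 'auto' setting (output illustrative):

# lvs -o lv_name,lv_read_ahead,lv_kernel_read_ahead vg
  LV   Rahead KRahead
  mylv   auto 128.00k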

# pvmove /dev/sdb /dev/sda
  /dev/sdb: Moved: 20.00%
  /dev/sdb: Moved: 100.00%
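
After the move the LV segments should reside on /dev/sda, which can be confirmed with the same query used above (output assumed from the reported configuration):

# lvs -o lv_name,devices vg
  LV   Devices
  mylv /dev/sda(0)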


The LV should have deduced an RA of 256 KB from /dev/sda, but it remained unchanged:
# cat /sys/block/dm-2/queue/read_ahead_kb
128
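
A possible workaround, sketched here but not verified as part of this report: deactivating and reactivating the LV should let LVM re-deduce the read-ahead from the new underlying PV, or the value can be forced directly with blockdev (512 sectors = 256 KB):

# lvchange -an vg/mylv
# lvchange -ay vg/mylv
# blockdev --getra /dev/vg/mylv

# blockdev --setra 512 /dev/vg/mylv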

-----


Cloned for further investigation into whether this can be improved.

Comment 3 RHEL Program Management 2022-05-19 07:25:27 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

