Bug 2173972 - unable to xfs_grow resized vdo virt lv
Summary: unable to xfs_grow resized vdo virt lv
Keywords:
Status: POST
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: lvm2
Version: 9.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On: 2120738
Blocks:
 
Reported: 2023-02-28 14:41 UTC by Corey Marthaler
Modified: 2023-08-10 15:41 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2120738
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker RHELPLAN-150071 - Private: 0, Priority: None, Status: None, Summary: None, Last Updated: 2023-02-28 14:43:12 UTC

Description Corey Marthaler 2023-02-28 14:41:15 UTC
+++ This bug was initially created as a clone of Bug #2120738 +++

Description of problem:
This vdo virt volume had been resized using lvextend prior to this xfs_growfs attempt.
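A rough reproduction sketch (the exact lvextend invocation was not captured in this report; the sizes and the extension amount are illustrative, while device, VG, and LV names are taken from the output below):

# create a vdo pool with a larger virtual LV on top, then put xfs on it
lvcreate --type vdo -L 50G -V 100G -n vdo_lv vdo_sanity/vdo_pool /dev/sde1
mkfs.xfs /dev/vdo_sanity/vdo_lv
mount /dev/vdo_sanity/vdo_lv /mnt/vdo_lv

# extend only the virtual (logical) size of the vdo LV ...
lvextend -L +2G vdo_sanity/vdo_lv

# ... then try to grow the filesystem to match the new size
xfs_growfs /mnt/vdo_lv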

[root@hayes-01 ~]# lvs -a -o +devices,segtype
  LV               VG         Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices           Type
  snap             vdo_sanity swi-a-s---   6.00g          vdo_lv 0.03                                    /dev/sde1(12800)  linear
  vdo_lv           vdo_sanity owi-aos--- 101.95g vdo_pool                                                vdo_pool(0)       vdo
  vdo_pool         vdo_sanity dwi-------  50.00g                 8.08                                    vdo_pool_vdata(0) vdo-pool
  [vdo_pool_vdata] vdo_sanity Dwi-ao----  50.00g                                                         /dev/sde1(0)      linear

[root@hayes-01 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vdo_sanity-vdo_lv  100G  833M  100G   1% /mnt/vdo_lv


[root@hayes-01 ~]# xfs_growfs /mnt/vdo_lv
meta-data=/dev/mapper/vdo_sanity-vdo_lv isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Input/output error

Aug 23 11:03:20 hayes-01 kernel: kvdo621:logQ0: Completing read VIO for LBN 26726527 with error after launch: kvdo: Out of range (2049)
Aug 23 11:03:20 hayes-01 kernel: kvdo621:cpuQ1: mapToSystemError: mapping internal status code 2049 (kvdo: VDO_OUT_OF_RANGE: kvdo: Out of range) to EIO


Version-Release number of selected component (if applicable):
kernel-4.18.0-417.el8    BUILT: Wed Aug 10 15:40:43 CDT 2022

lvm2-2.03.14-6.el8    BUILT: Fri Jul 29 05:40:53 CDT 2022
lvm2-libs-2.03.14-6.el8    BUILT: Fri Jul 29 05:40:53 CDT 2022

vdo-6.2.7.17-14.el8    BUILT: Tue Jul 19 10:05:39 CDT 2022
kmod-kvdo-6.2.7.17-87.el8    BUILT: Thu Aug 11 13:47:21 CDT 2022


How reproducible:
Every time

--- Additional comment from corwin on 2022-08-23 17:52:51 UTC ---

I believe this is a mismatch in lvm's and vdo's perceptions of the logical size of the vdo device, probably due to a rounding error. In RHEL-9 vdo does more validation of the table line, so this mismatch is detected when the table is loaded rather than when I/O goes off the end of the device.
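One way to check for such a mismatch (a diagnostic sketch using the LV name from the output above) is to compare the logical size lvm records with the size in the loaded device-mapper table and the size the block layer exposes:

# logical size according to lvm metadata, in 512-byte sectors
lvs --noheadings --units s -o lv_size vdo_sanity/vdo_lv

# logical size in the active device-mapper table (second field of the vdo line)
dmsetup table vdo_sanity-vdo_lv

# size the block device actually exposes to the filesystem
blockdev --getsz /dev/mapper/vdo_sanity-vdo_lv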

--- Additional comment from Zdenek Kabelac on 2023-02-03 11:07:41 UTC ---

Question - was vdo_lv accidentally resized while it was inactive?

This has been prohibited by a recent commit:

https://listman.redhat.com/archives/lvm-devel/2023-January/024535.html

Since I see a snapshot for this LV, and there is no support for resizing an 'active' snapshot - but an inactive vdo LV cannot be resized either - my guess is that the older version of lvm allowed 'extending' the virtual size of an inactive vdo volume, and this was not tracked properly inside the vdo target.

With the above-mentioned patch included in the build, you should get an error when trying to resize inactive vdo LVs (unsure which version will include this patch).
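To illustrate the suspected sequence (a sketch only; the extension size is illustrative, and on builds containing the patch above the lvextend is expected to be rejected rather than silently accepted):

# deactivate the vdo virtual LV
lvchange -an vdo_sanity/vdo_lv

# attempt to extend the virtual size while the LV is inactive;
# older lvm2 allowed this without the vdo target tracking the new size
lvextend -L +2G vdo_sanity/vdo_lv

# reactivate and mount - the mismatch then surfaces on xfs_growfs
lvchange -ay vdo_sanity/vdo_lv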

--- Additional comment from Corey Marthaler on 2023-02-22 19:33:18 UTC ---

The current (latest) 9.2 build has the same behavior.

kernel-5.14.0-252.el9    BUILT: Wed Feb  1 03:30:10 PM CET 2023
lvm2-2.03.17-7.el9    BUILT: Thu Feb 16 03:24:54 PM CET 2023
lvm2-libs-2.03.17-7.el9    BUILT: Thu Feb 16 03:24:54 PM CET 2023


[root@virt-008 ~]#  df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vdo_sanity-vdo_lv    100G  2.2G   98G   3% /mnt/vdo_lv

[root@virt-008 ~]# xfs_growfs /mnt/vdo_lv
meta-data=/dev/mapper/vdo_sanity-vdo_lv isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Input/output error

Feb 22 20:28:54 virt-008 kernel: kvdo8:logQ0: Completing read vio for LBN 26726527 with error after launch: VDO Status: Out of range (1465)
Feb 22 20:28:54 virt-008 kernel: kvdo8:cpuQ1: vdo_map_to_system_error: mapping internal status code 1465 (VDO_OUT_OF_RANGE: VDO Status: Out of range) to EIO

Comment 3 Zdenek Kabelac 2023-07-17 13:12:33 UTC
The upstream patch is included in upstream release 2.03.19.
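For reference, on a build that includes the fix, one way to avoid the mismatch is to extend the active, mounted vdo LV and let lvm grow the filesystem in the same step (a sketch; the size is illustrative):

# extend the virtual LV and grow the xfs filesystem on it via fsadm/xfs_growfs
lvextend --resizefs -L +2G vdo_sanity/vdo_lv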

