This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.

RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and have "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2120738 - unable to xfs_grow resized vdo virt lv
Summary: unable to xfs_grow resized vdo virt lv
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.7
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks: 2173972
 
Reported: 2022-08-23 16:16 UTC by Corey Marthaler
Modified: 2023-09-23 18:23 UTC (History)
7 users (show)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2173972
Environment:
Last Closed: 2023-09-23 18:23:43 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: none


Links:
  Red Hat Issue Tracker RHEL-8293 (status: Migrated; last updated 2023-09-23 18:23:37 UTC)
  Red Hat Issue Tracker RHELPLAN-132031 (last updated 2022-08-23 16:25:56 UTC)

Description Corey Marthaler 2022-08-23 16:16:11 UTC
Description of problem:
This vdo virt volume had been resized using lvextend prior to this xfs_growfs attempt.
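
(The resize command itself is not shown in the report; a representative invocation, assuming the LV names from the listing below and the 101.95g size shown there, would be something like:

  lvextend -L 101.95g vdo_sanity/vdo_lv    # extend the VDO virtual LV past its original 100g

The exact size and options used are an assumption.)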

[root@hayes-01 ~]# lvs -a -o +devices,segtype
  LV               VG         Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices           Type
  snap             vdo_sanity swi-a-s---   6.00g          vdo_lv 0.03                                    /dev/sde1(12800)  linear
  vdo_lv           vdo_sanity owi-aos--- 101.95g vdo_pool                                                vdo_pool(0)       vdo
  vdo_pool         vdo_sanity dwi-------  50.00g                 8.08                                    vdo_pool_vdata(0) vdo-pool
  [vdo_pool_vdata] vdo_sanity Dwi-ao----  50.00g                                                         /dev/sde1(0)      linear

[root@hayes-01 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vdo_sanity-vdo_lv  100G  833M  100G   1% /mnt/vdo_lv


[root@hayes-01 ~]# xfs_growfs /mnt/vdo_lv
meta-data=/dev/mapper/vdo_sanity-vdo_lv isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Input/output error

Aug 23 11:03:20 hayes-01 kernel: kvdo621:logQ0: Completing read VIO for LBN 26726527 with error after launch: kvdo: Out of range (2049)
Aug 23 11:03:20 hayes-01 kernel: kvdo621:cpuQ1: mapToSystemError: mapping internal status code 2049 (kvdo: VDO_OUT_OF_RANGE: kvdo: Out of range) to EIO


Version-Release number of selected component (if applicable):
kernel-4.18.0-417.el8    BUILT: Wed Aug 10 15:40:43 CDT 2022

lvm2-2.03.14-6.el8    BUILT: Fri Jul 29 05:40:53 CDT 2022
lvm2-libs-2.03.14-6.el8    BUILT: Fri Jul 29 05:40:53 CDT 2022

vdo-6.2.7.17-14.el8    BUILT: Tue Jul 19 10:05:39 CDT 2022
kmod-kvdo-6.2.7.17-87.el8    BUILT: Thu Aug 11 13:47:21 CDT 2022


How reproducible:
Everytime

Comment 1 corwin 2022-08-23 17:52:51 UTC
I believe this is a mismatch in lvm's and vdo's perceptions of the logical size of the vdo device, probably due to a rounding error. In RHEL-9 vdo does more validation of the table line, so this mismatch is detected when the table is loaded rather than when I/O goes off the end of the device.
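
(One way to look for such a mismatch, sketched with the device names from this report; the "-vpool" suffix for the pool's dm device and the vdostats field names are assumptions that may vary by version:

  # logical size of the vdo device as lvm loaded it, in 512-byte sectors
  dmsetup table vdo_sanity-vdo_pool-vpool

  # logical size as the vdo target itself records it, in 4 KiB blocks
  vdostats --verbose /dev/mapper/vdo_sanity-vdo_pool-vpool | grep -i 'logical blocks'

If the sector count in the dm table does not equal the vdo logical block count times 8, I/O near the end of the device falls off the end of vdo's logical space, as seen in the kernel messages above.)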

Comment 2 Zdenek Kabelac 2023-02-03 11:07:41 UTC
Question: was vdo_lv accidentally resized while it was inactive?

This has been prohibited by a recent commit:

https://listman.redhat.com/archives/lvm-devel/2023-January/024535.html

Since I see a snapshot of this LV, and there is no support for resizing an 'active' snapshotted LV, while an inactive vdo LV cannot be resized either, my guess is that an older version of lvm allowed the virtual size of an inactive vdo volume to be 'extended', and that this was not tracked properly inside the vdo target.

With the above-mentioned patch included in the build, you should get an error when trying to resize an inactive vdo LV (unsure which version will include this patch).
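
(Roughly, with the patch applied, the previously-allowed sequence should now fail at the lvextend step instead of silently desynchronizing the sizes; the expected refusal below is illustrative, not quoted from a build:

  lvchange -an vdo_sanity/vdo_lv       # deactivate the VDO virtual LV
  lvextend -L +2g vdo_sanity/vdo_lv    # with the patch: refused, since the vdo target
                                       # cannot track a resize while inactive
)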

Comment 3 Corey Marthaler 2023-02-22 19:33:18 UTC
The current (latest) 9.2 build has the same behavior.

kernel-5.14.0-252.el9    BUILT: Wed Feb  1 03:30:10 PM CET 2023
lvm2-2.03.17-7.el9    BUILT: Thu Feb 16 03:24:54 PM CET 2023
lvm2-libs-2.03.17-7.el9    BUILT: Thu Feb 16 03:24:54 PM CET 2023


[root@virt-008 ~]#  df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vdo_sanity-vdo_lv    100G  2.2G   98G   3% /mnt/vdo_lv

[root@virt-008 ~]# xfs_growfs /mnt/vdo_lv
meta-data=/dev/mapper/vdo_sanity-vdo_lv isize=512    agcount=4, agsize=6553600 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Input/output error

Feb 22 20:28:54 virt-008 kernel: kvdo8:logQ0: Completing read vio for LBN 26726527 with error after launch: VDO Status: Out of range (1465)
Feb 22 20:28:54 virt-008 kernel: kvdo8:cpuQ1: vdo_map_to_system_error: mapping internal status code 1465 (VDO_OUT_OF_RANGE: VDO Status: Out of range) to EIO

Comment 8 RHEL Program Management 2023-09-23 18:22:11 UTC
Issue migration from Bugzilla to Jira is in progress at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 9 RHEL Program Management 2023-09-23 18:23:43 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

