This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to receive updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the email creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", will have a little "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). This same link will be available in a blue banner at the top of the page informing you that this bug has been migrated.
Bug 2081151 - lvm_import_vdo - vdoprepareforlvm: Failed to convert the UDS index for usage with LVM
Summary: lvm_import_vdo - vdoprepareforlvm: Failed to convert the UDS index for usage ...
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Assignee: LVM Team
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-02 21:16 UTC by Corey Marthaler
Modified: 2023-09-23 15:55 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-23 15:55:07 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker   RHEL-8269 0 None Migrated None 2023-09-23 15:54:57 UTC
Red Hat Issue Tracker RHELPLAN-120818 0 None None None 2022-05-02 21:18:15 UTC

Description Corey Marthaler 2022-05-02 21:16:39 UTC
Description of problem:
This should have been a straightforward conversion of a vdo stacked on md.

SCENARIO - multi_device_md_raid_lvm_import_vdo_convert:  Test the conversion of a vdo stack on a multi device md raid volume 
'echo y |  mdadm --create --verbose /dev/md/lvm_import_vdo_sanity --level=1 --raid-devices=2 /dev/sde1 /dev/sdd1'
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 1952840640K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/lvm_import_vdo_sanity started.
vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
Creating VDO lvm_import_vdo_sanity
      The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 11 volume is ready at /dev/mapper/lvm_import_vdo_sanity

lvm_import_vdo --yes /dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da
Stopping VDO lvm_import_vdo_sanity
vdo: ERROR - Device lvm_import_vdo_sanity could not be converted; vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
vdo: ERROR - vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
Converting VDO lvm_import_vdo_sanity
lvm_import_vdo failed to convert volume on top of md raid)


May  2 11:30:50 hayes-01 qarshd[428633]: Running cmdline: vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
May  2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: underlying device, REQ_FLUSH: supported, REQ_FUA: supported
May  2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: Using write policy async automatically.
May  2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: loading device 'lvm_import_vdo_sanity'
May  2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
May  2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: starting device 'lvm_import_vdo_sanity'
May  2 11:30:56 hayes-01 systemd[1]: Starting Start VDO volume backed by md127...
May  2 11:30:56 hayes-01 kernel: kvdo11:journalQ: VDO commencing normal operation
May  2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: Setting UDS index target state to online
May  2 11:30:56 hayes-01 kernel: uds: kvdo11:dedupeQ: creating index: dev=/dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da offset=4096 size=2781704192
May  2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' started
May  2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: resuming device 'lvm_import_vdo_sanity'
May  2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' resumed
May  2 11:30:57 hayes-01 kernel: kvdo11:packerQ: compression is enabled
May  2 11:30:57 hayes-01 UDS/vdodmeventd[428683]: INFO   (vdodmeventd/428683) VDO device lvm_import_vdo_sanity is now registered with dmeventd for monitoring
May  2 11:30:57 hayes-01 dmeventd[407530]: Monitoring VDO pool lvm_import_vdo_sanity.
May  2 11:30:57 hayes-01 vdo-by-dev[428676]: vdo: WARNING - VDO service lvm_import_vdo_sanity already started; no changes made
May  2 11:30:57 hayes-01 vdo[428676]: WARNING - VDO service lvm_import_vdo_sanity already started; no changes made
May  2 11:30:57 hayes-01 vdo-by-dev[428676]: Starting VDO lvm_import_vdo_sanity
May  2 11:30:57 hayes-01 vdo-by-dev[428676]: VDO instance 11 volume is ready at /dev/mapper/lvm_import_vdo_sanity
May  2 11:30:57 hayes-01 systemd[1]: Started Start VDO volume backed by md127.
May  2 11:30:57 hayes-01 systemd[1]: qarshd.104.49:5016-10.2.17.116:37910.service: Succeeded.
May  2 11:30:57 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.2.17.116:37932).
May  2 11:30:57 hayes-01 qarshd[428694]: Talking to peer ::ffff:10.2.17.116:37932 (IPv6)
May  2 11:30:58 hayes-01 qarshd[428694]: Running cmdline: cat /etc/vdoconf.yml
May  2 11:30:58 hayes-01 systemd[1]: qarshd.104.49:5016-10.2.17.116:37932.service: Succeeded.
May  2 11:30:58 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.2.17.116:37936).
May  2 11:30:58 hayes-01 qarshd[428699]: Talking to peer ::ffff:10.2.17.116:37936 (IPv6)
May  2 11:30:58 hayes-01 kernel: uds: kvdo11:dedupeQ: Using 16 indexing zones for concurrency.
May  2 11:30:59 hayes-01 qarshd[428699]: Running cmdline: lvm_import_vdo --yes /dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da
May  2 11:30:59 hayes-01 UDS/vdodmeventd[428775]: INFO   (vdodmeventd/428775) VDO device lvm_import_vdo_sanity is now unregistered from dmeventd
May  2 11:30:59 hayes-01 dmeventd[407530]: No longer monitoring VDO pool lvm_import_vdo_sanity.
May  2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: suspending device 'lvm_import_vdo_sanity'
May  2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' suspended
May  2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: stopping device 'lvm_import_vdo_sanity'
May  2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' stopped
May  2 11:30:59 hayes-01 UDS/vdoprepareforlvm[428784]: NOTICE (vdoprepareforlv/428784) loading index: /dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da offset=4096
May  2 11:30:59 hayes-01 UDS/vdoprepareforlvm[428784]: INFO   (vdoprepareforlv/428784) Using 1 indexing zone for concurrency.
May  2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR  (vdoprepareforlv/428784) index could not be loaded: UDS Error: Index not saved cleanly (1069)
May  2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: CRITICAL (vdoprepareforlv/428784) fatal error in makeIndex: UDS Error: Index not saved cleanly (1069)
May  2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR  (vdoprepareforlv/428784) failed to create index: Unrecoverable error: UDS Error: Index not saved cleanly (132141)
May  2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR  (vdoprepareforlv/428784) Failed to make router: Unrecoverable error: UDS Error: Index not saved cleanly (132141)
May  2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR  (vdoprepareforlv/428784) Failed loading index: Unrecoverable error: UDS Error: Index not saved cleanly (132141)
May  2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: WARN   (vdoprepareforlv/428784) Error closing index: UDS Error: Index session not known (1035)
May  2 11:31:00 hayes-01 vdo[428781]: ERROR - Device lvm_import_vdo_sanity could not be converted; vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
May  2 11:31:00 hayes-01 vdo[428781]: ERROR - vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly



Version-Release number of selected component (if applicable):
kernel-4.18.0-372.5.1.el8    BUILT: Mon Mar 28 10:29:22 CDT 2022
lvm2-2.03.14-3.el8    BUILT: Tue Jan  4 14:54:16 CST 2022
lvm2-libs-2.03.14-3.el8    BUILT: Tue Jan  4 14:54:16 CST 2022

vdo-6.2.6.14-14.el8    BUILT: Fri Feb 11 14:43:08 CST 2022
kmod-kvdo-6.2.6.14-84.el8    BUILT: Tue Mar 22 07:41:18 CDT 2022

Comment 1 Corey Marthaler 2022-05-02 21:35:37 UTC
This is reproducible, but it took me 9 iterations of this scenario to hit it again. Is there potentially a timing issue when the import is attempted too soon after the vdo create command finishes?

============================================================
Iteration 9 of 10 started at Mon May  2 16:31:46 2022
============================================================
SCENARIO - multi_device_md_raid_lvm_import_vdo_convert:  Test the conversion of a vdo stack on a multi device md raid volume 
'echo y |  mdadm --create --verbose /dev/md/lvm_import_vdo_sanity --level=1 --raid-devices=2 /dev/sdf1 /dev/sdg1'
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 1952840640K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/lvm_import_vdo_sanity started.
vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
Creating VDO lvm_import_vdo_sanity
      The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 30 volume is ready at /dev/mapper/lvm_import_vdo_sanity

lvm_import_vdo --yes /dev/disk/by-id/md-uuid-d852a530:976bf849:4a8ef1d1:e2728ade
Stopping VDO lvm_import_vdo_sanity
Converting VDO lvm_import_vdo_sanity
vdo: ERROR - Device lvm_import_vdo_sanity could not be converted; vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
vdo: ERROR - vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
lvm_import_vdo failed to convert volume on top of md raid)

Comment 2 Corey Marthaler 2022-05-03 18:22:05 UTC
This issue appears to go away with an added 10-second sleep between the md creation, the vdo creation, and the import attempt.
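The workaround above can be sketched as the following shell sequence. This is a sketch of the test scenario with the sleeps inserted, not a verified fix; the partition names (/dev/sde1, /dev/sdd1) are illustrative, and the commands require root and real block devices:

```shell
#!/bin/sh
# Sketch of the test scenario with the 10-second-sleep workaround applied.
# The sleeps give each layer time to settle -- in particular, they appear to
# let VDO save the UDS index cleanly before lvm_import_vdo tries to convert it.
set -e

# 1. Create the md raid1 array backing the VDO volume.
echo y | mdadm --create --verbose /dev/md/lvm_import_vdo_sanity \
    --level=1 --raid-devices=2 /dev/sde1 /dev/sdd1
sleep 10

# 2. Create the VDO volume on top of the md array.
vdo create --force --name lvm_import_vdo_sanity \
    --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
sleep 10

# 3. Convert the VDO volume for management under LVM.
lvm_import_vdo --yes /dev/md/lvm_import_vdo_sanity
```

With the sleeps removed, the conversion intermittently fails with "UDS Error: Index not saved cleanly", as shown in the logs above.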

Comment 4 RHEL Program Management 2023-09-23 15:53:06 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 5 RHEL Program Management 2023-09-23 15:55:07 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

