Description of problem:
This should have been a straightforward conversion of a vdo stacked on md.

SCENARIO - multi_device_md_raid_lvm_import_vdo_convert: Test the conversion of a vdo stack on a multi device md raid volume

'echo y | mdadm --create --verbose /dev/md/lvm_import_vdo_sanity --level=1 --raid-devices=2 /dev/sde1 /dev/sdd1'
mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90
mdadm: size set to 1952840640K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/lvm_import_vdo_sanity started.

vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
Creating VDO lvm_import_vdo_sanity
The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 11 volume is ready at /dev/mapper/lvm_import_vdo_sanity

lvm_import_vdo --yes /dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da
Stopping VDO lvm_import_vdo_sanity
vdo: ERROR - Device lvm_import_vdo_sanity could not be converted; vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
vdo: ERROR - vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
Converting VDO lvm_import_vdo_sanity
lvm_import_vdo failed to convert volume on top of md raid)

May 2 11:30:50 hayes-01 qarshd[428633]: Running cmdline: vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
May 2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: underlying device, REQ_FLUSH: supported, REQ_FUA: supported
May 2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: Using write policy async automatically.
May 2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: loading device 'lvm_import_vdo_sanity'
May 2 11:30:55 hayes-01 kernel: kvdo11:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
May 2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: starting device 'lvm_import_vdo_sanity'
May 2 11:30:56 hayes-01 systemd[1]: Starting Start VDO volume backed by md127...
May 2 11:30:56 hayes-01 kernel: kvdo11:journalQ: VDO commencing normal operation
May 2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: Setting UDS index target state to online
May 2 11:30:56 hayes-01 kernel: uds: kvdo11:dedupeQ: creating index: dev=/dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da offset=4096 size=2781704192
May 2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' started
May 2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: resuming device 'lvm_import_vdo_sanity'
May 2 11:30:56 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' resumed
May 2 11:30:57 hayes-01 kernel: kvdo11:packerQ: compression is enabled
May 2 11:30:57 hayes-01 UDS/vdodmeventd[428683]: INFO (vdodmeventd/428683) VDO device lvm_import_vdo_sanity is now registered with dmeventd for monitoring
May 2 11:30:57 hayes-01 dmeventd[407530]: Monitoring VDO pool lvm_import_vdo_sanity.
May 2 11:30:57 hayes-01 vdo-by-dev[428676]: vdo: WARNING - VDO service lvm_import_vdo_sanity already started; no changes made
May 2 11:30:57 hayes-01 vdo[428676]: WARNING - VDO service lvm_import_vdo_sanity already started; no changes made
May 2 11:30:57 hayes-01 vdo-by-dev[428676]: Starting VDO lvm_import_vdo_sanity
May 2 11:30:57 hayes-01 vdo-by-dev[428676]: VDO instance 11 volume is ready at /dev/mapper/lvm_import_vdo_sanity
May 2 11:30:57 hayes-01 systemd[1]: Started Start VDO volume backed by md127.
May 2 11:30:57 hayes-01 systemd[1]: qarshd.104.49:5016-10.2.17.116:37910.service: Succeeded.
May 2 11:30:57 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.2.17.116:37932).
May 2 11:30:57 hayes-01 qarshd[428694]: Talking to peer ::ffff:10.2.17.116:37932 (IPv6)
May 2 11:30:58 hayes-01 qarshd[428694]: Running cmdline: cat /etc/vdoconf.yml
May 2 11:30:58 hayes-01 systemd[1]: qarshd.104.49:5016-10.2.17.116:37932.service: Succeeded.
May 2 11:30:58 hayes-01 systemd[1]: Started qarsh Per-Connection Server (10.2.17.116:37936).
May 2 11:30:58 hayes-01 qarshd[428699]: Talking to peer ::ffff:10.2.17.116:37936 (IPv6)
May 2 11:30:58 hayes-01 kernel: uds: kvdo11:dedupeQ: Using 16 indexing zones for concurrency.
May 2 11:30:59 hayes-01 qarshd[428699]: Running cmdline: lvm_import_vdo --yes /dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da
May 2 11:30:59 hayes-01 UDS/vdodmeventd[428775]: INFO (vdodmeventd/428775) VDO device lvm_import_vdo_sanity is now unregistered from dmeventd
May 2 11:30:59 hayes-01 dmeventd[407530]: No longer monitoring VDO pool lvm_import_vdo_sanity.
May 2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: suspending device 'lvm_import_vdo_sanity'
May 2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' suspended
May 2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: stopping device 'lvm_import_vdo_sanity'
May 2 11:30:59 hayes-01 kernel: kvdo11:dmsetup: device 'lvm_import_vdo_sanity' stopped
May 2 11:30:59 hayes-01 UDS/vdoprepareforlvm[428784]: NOTICE (vdoprepareforlv/428784) loading index: /dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da offset=4096
May 2 11:30:59 hayes-01 UDS/vdoprepareforlvm[428784]: INFO (vdoprepareforlv/428784) Using 1 indexing zone for concurrency.
May 2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR (vdoprepareforlv/428784) index could not be loaded: UDS Error: Index not saved cleanly (1069)
May 2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: CRITICAL (vdoprepareforlv/428784) fatal error in makeIndex: UDS Error: Index not saved cleanly (1069)
May 2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR (vdoprepareforlv/428784) failed to create index: Unrecoverable error: UDS Error: Index not saved cleanly (132141)
May 2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR (vdoprepareforlv/428784) Failed to make router: Unrecoverable error: UDS Error: Index not saved cleanly (132141)
May 2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: ERROR (vdoprepareforlv/428784) Failed loading index: Unrecoverable error: UDS Error: Index not saved cleanly (132141)
May 2 11:31:00 hayes-01 UDS/vdoprepareforlvm[428784]: WARN (vdoprepareforlv/428784) Error closing index: UDS Error: Index session not known (1035)
May 2 11:31:00 hayes-01 vdo[428781]: ERROR - Device lvm_import_vdo_sanity could not be converted; vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
May 2 11:31:00 hayes-01 vdo[428781]: ERROR - vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly

Version-Release number of selected component (if applicable):
kernel-4.18.0-372.5.1.el8    BUILT: Mon Mar 28 10:29:22 CDT 2022
lvm2-2.03.14-3.el8    BUILT: Tue Jan 4 14:54:16 CST 2022
lvm2-libs-2.03.14-3.el8    BUILT: Tue Jan 4 14:54:16 CST 2022
vdo-6.2.6.14-14.el8    BUILT: Fri Feb 11 14:43:08 CST 2022
kmod-kvdo-6.2.6.14-84.el8    BUILT: Tue Mar 22 07:41:18 CDT 2022
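Condensed from the logs above, the failing sequence is just these three commands (the device names /dev/sde1 and /dev/sdd1 and the md array UUID are specific to this run):

  echo y | mdadm --create --verbose /dev/md/lvm_import_vdo_sanity \
      --level=1 --raid-devices=2 /dev/sde1 /dev/sdd1

  vdo create --force --name lvm_import_vdo_sanity \
      --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity

  # Converting immediately after the create intermittently fails with
  # "UDS Error: Index not saved cleanly".
  lvm_import_vdo --yes /dev/disk/by-id/md-uuid-d31d6734:f9e1b5da:18436589:110ca7da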
This is reproducible, but it took me 9 iterations of this scenario to hit it again. Is there potentially a timing issue when the import is attempted too soon after the vdo create command finishes?

============================================================
Iteration 9 of 10 started at Mon May 2 16:31:46 2022
============================================================
SCENARIO - multi_device_md_raid_lvm_import_vdo_convert: Test the conversion of a vdo stack on a multi device md raid volume

'echo y | mdadm --create --verbose /dev/md/lvm_import_vdo_sanity --level=1 --raid-devices=2 /dev/sdf1 /dev/sdg1'
mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90
mdadm: size set to 1952840640K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/lvm_import_vdo_sanity started.

vdo create --force --name lvm_import_vdo_sanity --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
Creating VDO lvm_import_vdo_sanity
The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Starting VDO lvm_import_vdo_sanity
Starting compression on VDO lvm_import_vdo_sanity
VDO instance 30 volume is ready at /dev/mapper/lvm_import_vdo_sanity

lvm_import_vdo --yes /dev/disk/by-id/md-uuid-d852a530:976bf849:4a8ef1d1:e2728ade
Stopping VDO lvm_import_vdo_sanity
Converting VDO lvm_import_vdo_sanity
vdo: ERROR - Device lvm_import_vdo_sanity could not be converted; vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
vdo: ERROR - vdoprepareforlvm: Failed to convert the UDS index for usage with LVM: UDS Error: Index not saved cleanly
lvm_import_vdo failed to convert volume on top of md raid)
This issue appears to go away when a 10-second sleep is added between the md creation, the vdo creation, and the import attempt, as in the sketch below.
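A sketch of the scenario with the workaround applied (commands are the same as in iteration 9 above; the only changes are the two sleeps, and the mdadm --detail/awk lookup is my shorthand for the per-array /dev/disk/by-id/md-uuid-* path, since the UUID differs on every run):

  echo y | mdadm --create --verbose /dev/md/lvm_import_vdo_sanity \
      --level=1 --raid-devices=2 /dev/sdf1 /dev/sdg1
  sleep 10   # settle time after md creation

  vdo create --force --name lvm_import_vdo_sanity \
      --vdoLogicalSize 500G --device /dev/md/lvm_import_vdo_sanity
  sleep 10   # settle time after vdo creation; without this the conversion
             # intermittently fails with "UDS Error: Index not saved cleanly"

  # Resolve this array's UUID to build the by-id path used by the test.
  uuid=$(mdadm --detail /dev/md/lvm_import_vdo_sanity | awk '/UUID/ {print $3}')
  lvm_import_vdo --yes "/dev/disk/by-id/md-uuid-${uuid}"

Whether 10 seconds is the true threshold is unknown; it is simply the delay with which the failure no longer reproduced here.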