Bug 1886767 - Failed to open fedora/lvol0 for wiping and zeroing
Summary: Failed to open fedora/lvol0 for wiping and zeroing
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: python-blivet
Version: 34
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Blivet Maintenance Team
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-09 11:22 UTC by Vendula Poncova
Modified: 2022-06-07 20:08 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-07 20:08:36 UTC
Type: Bug
Embargoed:


Attachments
The kickstart file (6.81 KB, text/plain), 2020-10-09 11:23 UTC, Vendula Poncova
storage.log (157.93 KB, text/plain), 2020-10-09 11:24 UTC, Vendula Poncova
anaconda.log (22.72 KB, text/plain), 2020-10-09 11:25 UTC, Vendula Poncova
journalctl (789.68 KB, text/plain), 2020-10-09 11:25 UTC, Vendula Poncova
program.log (5.46 KB, text/plain), 2020-10-09 11:26 UTC, Vendula Poncova
lvm.log (437.43 KB, text/plain), 2020-10-12 09:41 UTC, Vendula Poncova
All logs (334.17 KB, application/x-bzip), 2020-10-12 09:43 UTC, Vendula Poncova


Links
Red Hat Bugzilla 1872695 (high, CLOSED): Cannot create LV with cache when PV is encrypted, last updated 2023-05-29 09:56:29 UTC

Description Vendula Poncova 2020-10-09 11:22:26 UTC
Description of problem:
Our kickstart tests for LVM cache are failing.

https://github.com/rhinstaller/kickstart-tests/blob/master/lvm-cache-1.ks.in
https://github.com/rhinstaller/kickstart-tests/blob/master/lvm-cache-2.ks.in

Version-Release number of selected component (if applicable):
anaconda-34.8-1.fc34.x86_64
python3-blivet-3.3.0-2.fc34.noarch

How reproducible:
always

Steps to Reproduce:
1. Create a VM with two 10GiB empty disks.
2. Start an automatic installation with the attached kickstart file.
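The two reproduction steps above can be scripted, for example with virt-install. This is only an illustrative sketch: the VM name, memory size, install-tree path, and kickstart filename are placeholders, not values from this report.

```shell
# Create a VM with two empty 10 GiB disks and start an automatic
# installation driven by the kickstart file (injected into the initrd
# and passed to Anaconda via inst.ks).
virt-install \
    --name lvm-cache-test \
    --memory 2048 \
    --disk size=10 --disk size=10 \
    --location /path/to/f34-install-tree \
    --initrd-inject ./lvm-cache.ks \
    --extra-args "inst.ks=file:/lvm-cache.ks"
```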

Actual results:
The installer fails to create the scheduled partitioning layout.

Expected results:
The installer creates the scheduled partitioning layout successfully.

Comment 1 Vendula Poncova 2020-10-09 11:23:41 UTC
Created attachment 1720210 [details]
The kickstart file

Comment 2 Vendula Poncova 2020-10-09 11:24:38 UTC
Created attachment 1720211 [details]
storage.log

Comment 3 Vendula Poncova 2020-10-09 11:25:02 UTC
Created attachment 1720212 [details]
anaconda.log

Comment 4 Vendula Poncova 2020-10-09 11:25:56 UTC
Created attachment 1720213 [details]
journalctl

Comment 5 Vendula Poncova 2020-10-09 11:26:43 UTC
Created attachment 1720214 [details]
program.log

Comment 6 Vendula Poncova 2020-10-09 11:28:40 UTC
From storage.log:

INFO:program:[34] Calling the 'com.redhat.lvmdbus1.Vg.CreateCachePool' method on the '/com/redhat/lvmdbus1/Vg/0' object with the following parameters: '('/com/redhat/lvmdbus1/Lv/2', '/com/redhat/lvmdbus1/Lv/1', 1, {'cachemode': <'writeback'>, '--config': <' devices { preferred_names=["^/dev/mapper/", "^/dev/md/", "^/dev/sd"] } log {level=7 file=/tmp/lvm.log syslog=0}'>})'
INFO:program:[34] Done.
INFO:program:[34] Got error: Failed to call the 'CreateCachePool' method on the '/com/redhat/lvmdbus1/Vg/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Vg', "Exit code 5, stderr =   WARNING: Converting fedora/home_cache and fedora/home_cache_meta to cache pool's data and metadata volumes with metadata wiping.\n  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)\n  Device open /dev/mapper/fedora-home_cache 253:3 failed errno 2\n  Device open /dev/mapper/fedora-home_cache 253:3 failed errno 2\n  Failed to open fedora/lvol0 for wiping and zeroing.\n  Aborting. Failed to wipe start of new LV.\n")
INFO:anaconda.threading:Thread Failed: AnaTaskThread-CreateStorageLayoutTask-1 (140243715561024)

The conversion aborts because LVM cannot open fedora/lvol0 for wiping: the device open of /dev/mapper/fedora-home_cache fails with errno 2 (ENOENT), so wiping the start of the new LV fails.
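For reference, the failing CreateCachePool D-Bus call corresponds roughly to the following lvm invocation. This is a sketch assuming the VG is named fedora and the data/metadata LVs are home_cache and home_cache_meta, as in the log above; it is not necessarily the exact command lvmdbusd runs internally.

```shell
# Convert home_cache (data) plus home_cache_meta (metadata) into a
# writeback cache pool. lvconvert wipes the start of the new pool LV,
# which is the step failing here ("Failed to open fedora/lvol0 for
# wiping and zeroing").
lvconvert --yes --type cache-pool \
    --cachemode writeback \
    --poolmetadata fedora/home_cache_meta \
    fedora/home_cache
```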

Comment 7 Vojtech Trefny 2020-10-09 12:09:21 UTC
Can you please also upload lvm.log from the crash? Thanks.

This might be the same issue we see on RHEL 8: https://bugzilla.redhat.com/show_bug.cgi?id=1872695

Comment 8 Vendula Poncova 2020-10-12 09:41:58 UTC
Created attachment 1720892 [details]
lvm.log

Comment 9 Vendula Poncova 2020-10-12 09:43:42 UTC
Created attachment 1720893 [details]
All logs

Comment 10 Zdenek Kabelac 2021-02-02 23:46:25 UTC
Most likely a duplicate of bug 1872695 and as such should be closed.

Comment 11 Ben Cotton 2021-02-09 15:20:10 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 34 development cycle.
Changing version to 34.

Comment 12 Ben Cotton 2022-05-12 14:59:34 UTC
This message is a reminder that Fedora Linux 34 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora Linux 34 on 2022-06-07.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
'version' of '34'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, change the 'version' 
to a later Fedora Linux version.

Thank you for reporting this issue, and we are sorry that we were not
able to fix it before Fedora Linux 34 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora Linux, you are encouraged to change the 'version' to a later release
before this bug is closed.

Comment 13 Ben Cotton 2022-06-07 20:08:36 UTC
Fedora Linux 34 entered end-of-life (EOL) status on 2022-06-07.

Fedora Linux 34 is no longer maintained, which means that it
will not receive any further security or bug fix updates. As a result we
are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release.

Thank you for reporting this bug and we are sorry it could not be fixed.

