Bug 2309350

Summary: [Fedora41][FW1060.11][BTC] Reinstallation failing on the guest that was having LVM THIN POOL!
Product: Fedora
Reporter: IBM Bug Proxy <bugproxy>
Component: lvm2
Assignee: Zdenek Kabelac <zkabelac>
Status: CLOSED ERRATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: high
Docs Contact:
Priority: unspecified
Version: 41
CC: agk, anaconda-maint, anprice, aperotti, blivet-maint-list, bmarzins, bmr, bugproxy, cfeist, dan, dlehman, kkoukiou, kzak, lvm-team, mcsontos, mkolman, prajnoha, rvykydal, sarwrigh, slavik.vladimir, sthoufee, vtrefny, w, zkabelac
Target Milestone: ---
Target Release: ---
Hardware: ppc64le
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2025-03-06 17:59:54 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
- GUEST XML (whole disk based)
- storage.log
- anaconda.log
- Attaching all the logs here in a zip folder
- successful installation logs
- All the log files

Description IBM Bug Proxy 2024-09-03 06:40:30 UTC

Comment 1 IBM Bug Proxy 2024-09-03 06:40:42 UTC
== Comment: #0 - Anushree Mathur <Anushree.Mathur2> - 2024-09-01 15:38:03 ==
HOST DETAILS:
OS: Fedora41
uname -a
Linux localhost.localdomain 6.11.0-0.rc5.43.fc41.ppc64le

libvirtd --version
libvirtd (libvirt) 10.6.0

qemu-system-ppc64 --version
QEMU emulator version 9.0.93 (qemu-9.1.0-0.2.rc3.fc41)
Copyright (c) 2003-2024 Fabrice Bellard and the QEMU Project developers


GUEST DETAILS:
OS: Fedora41
uname -a
Linux localhost.localdomain 6.11.0-0.rc5.43.fc41.ppc64le

cat /proc/cmdline 
BOOT_IMAGE=(ieee1275/disk1,msdos2)/vmlinuz-6.11.0-0.rc5.43.fc41.ppc64le root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root crashkernel=4096M xive=off

The guest was already installed with an LVM thin pool. I tried reinstallation on the same guest and it failed as follows in two scenarios:
i) when the guest was installed with a whole disk:

5) [!] Installation Destination          6) [x] Network configuration                                
       (Processing...)                          (Connected: enp0s1)                                  
7) [!] Root password                     8) [!] User creation                                        
       (Root account is disabled)               (No user will be created)                            
                                                                                                     
Please make a selection from the above ['b' to begin installation, 'q' to quit,                      
'r' to refresh]:                                                                                     
An unknown error has occured, look at the /tmp/anaconda-tb* file(s) for more details                 
                                                                                                     
                                                                                                     
n _get_method_reply                                                                                  
    return self._handle_method_error(error)                                                          
  File "/usr/lib/python3.13/site-packages/dasbus/client/handler.py", line 450, in _call_method       
    return self._get_method_reply(                                                                   
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/common/task/__init__.py", line 46, in sync_run_task
    task_proxy.Finish()                                                                              
  File "/usr/lib64/python3.13/site-packages/pyanaconda/ui/lib/storage.py", line 96, in reset_storage 
    sync_run_task(task_proxy)                                                                        
  File "/usr/lib64/python3.13/threading.py", line 992, in run                                        
    self._target(*self._args, **self._kwargs)                                                        
  File "/usr/lib64/python3.13/site-packages/pyanaconda/core/threads.py", line 280, in run            
    threading.Thread.run(self)                                                                       
pyanaconda.modules.common.errors.general.AnacondaError: Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusEx
ception: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora/pool00 failed (status:64). Manual repair required!')


What do you want to do now?                        
1) Report Bug                                      
2) Run shell                                       
3) Debug                                           
4) Quit   

ii) when the guest was installed with qcow2 disks:
Error

 An error occurred while activating your storage configuration.
 
 Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0'
 object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException:
 ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Activation of logical volume
 fedora/pool00 is prohibited while logical volume fedora/pool00_tmeta is
 active.')

Press ENTER to exit: 

Steps to reproduce:
1) virsh define guest
2) virsh start guest --console
3) Installation will start.
4) Choose the LVM thin provisioning option in automatic disk partitioning.
5) After installation, restart the installation on the same guest.

Actual output:
Installation fails once it reaches the installer, as pasted above.

Expected output:
The guest should install successfully.
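The reproduction steps above can be sketched as a shell transcript (the guest name and XML path are placeholders, not taken from this report):

```shell
# Define the guest from its XML and start it with a console attached
# (names and paths are illustrative).
virsh define /path/to/guest.xml
virsh start guest-name --console

# In the text installer, choose automatic partitioning with the
# LVM Thin Provisioning scheme and let the installation finish.

# Boot the installed guest once, then restart the installation against
# the same disk; with the affected lvm2, the second run fails while
# activating fedora/pool00.
```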

Thanks,
Anushree Mathur

== Comment: #2 - Anushree Mathur <Anushree.Mathur2> - 2024-09-03 00:59:52 ==


== Comment: #3 - Anushree Mathur <Anushree.Mathur2> - 2024-09-03 01:01:15 ==
Hi Seeteena,
Attaching storage and anaconda logs here.

Thanks

== Comment: #4 - SEETEENA THOUFEEK <sthoufee.com> - 2024-09-03 01:36:28 ==
DEBUG:anaconda.modules.storage.storage:Created the partitioning AUTOMATIC.
DEBUG:dasbus.connection:Publishing an object at /org/fedoraproject/Anaconda/Modules/Storage/Partitioning/1.
INFO:anaconda.core.threads:Thread Failed: AnaTaskThread-ScanDevicesTask-1 (140735506608384)
ERROR:anaconda.modules.common.task.task:Thread AnaTaskThread-ScanDevicesTask-1 has failed: Traceback (most recent call last):
  File "/usr/lib64/python3.13/site-packages/gi/overrides/BlockDev.py", line 1250, in wrapped
    ret = orig_obj(*args, **kwargs)
  File "/usr/lib64/python3.13/site-packages/gi/overrides/BlockDev.py", line 865, in lvm_lvactivate
    return _lvm_lvactivate(vg_name, lv_name, ignore_skip, shared, extra)
gi.repository.GLib.GError: g-io-error-quark: Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora/pool00 failed (status:64). Manual repair required!') (36)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2763, in _setup
    blockdev.lvm.lvactivate(self.vg.name, self._name, ignore_skip=ignore_skip_activation)
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.13/site-packages/gi/overrides/BlockDev.py", line 1272, in wrapped
    raise transform[1](msg)
gi.overrides.BlockDev.LVMError: Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora/pool00 failed (status:64). Manual repair required!')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.13/site-packages/pyanaconda/core/threads.py", line 280, in run
    threading.Thread.run(self)
    ~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/usr/lib64/python3.13/threading.py", line 992, in run
    self._target(*self._args, **self._kwargs)
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/common/task/task.py", line 94, in _thread_run_callback
    self._task_run_callback()
    ~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/common/task/task.py", line 107, in _task_run_callback
    self._set_result(self.run())
                     ~~~~~~~~^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/storage/reset.py", line 64, in run
    self._reset_storage(self._storage)
    ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/storage/reset.py", line 84, in _reset_storage
    storage.reset()
    ~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/storage/devicetree/model.py", line 265, in reset
    super().reset(cleanup_only=cleanup_only)
    ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/blivet.py", line 156, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)
    ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 444, in populate
    self._populate()
    ~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 488, in _populate
    self.handle_device(dev)
    ~~~~~~~~~~~~~~~~~~^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 307, in handle_device
    self.handle_format(info, device)
    ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 335, in handle_format
    helper_class(self, info, device).run()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 434, in run
    self._update_lvs()
    ~~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 340, in _update_lvs
    new_lv = add_lv(lv)
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 238, in add_lv
    add_required_lv(pool_device_name, "failed to look up thin pool")
    ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 169, in add_required_lv
    new_lv = add_lv(lv_info[name])
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 289, in add_lv
    lv_device.setup()
    ~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2599, in decorated
    return meth(self, *args, **kwargs)  # pylint: disable=not-callable
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2721, in setup
    return DMDevice.setup(self, orig)
           ~~~~~~~~~~~~~~^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/devices/storage.py", line 456, in setup
    self._setup(orig=orig)
    ~~~~~~~~~~~^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2765, in _setup
    raise errors.LVMError(err)
blivet.errors.LVMError: Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora/pool00 failed (status:64). Manual repair required!')

INFO:anaconda.core.threads:Thread Done: AnaTaskThread-ScanDevicesTask-1 (140735506608384)
WARNING:dasbus.server.handler:The call org.fedoraproject.Anaconda.Task.Finish has failed with an exception:
Traceback (most recent call last):
  File "/usr/lib64/python3.13/site-packages/gi/overrides/BlockDev.py", line 1250, in wrapped
    ret = orig_obj(*args, **kwargs)
  File "/usr/lib64/python3.13/site-packages/gi/overrides/BlockDev.py", line 865, in lvm_lvactivate
    return _lvm_lvactivate(vg_name, lv_name, ignore_skip, shared, extra)
gi.repository.GLib.GError: g-io-error-quark: Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora/pool00 failed (status:64). Manual repair required!') (36)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2763, in _setup
    blockdev.lvm.lvactivate(self.vg.name, self._name, ignore_skip=ignore_skip_activation)
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.13/site-packages/gi/overrides/BlockDev.py", line 1272, in wrapped
    raise transform[1](msg)
gi.overrides.BlockDev.LVMError: Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora/pool00 failed (status:64). Manual repair required!')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/dasbus/server/handler.py", line 455, in _method_callback
    result = self._handle_call(
        interface_name,
    ...<2 lines>...
        **additional_args
    )
  File "/usr/lib/python3.13/site-packages/dasbus/server/handler.py", line 265, in _handle_call
    return handler(*parameters, **additional_args)
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/common/task/task_interface.py", line 114, in Finish
    self.implementation.finish()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/common/task/task.py", line 173, in finish
    thread_manager.raise_if_error(self._thread_name)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/core/threads.py", line 171, in raise_if_error
    raise exc_info[1]
  File "/usr/lib64/python3.13/site-packages/pyanaconda/core/threads.py", line 280, in run
    threading.Thread.run(self)
    ~~~~~~~~~~~~~~~~~~~~^^^^^^
  File "/usr/lib64/python3.13/threading.py", line 992, in run
    self._target(*self._args, **self._kwargs)
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/common/task/task.py", line 94, in _thread_run_callback
    self._task_run_callback()
    ~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/common/task/task.py", line 107, in _task_run_callback
    self._set_result(self.run())
                     ~~~~~~~~^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/storage/reset.py", line 64, in run
    self._reset_storage(self._storage)
    ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/storage/reset.py", line 84, in _reset_storage
    storage.reset()
    ~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib64/python3.13/site-packages/pyanaconda/modules/storage/devicetree/model.py", line 265, in reset
    super().reset(cleanup_only=cleanup_only)
    ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/blivet.py", line 156, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)
    ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 444, in populate
    self._populate()
    ~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 488, in _populate
    self.handle_device(dev)
    ~~~~~~~~~~~~~~~~~~^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 307, in handle_device
    self.handle_format(info, device)
    ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/populator/populator.py", line 335, in handle_format
    helper_class(self, info, device).run()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 434, in run
    self._update_lvs()
    ~~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 340, in _update_lvs
    new_lv = add_lv(lv)
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 238, in add_lv
    add_required_lv(pool_device_name, "failed to look up thin pool")
    ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 169, in add_required_lv
    new_lv = add_lv(lv_info[name])
  File "/usr/lib/python3.13/site-packages/blivet/populator/helpers/lvm.py", line 289, in add_lv
    lv_device.setup()
    ~~~~~~~~~~~~~~~^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2599, in decorated
    return meth(self, *args, **kwargs)  # pylint: disable=not-callable
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2721, in setup
    return DMDevice.setup(self, orig)
           ~~~~~~~~~~~~~~^^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/devices/storage.py", line 456, in setup
    self._setup(orig=orig)
    ~~~~~~~~~~~^^^^^^^^^^^
  File "/usr/lib/python3.13/site-packages/blivet/threads.py", line 49, in run_with_lock
    return m(*args, **kwargs)
  File "/usr/lib/python3.13/site-packages/blivet/devices/lvm.py", line 2765, in _setup
    raise errors.LVMError(err)
blivet.errors.LVMError: Failed to call the 'Activate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora/pool00 failed (status:64). Manual repair required!')
INFO:anaconda.modules.storage.bootloader.factory:Created the boot loader IPSeriesGRUB2.
DEBUG:anaconda.modules.common.base.base:Generating kickstart...

Comment 2 IBM Bug Proxy 2024-09-03 06:40:58 UTC
Created attachment 2045254 [details]
GUEST XML(whole disk based)

Comment 3 IBM Bug Proxy 2024-09-03 06:41:00 UTC
Created attachment 2045255 [details]
storage.log

Comment 4 IBM Bug Proxy 2024-09-03 06:41:02 UTC
Created attachment 2045256 [details]
anaconda.log

Comment 5 IBM Bug Proxy 2024-09-03 06:41:05 UTC
------- Comment From sthoufee.com 2024-09-03 02:38 EDT-------
The issue is seen on Fedora 41.

Comment 6 Katerina Koukiou 2024-09-04 13:00:02 UTC
This is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1439744 and there is a Fedora QA test case for it (http://fedoraproject.org/wiki/QA:Testcase_partitioning_custom_lvmthin
)

Keeping it open as the old report is EOL Closed.

Comment 7 Katerina Koukiou 2024-09-04 15:14:42 UTC
Vojtech, can you please check this bug? At first look it sounds like the LVM pool is corrupt/damaged from the previous installation. However, it's not the first bug with a clean reproducer for this.

Comment 8 Vojtech Trefny 2024-09-06 12:29:32 UTC
> From first look it sounds like the LVM pool is corrupt/damaged from the previous installation.

Yes, that's probably right. We'll need more logs to find out what corrupted the pool, and how. From storage.log it looks like the LVs were not activated during boot, so the pool was probably corrupted even before the second installation started.

IBM, can you please also upload lvm.log and the journal from the failed installation (both are also stored in /tmp during the installation)? If you try to boot the installed system, does it work?

Comment 9 IBM Bug Proxy 2024-09-09 09:40:49 UTC
------- Comment From Anushree.Mathur2 2024-09-09 05:33 EDT-------
Hi,
Thanks for looking into this. I am recreating the issue, will upload all the remaining logs too!

Comment 10 IBM Bug Proxy 2024-09-09 10:10:58 UTC
------- Comment From Anushree.Mathur2 2024-09-09 06:07 EDT-------
Copied the logs to the location mentioned below:
scp -r /tmp dump.stglabs.ibm.com:/home/dump/dumps/BZ_208798/.

Anushree Mathur

Comment 11 IBM Bug Proxy 2024-09-10 06:40:52 UTC
Created attachment 2046040 [details]
Attaching all the logs here in a zip folder


------- Comment on attachment From Anushree.Mathur2 2024-09-10 02:36 EDT-------


Password for this zip folder is : 123456

Thanks,
Anushree Mathur

Comment 12 IBM Bug Proxy 2024-09-24 06:50:37 UTC
------- Comment From sthoufee.com 2024-09-24 02:40 EDT-------
distro, any update here?

Comment 13 Vojtech Trefny 2024-09-24 12:59:41 UTC
Thank you for the logs. It looks like the thinpool is corrupted even before the installation starts:

Sep 09 09:56:09 localhost dracut-initqueue[2263]: Scanning devices sda3  for LVM volume groups
Sep 09 09:56:09 localhost dracut-initqueue[2286]: Found volume group "fedora" using metadata type lvm2
Sep 09 09:56:10 localhost dracut-initqueue[2288]: Check of pool fedora/pool00 failed (status:64). Manual repair required!

Can you please upload the same logs, but from the first (successful) installation? Also, if you try to boot the installed system, does it boot?
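For reference, the "Manual repair required!" message above points at lvm2's manual thin-pool repair flow. A minimal sketch of what that usually involves, assuming the volume names from the error message and a rescue shell with lvm2 and device-mapper-persistent-data installed (these are not steps taken in this report):

```shell
# Make sure the pool and its thin LVs are inactive before repairing.
lvchange -an fedora/pool00

# Run the metadata repair; lvm2 invokes thin_repair for thin pools and
# keeps the old metadata in a backup LV (pool00_meta0 or similar).
lvconvert --repair fedora/pool00

# Try activating again; thin_check runs as part of activation,
# so a clean activation confirms the metadata is consistent.
lvchange -ay fedora/pool00
lvs -a fedora
```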

Comment 14 IBM Bug Proxy 2024-09-27 05:00:36 UTC
Created attachment 2049034 [details]
successful installation logs


------- Comment on attachment From Anushree.Mathur2 2024-09-27 00:58 EDT-------


Hi vtrefny,
I have attached the logs for the successful installation!

Thanks
Anushree Mathur

Comment 15 IBM Bug Proxy 2024-10-14 17:40:42 UTC
------- Comment From sarahw.com 2024-10-14 13:37 EDT-------
Anushree - When the installation succeeded were you able to boot the system?

Comment 16 Sarah Wright (IBM) 2024-10-14 18:54:46 UTC
@vtrefny - Is there any further update on this issue?

Comment 17 IBM Bug Proxy 2024-10-28 17:20:42 UTC
------- Comment From sarahw.com 2024-10-28 13:13 EDT-------
Putting this bug into NEEDINFO state - Update requested from original submitter.

Comment 18 IBM Bug Proxy 2024-11-04 17:50:39 UTC
------- Comment From sarahw.com 2024-11-04 12:42 EDT-------
As you have a successful install and were able to boot with that, please confirm if there is any further investigation needed on this bug.

Can the original issue be reproduced or was it a one-time occurrence?

Comment 19 IBM Bug Proxy 2024-11-22 05:20:29 UTC
------- Comment From Anushree.Mathur2 2024-11-22 00:18 EDT-------
Hi Luciano,
Yes, I tried the reinstallation after the HTX issue which you mentioned [https://bugzilla.linux.ibm.com/show_bug.cgi?id=208796]!
I was seeing the same issue even without the HTX failure. I will provide the logs for that scenario as well!

Thanks,
Anushree Mathur

Comment 20 IBM Bug Proxy 2024-11-22 15:40:29 UTC
------- Comment From chavez.com 2024-11-22 10:33 EDT-------
(In reply to comment #28)
> Hi Luciano,
> Yes i tried the reinstallation after HTX issue which you have mentioned
> [https://bugzilla.linux.ibm.com/show_bug.cgi?id=208796]!
> I was seeing the same issue even without the HTX failure too. I will provide
> the logs for that scenario also!
>
> Thanks,
> Anushree Mathur

Hello Red Hat. LTC bug 298796 was an internal bug where the HTX test suite was not aware of the block devices for the data, metadata and pool belonging to LVM Thin Pools and so it used them as part of its storage exerciser and corrupted them. I asked this question because I saw the comment in this bug from Red Hat. The HTX development team is working on ways to identify those LVM Thin Pool block devices in their setup program in order to exclude them as devices that are free to use for testing.

>> From first look it sounds like the LVM pool is corrupt/damaged from the previous installation.

> Yes, that's probably right. We'll need more logs to find out what/how corrupted the pool. From storage.log it looks like the LVs were not activated during boot so the pool was probably corrupted even before the second installation started

So that appears to be the case.

Hi Anushree,

> I was seeing the same issue even without the HTX failure too. I will provide
> the logs for that scenario also!

What issue are we talking about? The installation issue? Unless the LVM Thin Pool devices were recreated with a fresh LVM Thin Pool, they would still remain corrupted and continue to pose a problem for the installer.

Comment 21 IBM Bug Proxy 2024-11-27 04:00:29 UTC
------- Comment From Anushree.Mathur2 2024-11-26 22:55 EDT-------
Hi Luciano,
The logs that I mentioned before were from after the HTX crash, when I tried to reinstall on the same device! Here I am providing the logs for a reinstallation of the guest with no stress running and no crash on the guest!

Steps I tried to recreate the issue:

1) virsh start guest --console
Do the installation of the guest with the storage configuration set to LVM thin provisioning.

2) Boot into the guest.

3) Now reinstall the guest with the virt-install command:
virt-install --name Fedora41 --ram 2048 --disk path=/home/Anu/bug2.qcow2,size=20 --vcpus 4 --os-type linux --os-variant generic --network bridge=virbr0 --graphics none --console pty,target_type=serial --cdrom /home/Anu/Fedora-Server-dvd-ppc64le-41-1.4.iso

It fails after starting the installation with the storage configuration chosen as LVM.

Installation

1) [x] Language settings                 2) [x] Time settings
(English (United States))                (America/Chicago timezone)
3) [x] Installation source               4) [x] Software selection
(Auto-detected source)                   (Fedora Server Edition)
5) [x] Installation Destination          6) [x] Network configuration
(Automatic partitioning                  (Connected: enp0s1)
selected)
7) [x] Root password                     8) [ ] User creation
(Root password is set)                   (No user will be created)

'r' to refresh]: b
================================================================================
================================================================================
Progress

Setting up the installation environment
.
Configuring storage
================================================================================
================================================================================

Will upload all the logs again in this bug!

Comment 22 IBM Bug Proxy 2024-11-27 04:30:28 UTC
Created attachment 2059995 [details]
All the logs files


------- Comment on attachment From Anushree.Mathur2 2024-11-26 23:29 EDT-------


Attaching all the log files that I found in the anaconda shell.
One more point: when I reinstall the guest using the virsh command I do not see this issue; only when I use the virt-install command do I see it!

Thanks 
Anushree Mathur

Comment 23 IBM Bug Proxy 2024-11-27 05:20:26 UTC
------- Comment From sthoufee.com 2024-11-27 00:14 EDT-------
I did a quick search about this error and hit one.

https://bugzilla.redhat.com/show_bug.cgi?id=2238099

LVM thin provisioning installs on Fedora 39 currently fail with an error like:

ERROR:anaconda.modules.storage.installation:Failed to create storage layout: Failed to call the 'LvCreate' method on the '/com/redhat/lvmdbus1/ThinPool/0' object: GDBus.Error:org.freedesktop.DBus.Python.dbus.exceptions.DBusException: ('com.redhat.lvmdbus1.Lv', 'Exit code 5, stderr = Check of pool fedora_fedora/00 failed (status:64). Manual repair required!, Failed to activate thin pool fedora_fedora/00.')

this is fixed in Rawhide by device-mapper-persistent-data-1.0.6-2.fc40 , but that's not built for F39. Filing this bug as it would be nice to fix this for F39 Beta, as it's an install-time bug - I dunno how many people are doing thinp installs on ppc64le, but hey, if someone is, they'd be happy we fixed this...

https://bodhi.fedoraproject.org/updates/FEDORA-2023-b45924046a

broke LVM thin provisioning installations on ppc64le.
....................................................................

Fedora team, please check the logs and comment on why the LVM thin provisioning installation fails on Fedora 41.

Comment 24 Vojtech Trefny 2024-11-27 09:34:34 UTC
Thank you for the logs. This now looks like a different issue: "Activation of logical volume fedora/pool00 is prohibited while logical volume fedora/pool00_tmeta is active". @zkabelac, can you please look at the logs in comment #22? Thanks.

Comment 25 Zdenek Kabelac 2024-11-27 10:14:46 UTC
So it looks like we are facing a race with autoactivation.

Autoactivation with lvm2 now happens through the udev rule 69-dm-lvm.rules: /usr/bin/systemd-run --unit lvm-activate-$env{LVM_VG_NAME_COMPLETE}

It likely fires at the same time Anaconda tries to activate LVs on its own. Activation of LVs takes only a READ lock and thus can run in parallel, but in this case two commands end up trying to activate the thin pool at once, so there seems to be some room for improvement in lvm2 activation itself. For now, though:

Anaconda should avoid using autoactivation in its environment.

lvm.conf has the setting global { event_activation = 1 } as the default; anaconda should use 0.

It's also possible to mask the systemd service 'lvm-activate-@' (/etc/systemd/system/lvm-activate-@ -> /dev/null).
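A sketch of the two workarounds suggested above, as they might be applied in an installer environment (the sed pattern and the exact template-unit filename are assumptions; the event_activation setting and the lvm-activate-@ unit come from the comment above):

```shell
# Option 1: disable event-based (udev-driven) autoactivation in lvm.conf,
# so only explicit activation by the installer happens.
# Assumes the stock commented-out default line; adjust for a local config.
sed -i 's/# *event_activation = 1/event_activation = 0/' /etc/lvm/lvm.conf

# Option 2: mask the per-VG activation template unit so the systemd-run
# instances spawned by 69-dm-lvm.rules become no-ops.
ln -s /dev/null /etc/systemd/system/lvm-activate-@.service
systemctl daemon-reload
```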

Comment 26 Vojtech Trefny 2024-11-27 13:43:13 UTC
*** Bug 2328479 has been marked as a duplicate of this bug. ***

Comment 27 IBM Bug Proxy 2024-12-09 05:10:35 UTC
------- Comment From hariharan.ts 2024-12-09 00:01 EDT-------
Any updates here

Comment 28 IBM Bug Proxy 2024-12-12 06:10:31 UTC
------- Comment From hariharan.ts 2024-12-12 01:07 EDT-------
@Redhat Any updates here Please?

Comment 29 Vojtech Trefny 2024-12-13 08:13:22 UTC
(In reply to IBM Bug Proxy from comment #28)
> ------- Comment From hariharan.ts 2024-12-12 01:07 EDT-------
> @Redhat Any updates here Please?

We have identified the underlying problem (see comment #25): there is a race condition between LVM autoactivation and the installer, both trying to activate the thin pool. Unfortunately, recent changes in LVM made this race condition possible, and we'll need to work around it in the installer environment by disabling autoactivation and making some additional changes in our backend libraries. This bug is planned to be worked on in the next quarter.

Comment 30 Zdenek Kabelac 2025-01-28 09:50:50 UTC
This issue may be related to a recent lvm2 regression where, in some cases, cached DM table content was incorrectly used before the lock was taken.

So if two commands were 'fighting' over the same LV, the result could have been left in an invalid state.

Comment 31 Marian Csontos 2025-01-31 17:51:59 UTC
The lvm2-2.03.30-3.fc42 package with the patches necessary to fix the race condition was just built, so it should get into a compose soon. If the issue was reproducible for you, please let us know whether this build helps.

Comment 32 Fedora Update System 2025-03-06 11:06:59 UTC
FEDORA-2025-e8f2823c9c (anaconda-43.5-1.fc43 and anaconda-webui-26-1.fc43) has been submitted as an update to Fedora 43.
https://bodhi.fedoraproject.org/updates/FEDORA-2025-e8f2823c9c

Comment 33 Fedora Update System 2025-03-06 17:59:54 UTC
FEDORA-2025-e8f2823c9c (anaconda-43.5-1.fc43 and anaconda-webui-26-1.fc43) has been pushed to the Fedora 43 stable repository.
If problem still persists, please make note of it in this bug report.

Comment 34 IBM Bug Proxy 2025-04-11 04:50:35 UTC
------- Comment From Anushree.Mathur2 2025-04-11 00:47 EDT-------
HOST ENV:
OS : Fedora42
kernel : 6.14.0-63.fc42.ppc64le
qemu : QEMU emulator version 9.2.3 (qemu-9.2.3-1.fc42)
libvirt : libvirtd (libvirt) 11.0.0

GUEST ENV:
OS : Fedora42
kernel : 6.14.0-63.fc42.ppc64le

I have tried the exact same scenario and the issue has been fixed; I am not seeing the crash again on the guest. Here is my analysis:

Steps I tried:
1) Start the installation on the guest with fedora42
2) Install it with lvm thin pool

Please make a selection from the above ['b' to begin installation, 'q' to quit,
'r' to refresh]: 5
Probing storage...
================================================================================
================================================================================
Installation Destination

1) [x] DISK: 20 GiB (vda)

1 disk selected; 20 GiB capacity; 1.97 MiB free

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]:
c
================================================================================
================================================================================
Partitioning Options

1) [ ] Replace Existing Linux system(s)
2) [x] Use All Space
3) [ ] Use Free Space
4) [ ] Manually assign mount points

Installation requires partitioning of your hard drive. Select what space to use
for the install target or manually assign mount points.

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]: c
================================================================================
================================================================================
Partition Scheme Options

1) [ ] Standard Partition
2) [ ] Btrfs
3) [x] LVM
4) [ ] LVM Thin Provisioning

Select a partition scheme configuration.

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]: 4
================================================================================
================================================================================
Partition Scheme Options

1) [ ] Standard Partition
2) [ ] Btrfs
3) [ ] LVM
4) [x] LVM Thin Provisioning

Select a partition scheme configuration.

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]: c
Saving storage configuration...
Checking storage configuration...

3) After installation, install again on the same qcow2 image, this time with plain LVM

Please make a selection from the above ['b' to begin installation, 'q' to quit,
'r' to refresh]: 5
Probing storage...
================================================================================
================================================================================
Installation Destination

1) [x] DISK: 20 GiB (vda)

1 disk selected; 20 GiB capacity; 1.97 MiB free

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]:
c
================================================================================
================================================================================
Partitioning Options

1) [ ] Replace Existing Linux system(s)
2) [x] Use All Space
3) [ ] Use Free Space
4) [ ] Manually assign mount points

Installation requires partitioning of your hard drive. Select what space to use
for the install target or manually assign mount points.

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]: c
================================================================================
================================================================================
Partition Scheme Options

1) [ ] Standard Partition
2) [ ] Btrfs
3) [x] LVM
4) [ ] LVM Thin Provisioning

Select a partition scheme configuration.

Please make a selection from the above ['c' to continue, 'q' to quit, 'r' to
refresh]: c
Saving storage configuration...
Checking storage configuration...

This worked fine. Thanks. Closing the bug now.