Bug 2143277 - [storage] Leapp can fail when there are too many LV partitions
Summary: [storage] Leapp can fail when there are too many LV partitions
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: leapp-repository
Version: 7.9
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Leapp Notifications Bot
QA Contact: upgrades-and-conversions
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-16 14:15 UTC by Christophe Besson
Modified: 2023-09-12 13:21 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-12 13:21:11 UTC
Target Upstream Version:
Embargoed:
Flags: pm-rhel: mirror+


Attachments: none


Links
System ID                                  Private  Priority  Status    Summary  Last Updated
Red Hat Issue Tracker OAMG-7917            0        None      None      None     2022-11-16 14:55:50 UTC
Red Hat Issue Tracker RHEL-3320            0        None      Migrated  None     2023-09-12 13:21:02 UTC
Red Hat Issue Tracker RHELPLAN-139623      0        None      None      None     2022-11-16 14:55:45 UTC
Red Hat Knowledge Base (Solution) 6988142  0        None      None      None     2022-11-29 15:01:03 UTC

Description Christophe Besson 2022-11-16 14:15:41 UTC
Description of problem:
Leapp crashes when there are too many LV partitions.
The customer had at least 55 partitions that were not needed for the in-place upgrade (IPU).
Commenting them out in /etc/fstab allows the upgrade to proceed.
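
For example, a one-liner along these lines comments out the reproducer's entries (a sketch for the /dev/test/lv* layout used in the steps below; the customer's device paths will differ):
# sed -i 's|^/dev/test/lv|#&|' /etc/fstab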

Version-Release number of selected component (if applicable):
leapp-upgrade-el7toel8-0.17.0-1.el7_9.noarch

How reproducible:
Always

Steps to Reproduce:
1/ Create a sparse file to serve as a temporary block device (with dd's default 512-byte block size, seek=40M blocks gives a ~20 GiB sparse file):
# dd if=/dev/zero of=/root/block seek=40M count=1

2/ Create 100 LVs of 100 MiB each in the VG "test":
# losetup /dev/loop0 /root/block
# pvcreate /dev/loop0
# vgcreate test /dev/loop0
# for i in $(seq 1 100); do lvcreate -L 100M -n lv$i test; done
# vgchange -ay test

3/ Format and mount them (ext3 here, like the customer's setup; other filesystem types likely show the same symptoms):
# for i in $(seq 1 100); do mkfs.ext3 -F /dev/test/lv$i; done
# for i in $(seq 1 100); do mkdir /srv/fs$i; done
# for i in $(seq 1 100); do echo "/dev/test/lv$i /srv/fs$i ext3 defaults 0 0" >> /etc/fstab; done
# mount -a

4/ Run `leapp preupgrade`


Actual results:
2022-11-16 05:48:15.525 DEBUG    PID: 24941 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: External command has started: ['mount', '-t', 'overlay', 'overlay2', '-o', 'lowerdir=/srv/fs45,upperdir=/var/lib/leapp/scratch/mounts/root_srv_fs45/upper,workdir=/var/lib/leapp/scratch/mounts/root_srv_fs45/work', '/var/lib/leapp/scratch/mounts/root_srv_fs45/root_srv_fs45']
2022-11-16 05:48:15.544 DEBUG    PID: 24941 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: External command has finished: ['mount', '-t', 'overlay', 'overlay2', '-o', 'lowerdir=/srv/fs45,upperdir=/var/lib/leapp/scratch/mounts/root_srv_fs45/upper,workdir=/var/lib/leapp/scratch/mounts/root_srv_fs45/work', '/var/lib/leapp/scratch/mounts/root_srv_fs45/root_srv_fs45']
2022-11-16 05:48:15.549 DEBUG    PID: 24941 leapp.workflow.TargetTransactionFactsCollection.target_userspace_creator: External command has started: ['rm', '-rf', u'/var/lib/leapp/scratch/mounts/root_/system_overlay/srv/fs45']
Process Process-464:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python2.7/site-packages/leapp/repository/actor_definition.py", line 72, in _do_run
    actor_instance.run(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/leapp/actors/__init__.py", line 289, in run
    self.process(*args)
  File "/usr/share/leapp-repository/repositories/system_upgrade/common/actors/targetuserspacecreator/actor.py", line 52, in process
    userspacegen.perform()
  File "/usr/lib/python2.7/site-packages/leapp/utils/deprecation.py", line 42, in process_wrapper
    return target_item(*args, **kwargs)
  File "/usr/share/leapp-repository/repositories/system_upgrade/common/actors/targetuserspacecreator/libraries/userspacegen.py", line 671, in perform
    xfs_info=indata.xfs_info) as overlay:
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/share/leapp-repository/repositories/system_upgrade/common/libraries/overlaygen.py", line 229, in create_source_overlay
    cleanup_scratch(scratch_dir, mounts_dir)
  File "/usr/share/leapp-repository/repositories/system_upgrade/common/libraries/overlaygen.py", line 118, in cleanup_scratch
    api.current_logger().debug('Cleaning up mounts')
  File "/usr/lib64/python2.7/logging/__init__.py", line 1137, in debug
    self._log(DEBUG, msg, args, **kwargs)
  File "/usr/lib64/python2.7/logging/__init__.py", line 1268, in _log
    self.handle(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 1278, in handle
    self.callHandlers(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 1318, in callHandlers
    hdlr.handle(record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 749, in handle
    self.emit(record)
  File "/usr/lib/python2.7/site-packages/leapp/logger/__init__.py", line 40, in emit
    self._do_emit(log_data)
  File "/usr/lib/python2.7/site-packages/leapp/logger/__init__.py", line 45, in _do_emit
    Audit(**log_data).store()
  File "/usr/lib/python2.7/site-packages/leapp/utils/audit/__init__.py", line 87, in store
    with get_connection(db) as connection:
  File "/usr/lib/python2.7/site-packages/leapp/utils/audit/__init__.py", line 73, in get_connection
    return create_connection(cfg.get('database', 'path'))
  File "/usr/lib/python2.7/site-packages/leapp/cli/commands/upgrade/util.py", line 26, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/leapp/utils/audit/__init__.py", line 60, in create_connection
    return _initialize_database(sqlite3.connect(path))
OperationalError: unable to open database file


=========================================================================================================
Actor target_userspace_creator unexpectedly terminated with exit code: 1 - Please check the above details

Expected results:
No crash.
Maybe an inhibitor telling the user to comment out partitions that are definitely not required during the IPU?
(That would also help in the XFS case, where many ext4 images are created to work around an old XFS issue.)

Additional info:
After the crash, the overlay mounts under /var/lib/leapp/scratch are still present.

To clean up the system, lazy-unmount those overlays and remove the leftovers:
# for mp in $(mount | awk '/leapp\/scratch/ {print $3}'); do umount -vl "$mp"; done
# rm -rf /var/lib/leapp/*
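
To double-check that nothing is left mounted (a quick sanity check, assuming all of leapp's scratch mount points live under /var/lib/leapp/scratch):
# mount | grep 'leapp/scratch' || echo "no leapp scratch mounts remain"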

Comment 3 Christophe Besson 2022-11-17 10:33:33 UTC
Just wanted to confirm the issue can also be observed with ext4 or xfs.
The issue does not occur with 8 LVs on this simulated block device; I did not determine the exact threshold (it still fails with 60 LVs).
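
If someone wants to pin down the threshold, a rough sketch of a loop over candidate counts (reset_lvs is a hypothetical helper that would tear down and recreate the reproducer above with $n LVs; grepping the preupgrade log for the sqlite error is a simplification):
# for n in 10 20 30 40 50 60; do reset_lvs $n; leapp preupgrade; grep -q 'unable to open database file' /var/log/leapp/leapp-preupgrade.log && { echo "crashes with $n LVs"; break; }; done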

Comment 4 Petr Stodulka 2023-05-15 09:27:01 UTC
This could possibly be handled in the future, when we introduce configuration files for leapp actors: users could then specify which partitions should be ignored for mounting, keeping the responsibility with them in case any further errors occur during the DNF transaction.

This could possibly also be improved by a check for partitions that do not contain any files tracked by RPM. But we do not want to go that way, as such a check would affect performance significantly and impact many more users, so the preferred solution is the first one. However, I am not sure the feature will be delivered in RHEL 7; being honest here, IPU 8 -> 9 has a better chance of having the feature implemented. Keeping this open for RHEL 7 for planning purposes. See the sketch below for what such a configuration could look like.
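
For illustration only, such an actor configuration might look something like this (the path, file name, and keys are all hypothetical; no such configuration file exists in leapp as of this report):

# /etc/leapp/actor_conf.d/storage.yml (hypothetical file)
ignored_mountpoints:      # mount points leapp should skip when building overlays
  - /srv/fs1
  - /srv/fs2
  # ...any other partitions the user knows are not needed during the upgrade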

Comment 7 RHEL Program Management 2023-09-12 12:28:07 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 8 RHEL Program Management 2023-09-12 13:21:11 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues@redhat.com. You can also visit https://access.redhat.com/articles/7032570 for general account information.

