Bug 2185114 - reading backup file spins in infinite loop
Summary: reading backup file spins in infinite loop
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: lvm2
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Assignee: LVM Team
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-06 20:49 UTC by David Teigland
Modified: 2023-08-10 15:40 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker RHELPLAN-154213 -- Last Updated 2023-04-06 20:52:12 UTC

Description David Teigland 2023-04-06 20:49:33 UTC
Description of problem:

I'm repeatedly hitting this same issue where commands spin at 100% cpu:

#0  0x00007fbb2f13eaf2 in read () from /lib64/libc.so.6
#1  0x0000562241d33c3a in config_file_read_fd (cft=0x562242c50af0, dev=0x562242c50780, reason=DEV_IO_MDA_CONTENT, offset=0, size=6605, offset2=0,
    size2=0, checksum_fn=0x0, checksum=0, checksum_only=0, no_dup_node_check=0) at config/config.c:536
#2  0x0000562241d341c6 in config_file_read (cft=0x562242c50af0) at config/config.c:636
#3  0x0000562241d6bede in text_read_metadata (fid=0x562242c4ff40, file=0x562242c50028 "/etc/lvm/backup/test", vg_fmtdata=0x0, use_previous_vg=0x0,
    dev=0x0, primary_mda=0, offset=0, size=0, offset2=0, size2=0, checksum_fn=0x0, checksum=0, when=0x7ffe72f51e20, desc=0x7ffe72f51e18)
    at format_text/import.c:164
#4  0x0000562241d6c114 in text_read_metadata_file (fid=0x562242c4ff40, file=0x562242c50028 "/etc/lvm/backup/test", when=0x7ffe72f51e20,
    desc=0x7ffe72f51e18) at format_text/import.c:212
#5  0x0000562241d67835 in _vg_read_file_name (fid=0x562242c4ff40, vgname=0x562242c49c80 "test", read_path=0x562242c50028 "/etc/lvm/backup/test")
    at format_text/format-text.c:1242
#6  0x0000562241d67998 in _vg_read_file (cmd=0x5622423f0fd0, fid=0x562242c4ff40, vgname=0x562242c49c80 "test", mda=0x562242c4ffb8, vg_fmtdata=0x0,
    use_previous_vg=0x0) at format_text/format-text.c:1273
#7  0x0000562241d5e8ab in backup_read_vg (cmd=0x5622423f0fd0, vg_name=0x562242c49c80 "test", file=0x7ffe72f51f20 "/etc/lvm/backup/test")
    at format_text/archiver.c:328
#8  0x0000562241d5f8fd in check_current_backup (vg=0x562242c49b00) at format_text/archiver.c:648
#9  0x0000562241d1670f in _vgscan_single (cmd=0x5622423f0fd0, vg_name=0x562242c38ef8 "test", vg=0x562242c49b00, handle=0x562242c39008) at vgscan.c:26
#10 0x0000562241cff068 in _process_vgnameid_list (cmd=0x5622423f0fd0, read_flags=262144, vgnameids_to_process=0x7ffe72f530d0,
    arg_vgnames=0x7ffe72f530f0, arg_tags=0x7ffe72f53100, handle=0x562242c39008, process_single_vg=0x562241d1666e <_vgscan_single>) at toollib.c:2216
#11 0x0000562241cffcbc in process_each_vg (cmd=0x5622423f0fd0, argc=0, argv=0x7ffe72f53400, one_vgname=0x0, use_vgnames=0x0, read_flags=262144,
    include_internal=0, handle=0x562242c39008, process_single_vg=0x562241d1666e <_vgscan_single>) at toollib.c:2526
#12 0x0000562241d1687c in vgscan (cmd=0x5622423f0fd0, argc=0, argv=0x7ffe72f53400) at vgscan.c:55
#13 0x0000562241cd7038 in lvm_run_command (cmd=0x5622423f0fd0, argc=0, argv=0x7ffe72f53400) at lvmcmdline.c:3317
#14 0x0000562241cd87d4 in lvm2_main (argc=3, argv=0x7ffe72f533e8) at lvmcmdline.c:3847
#15 0x0000562241d19260 in main (argc=3, argv=0x7ffe72f533e8) at lvm.c:23


In config_file_read_fd(), read() on the file repeatedly returns 0, which causes the for loop to never terminate.

Although the fix is trivial, I don't know exactly what causes the read() to return 0.
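
For illustration, a minimal sketch of that failure mode and the trivial fix (treat a 0 return from read() as end of file and stop). The function name and structure below are hypothetical, not the actual config_file_read_fd() code:

    /*
     * Sketch only: a read loop that terminates solely on reaching the
     * expected size or on an error.  If the file turns out shorter than
     * 'size' (e.g. truncated or still being written), read() returns 0
     * (EOF) on every iteration and the loop spins at 100% cpu.
     */
    #include <unistd.h>
    #include <errno.h>

    static int read_exact(int fd, char *buf, size_t size)
    {
            size_t total = 0;

            while (total < size) {
                    ssize_t rv = read(fd, buf + total, size - total);

                    if (rv < 0) {
                            if (errno == EINTR || errno == EAGAIN)
                                    continue;
                            return 0;       /* read error */
                    }

                    if (rv == 0)            /* EOF: the trivial fix is to stop here */
                            return 0;       /* file was shorter than expected */

                    total += (size_t)rv;
            }

            return 1;
    }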

I don't have a trivial reproducer. The test case I'm running when I see this is multiple hosts running many commands in a shared (sanlock) VG.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

