Bug 870534 - LVM tool aborted while using --config to override existing use_lvmetad setting
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Petr Rockai
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-10-26 18:36 UTC by Peter Rajnoha
Modified: 2013-02-21 08:14 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.98-2.el6
Doc Type: Bug Fix
Doc Text:
Running an LVM command in a way that prevented it from using the lvmetad cache caused the command to abort instead of proceeding with scanning-based metadata discovery. The problem was caused by a bad initialisation sequence.
Clone Of:
Environment:
Last Closed: 2013-02-21 08:14:46 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2013:0501 (public, priority normal, status SHIPPED_LIVE): lvm2 bug fix and enhancement update; last updated 2013-02-20 21:30:45 UTC

Description Peter Rajnoha 2012-10-26 18:36:33 UTC
Description of problem:
(applicable to all LVM commands, not just pvs)
# pvs --config "global{use_lvmetad=0}"
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
PV         VG   Fmt  Attr PSize   PFree  
/dev/sda        lvm2 a--  128.00m 128.00m
...
*** Error in `pvs': corrupted double-linked list: 0x0000000001fae8f0 ***
======= Backtrace: =========
/lib64/libc.so.6[0x379727f6bd]
/lib64/libc.so.6[0x3797280992]
/lib64/libc.so.6[0x379728241b]
/lib64/libc.so.6(realloc+0xed)[0x379728352d]
pvs(buffer_realloc+0x53)[0x4cdef8]
pvs(buffer_read+0x112)[0x4ceb8e]
pvs(daemon_send+0xe4)[0x4ce5b4]
pvs(daemon_send_simple_v+0xf2)[0x4ce73f]
pvs(daemon_send_simple+0xc0)[0x4ce834]
pvs(daemon_open+0x13f)[0x4ce22f]
pvs[0x4c2ba1]
pvs(lvmetad_init+0xbf)[0x4c2cad]
pvs[0x44dc3e]
pvs(refresh_toolcontext+0x1f5)[0x4512c2]
pvs(lvm_run_command+0x4d3)[0x4282ad]
pvs(lvm2_main+0x392)[0x4295b1]
pvs(main+0x20)[0x442f94]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x3797221ba5]
pvs[0x413f59]
...
Aborted (core dumped)


The exact backtrace:
(gdb) bt
#0  0x0000003797236ca5 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x0000003797238458 in __GI_abort () at abort.c:90
#2  0x000000379727872b in __libc_message (do_abort=do_abort@entry=2, fmt=fmt@entry=0x3797384360 "*** Error in `%s': %s: 0x%s ***\n") at	../sysdeps/unix/sysv/linux/libc_fatal.c:196
#3  0x000000379727f6bd in malloc_printerr (ptr=0x73f8f0, str=0x3797380ae8 "corrupted double-linked list", action=3) at malloc.c:4916
#4  malloc_consolidate (av=av@entry=0x37975be740 <main_arena>) at malloc.c:4093
#5  0x0000003797280992 in _int_malloc (av=av@entry=0x37975be740 <main_arena>, bytes=bytes@entry=1057) at malloc.c:3364
#6  0x000000379728241b in _int_realloc (av=av@entry=0x37975be740 <main_arena>, oldp=oldp@entry=0x852cb0, oldsize=oldsize@entry=48, nb=nb@entry=1072) at malloc.c:4211
#7  0x000000379728352d in __GI___libc_realloc (oldmem=0x852cc0,	bytes=1056) at malloc.c:2988
#8  0x00000000004cdef8 in buffer_realloc (buf=0x7fffffffde38, needed=1024) at config-util.c:284
#9  0x00000000004ceb8e in buffer_read (fd=5, buffer=0x7fffffffde38) at daemon-io.c:45
#10 0x00000000004ce5b4 in daemon_send (h=..., rq=...) at daemon-client.c:86
#11 0x00000000004ce73f in daemon_send_simple_v (h=..., id=0x4f86a2 "hello", ap=0x7fffffffdf48) at daemon-client.c:117
#12 0x00000000004ce834 in daemon_send_simple (h=..., id=0x4f86a2 "hello") at daemon-client.c:129
#13 0x00000000004ce22f in daemon_open (i=...) at daemon-client.c:44
#14 0x00000000004c2ba1 in lvmetad_open (socket=0x4e05ee "/run/lvm/lvmetad.socket") at ../include/lvmetad-client.h:73
#15 0x00000000004c2cad in lvmetad_init (cmd=0x733090) at cache/lvmetad.c:47
#16 0x000000000044dc3e in _process_config (cmd=0x733090) at commands/toolcontext.c:423
#17 0x00000000004512c2 in refresh_toolcontext (cmd=0x733090) at commands/toolcontext.c:1607
#18 0x00000000004282ad in lvm_run_command (cmd=0x733090, argc=0, argv=0x7fffffffe4e0) at lvmcmdline.c:1135
#19 0x00000000004295b1 in lvm2_main (argc=3, argv=0x7fffffffe4c8) at lvmcmdline.c:1556
#20 0x0000000000442f94 in main (argc=3, argv=0x7fffffffe4c8) at lvm.c:21


Version-Release number of selected component (if applicable):
lvm2-2.02.98-2.el6 (also reproducible with the current upstream head: bbff143d54b890f3b9c91b302f0322469ba56ef6)

How reproducible:
Always: with use_lvmetad=1 set in lvm.conf and use_lvmetad=0 passed via --config, or vice versa (i.e. whenever --config redefines the setting to the opposite value); see the reproducer sketch below.
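
A minimal reproducer sketch, assuming the RHEL 6 defaults for /etc/lvm/lvm.conf and the service name (see also comment 2 below: lvmetad must already be running):

(with use_lvmetad = 1 in the global section of lvm.conf)
# service lvm2-lvmetad start
# pvs --config "global{use_lvmetad=0}"

...aborts before the fix; the reverse combination (use_lvmetad = 0 in
lvm.conf, --config "global{use_lvmetad=1}") aborts as well.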

Comment 1 Petr Rockai 2012-10-30 21:42:55 UTC
Pushed a fix and a test upstream: 7c59199..09d77d0.
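
For reference, the commits in that range can be listed from a local clone of the upstream LVM2 tree (a sketch; assumes the range above is reachable in your checkout):

# git log --oneline 7c59199..09d77d0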

Comment 2 Peter Rajnoha 2012-10-31 14:56:13 UTC
Just a note: lvmetad must already be running (service lvm2-lvmetad start) to trigger this error.
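
A quick sanity check for that precondition (a sketch using the RHEL 6 service name; the bracketed grep pattern just keeps grep from matching its own process, unlike the plain grep in comment 4):

# service lvm2-lvmetad status || service lvm2-lvmetad start
# ps -ef | grep '[l]vmetad'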

Comment 4 Corey Marthaler 2013-01-24 23:18:04 UTC
Fix verified in the latest rpms.

[root@qalvm-01 ~]# ps -ef | grep lvmetad
root      3771     1  0 17:09 ?        00:00:00 lvmetad
root      6491  1926  0 17:16 pts/0    00:00:00 grep lvmetad
[root@qalvm-01 ~]# pvs --config "global{use_lvmetad=0}"
  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
  PV         VG         Fmt  Attr PSize    PFree   
  /dev/sda2  vg_qalvm01 lvm2 a--    19.51g       0 
  /dev/vdc1  snapper    lvm1 a--  1024.00g 1023.70g
  /dev/vdg1  snapper    lvm1 a--  1024.00g 1024.00g


2.6.32-354.el6.x86_64
lvm2-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-libs-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-cluster-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
udev-147-2.43.el6    BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
cmirror-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013

Comment 5 errata-xmlrpc 2013-02-21 08:14:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

