Bug 1020990 - fence_virtd segfault under normal usage
Summary: fence_virtd segfault under normal usage
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: fence-virt
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ryan McCabe
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1164927
 
Reported: 2013-10-18 16:48 UTC by michal novacek
Modified: 2016-05-11 08:52 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-14 21:01:24 UTC
Target Upstream Version:
Embargoed:


Attachments
"fence_virtd -F -d99" output (3.90 KB, text/plain)
2013-10-18 16:48 UTC, michal novacek

Description michal novacek 2013-10-18 16:48:19 UTC
Created attachment 813854
"fence_virtd -F -d99" output

Description of problem:

hypervisor:bucek-03            <------>  hypervisor:doom-driver
  \_virtual:bucek-03-node01                \_virtual:doom-driver-node01

I have two physical machines acting as hypervisors (bucek-03 and doom-driver),
each hosting one virtual machine named $(hostname -s)-node01. fence_virtd is
configured on both hypervisors to talk to each other via the backend{} section
of the config file, with 'serial' as the listener{}.
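For reference, a minimal sketch of what such an /etc/fence_virt.conf might look
like. The reporter's exact settings aren't in the report, so the libvirt backend
and all values below are assumptions:

fence_virtd {
    listener = "serial";
    backend = "libvirt";
    module_path = "/usr/lib64/fence-virt";
}

listeners {
    serial {
        uri = "qemu:///system";
        path = "/var/run/cluster";    # directory for per-domain serial sockets (illustrative)
        mode = "serial";              # guests fence via fence_virt -D /dev/ttySx
    }
}

backends {
    libvirt {
        uri = "qemu:///system";       # local hypervisor connection
    }
}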

fence_virtd segfaulted on bucek-03 after bucek-03-node01 was powered off and
back on from doom-driver-node01 using fence_virt.

fence_virtd[31689]: segfault at 0 ip 0000003c36b3383f sp 00007f5fe341ec58
error 4 in libc-2.12.so[3c36a00000+18b000]
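The kernel line already narrows this down: "segfault at 0" is a NULL-pointer
dereference and "error 4" is a user-mode read fault. The faulting ip falls
inside libc-2.12.so (loaded at 0x3c36a00000), so the crashing libc function can
be resolved from the offset; a sketch, assuming the standard RHEL 6 x86_64 libc
path:

addr2line -e /lib64/libc-2.12.so -fC 0x13383f    # 0x3c36b3383f - 0x3c36a00000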

Version-Release number of selected component (if applicable):
fence-virtd-serial-0.2.3-15.el6.x86_64

How reproducible: happened once

Steps to Reproduce:
see additional info.

Additional info:
doom-driver-node01$ fence_virt -D /dev/ttyS1 -o list
bucek-03-node01      1893c3d4-77e6-4233-28ce-6b2c75d00981 on
doom-driver-node01$ fence_virt -D /dev/ttyS1 -H bucek-03-node01 -o status
doom-driver-node01$ fence_virt -D /dev/ttyS1 -H bucek-03-node01 -o off
doom-driver-node01$ fence_virt -D /dev/ttyS1 -H bucek-03-node01 -o status
doom-driver-node01$ echo $?
2
doom-driver-node01$ fence_virt -D /dev/ttyS1 -o list
doom-driver-node01$ fence_virt -D /dev/ttyS1 -H bucek-03-node01 -o on

fence_virtd segfaulted on bucek-03

Virtual machines are behind NAT.

There are a lot of 'libvir: XML-RPC error : Cannot write data: Broken pipe' messages, which might have something to do with this. It happened only once and I have not been able to reproduce it since.

Comment 2 Jaroslav Kortus 2013-10-18 17:11:10 UTC
any core files (/var/spool/abrt)?

Comment 3 michal novacek 2013-10-21 08:58:30 UTC
Unfortunately not; abrtd segfaulted while collecting them.
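For the next occurrence, a sketch of catching a core without relying on abrtd
(the fence_virtd binary path and core location below are assumptions):

ulimit -c unlimited
echo '/var/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern    # bypass abrt's ccpp hook
fence_virtd -F -d99    # foreground with maximum debugging, as in the attachment

# after a crash:
gdb /usr/sbin/fence_virtd /var/tmp/core.fence_virtd.<PID>
(gdb) bt full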

Comment 8 Chris Feist 2015-10-14 21:01:24 UTC
Closing this bug since we haven't been able to reproduce the issue. Please re-open if it is still present.

