Bug 1027680 - Destroying an NPIV pool whose adapter lacks a parent leads to a libvirtd crash
Summary: Destroying an NPIV pool whose adapter lacks a parent leads to a libvirtd crash
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-07 09:35 UTC by Luwen Su
Modified: 2014-06-18 00:58 UTC (History)
CC List: 5 users

Fixed In Version: libvirt-1.1.1-13.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-13 09:30:27 UTC
Target Upstream Version:
Embargoed:



Description Luwen Su 2013-11-07 09:35:10 UTC
Description of problem:
Destroying a pool whose adapter lacks the parent element leads to a libvirtd crash.

Version-Release number of selected component (if applicable):
libvirt-1.1.1-11.el7.x86_64
kernel-3.10.0-41.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare the pool's XML:
    <pool type='scsi'>
      <name>test</name>
      <source>
        <adapter type='fc_host' wwnn='20000024ff370144' wwpn='20000024ff370144'/>
      </source>
      <target>
        <path>/dev/disk/by-path/</path>
        <permissions>
          <mode>0755</mode>
          <owner>-1</owner>
          <group>-1</group>
        </permissions>
      </target>
    </pool>


The wwnn and wwpn point to a vport-capable adapter on my host:
# virsh nodedev-dumpxml scsi_host1
<device>
  <name>scsi_host1</name>
  <path>/sys/devices/pci0000:00/0000:00:09.0/0000:18:00.0/host1</path>
  <parent>pci_0000_18_00_0</parent>
  <capability type='scsi_host'>
    <host>1</host>
    <capability type='fc_host'>
      <wwnn>20000024ff370144</wwnn>
      <wwpn>20000024ff370144</wwpn>
      <fabric_wwn>2001000dec9877c1</fabric_wwn>
    </capability>
    <capability type='vport_ops'>
      <max_vports>254</max_vports>
      <vports>0</vports>
    </capability>
  </capability>
</device>
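
Note: the <adapter> element in the pool XML above does not declare the
adapter's parent. With the parent declared, the source would look roughly
like the line below (the parent value 'scsi_host1' is only an illustration
based on the nodedev output above, not taken from the original pool XML):

        <adapter type='fc_host' parent='scsi_host1' wwnn='20000024ff370144' wwpn='20000024ff370144'/>

The crash below happens on the pool-destroy path when that attribute is
absent.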

2. Run:
# virsh pool-define pool.xml
# virsh pool-start test
# virsh pool-destroy test
error: Failed to destroy pool test
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor


3. GDB backtrace

(gdb) thread apply all backtrace
Thread 11 (Thread 0x7f75cdc00700 (LWP 16443)):
#0  getHostNumber (adapter_name=adapter_name@entry=0x0, result=result@entry=0x7f75cdbffacc) at storage/storage_backend_scsi.c:579
#1  0x00007f75c87bd43f in deleteVport (adapter=...) at storage/storage_backend_scsi.c:671
#2  virStorageBackendSCSIStopPool (conn=<optimized out>, pool=<optimized out>) at storage/storage_backend_scsi.c:755
#3  0x00007f75c87b11e5 in storagePoolDestroy (obj=0x7f75a8000c50) at storage/storage_driver.c:874
#4  0x00007f75dc53b8a6 in virStoragePoolDestroy (pool=pool@entry=0x7f75a8000c50) at libvirt.c:13859
#5  0x00007f75dcf2cfa8 in remoteDispatchStoragePoolDestroy (server=<optimized out>, msg=<optimized out>, args=<optimized out>, rerr=0x7f75cdbffc90, 
    client=0x7f75deae5fd0) at remote_dispatch.h:12412
#6  remoteDispatchStoragePoolDestroyHelper (server=<optimized out>, client=0x7f75deae5fd0, msg=<optimized out>, rerr=0x7f75cdbffc90, args=<optimized out>, 
    ret=<optimized out>) at remote_dispatch.h:12390
#7  0x00007f75dc587a4a in virNetServerProgramDispatchCall (msg=0x7f75deae4830, client=0x7f75deae5fd0, server=0x7f75dead6c70, prog=0x7f75deae0570)
    at rpc/virnetserverprogram.c:435
#8  virNetServerProgramDispatch (prog=0x7f75deae0570, server=server@entry=0x7f75dead6c70, client=0x7f75deae5fd0, msg=0x7f75deae4830)
    at rpc/virnetserverprogram.c:305
#9  0x00007f75dc582888 in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7f75dead6c70)
    at rpc/virnetserver.c:165
#10 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7f75dead6c70) at rpc/virnetserver.c:186
#11 0x00007f75dc4ac425 in virThreadPoolWorker (opaque=opaque@entry=0x7f75dead69a0) at util/virthreadpool.c:144
#12 0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#13 0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#14 0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7f75cd3ff700 (LWP 16444)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6e50, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4bb in virThreadPoolWorker (opaque=opaque@entry=0x7f75dea68fc0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7f75ccbfe700 (LWP 16445)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6e50, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4bb in virThreadPoolWorker (opaque=opaque@entry=0x7f75dead69a0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7f75cc3fd700 (LWP 16446)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6e50, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4bb in virThreadPoolWorker (opaque=opaque@entry=0x7f75dea68fc0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7f75cbbfc700 (LWP 16447)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6e50, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4bb in virThreadPoolWorker (opaque=opaque@entry=0x7f75dead69a0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f75cb3fb700 (LWP 16448)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6ee8, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4db in virThreadPoolWorker (opaque=opaque@entry=0x7f75dea68fc0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f75cabfa700 (LWP 16449)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6ee8, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4db in virThreadPoolWorker (opaque=opaque@entry=0x7f75dead69a0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f75ca3f9700 (LWP 16450)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6ee8, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4db in virThreadPoolWorker (opaque=opaque@entry=0x7f75dea68fc0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f75c9bf8700 (LWP 16451)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6ee8, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4db in virThreadPoolWorker (opaque=opaque@entry=0x7f75dea68e10) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7f75c93f7700 (LWP 16452)):
#0  0x00007f75d9fa16f5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f75dc4ac066 in virCondWait (c=c@entry=0x7f75dead6ee8, m=m@entry=0x7f75dead6e28) at util/virthreadpthread.c:117
#2  0x00007f75dc4ac4db in virThreadPoolWorker (opaque=opaque@entry=0x7f75dea68fc0) at util/virthreadpool.c:103
#3  0x00007f75dc4abea1 in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:161
#4  0x00007f75d9f9dde3 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f75d98c41ad in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f75dced0880 (LWP 16442)):
#0  0x00007f75d98b9bdd in poll () from /lib64/libc.so.6
#1  0x00007f75dc48149f in poll (__timeout=4999, __nfds=9, __fds=<optimized out>) at /usr/include/bits/poll2.h:46
#2  virEventPollRunOnce () at util/vireventpoll.c:628
#3  0x00007f75dc48008d in virEventRunDefaultImpl () at util/virevent.c:273
#4  0x00007f75dc583c1d in virNetServerRun (srv=0x7f75dead6c70) at rpc/virnetserver.c:1112
#5  0x00007f75dcf128dc in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1513




Actual results:
libvirtd crashed

Expected results:
libvirtd should not crash.

Additional info:
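
For illustration only (this is not libvirt source and not necessarily the
actual fix shipped in libvirt-1.1.1-13.el7): frame #0 shows getHostNumber()
entered with adapter_name=0x0, i.e. the adapter parent name is NULL because
the pool XML carries no parent, so any string parsing of it will fault. A
minimal standalone C sketch of that failure mode and the missing guard, with
all names hypothetical:

    #include <stdio.h>

    /* Parse the numeric suffix of an adapter name such as "host1".
     * Returns 0 on success, -1 on error.  Without the NULL check,
     * calling sscanf() on a NULL pointer crashes, which is the kind
     * of fault the backtrace above suggests. */
    static int
    get_host_number(const char *adapter_name, unsigned int *result)
    {
        if (adapter_name == NULL)        /* the guard that is missing */
            return -1;
        if (sscanf(adapter_name, "host%u", result) != 1)
            return -1;
        return 0;
    }

    int main(void)
    {
        unsigned int host;
        const char *parent = NULL;       /* pool XML defined no adapter parent */

        if (get_host_number(parent, &host) < 0)
            fprintf(stderr, "cannot determine host number: no adapter parent\n");
        else
            printf("host number: %u\n", host);
        return 0;
    }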

Comment 4 Luwen Su 2013-11-28 02:34:40 UTC
Verified with
libvirt-1.1.1-13.el7.x86_64

1. Define, start, and destroy a pool that uses a vHBA:

   <pool type='scsi'>
      <name>fc-pool</name>
      <source>
        <adapter type='fc_host'  wwnn='2101001b32a90002' wwpn='2101001b32a90003'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
        <permissions>
          <mode>0700</mode>
          <owner>0</owner>
          <group>0</group>
        </permissions>
      </target>
    </pool>

Note: the wwnn,wwpn pair is valid and has storage mapped and zoned by the admin.

No crash.

2. Define, start, and destroy a pool that uses an HBA:

   <pool type='scsi'>
      <name>fc-pool</name>
      <source>
        <adapter type='fc_host'  wwnn='2001001b32a9da4e' wwpn='2001001b32a9da4e'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
        <permissions>
          <mode>0700</mode>
          <owner>0</owner>
          <group>0</group>
        </permissions>
      </target>
    </pool>

No crash.

Note: step 2 is a totally wrong action, since the wwnn,wwpn pair should belong to a vHBA, not an HBA. The reason for using it here is just to make sure the WRONG behaviour does not crash libvirtd.


Based on the above, setting this to VERIFIED.

Comment 5 Osier Yang 2013-11-28 04:48:16 UTC
(In reply to time.su from comment #4)
> Note: step 2 is a totally wrong action, since the wwnn,wwpn pair should
> belong to a vHBA, not an HBA. The reason for using it here is just to make
> sure the WRONG behaviour does not crash libvirtd.

No, it's actually valid; we support defining a SCSI pool based on an HBA too. I'm not sure if this came from a misunderstanding in our earlier discussion, but I hope this clarifies it.

Comment 6 Luwen Su 2013-11-28 09:04:41 UTC
(In reply to Osier Yang from comment #5)
> No, it's actually valid; we support defining a SCSI pool based on an HBA
> too. I'm not sure if this came from a misunderstanding in our earlier
> discussion, but I hope this clarifies it.


Sorry, I must have confused this with nodedev-destroy. Thanks a lot for correcting me.

Comment 8 Ludek Smid 2014-06-13 09:30:27 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

