Bug 1278339 - NFS server crashes while creating glance images on the OpenStack node
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-nfs
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Niels de Vos
Depends On:
Reported: 2015-11-05 05:09 EST by RajeshReddy
Modified: 2016-07-13 18:34 EDT (History)
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-01-07 04:47:33 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description RajeshReddy 2015-11-05 05:09:28 EST
Description of problem:
The NFS server crashed while creating glance images on the OpenStack node.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume and mount it on the OpenStack node using NFS.
2. From the OpenStack node, create 50 glance images:

glance image-create --name=afr_centos_2 --is-public=true --container-format=ovf  --container bare --file /root/CentOS-7-x86_64-GenericCloud-1503.qcow2 --disk-format=qcow2

3. While image creation is in progress, the NFS server crashes; the backtrace is given below.

[root@rhs-client18 core]# gdb /usr/sbin/glusterfs core.1161.1446714159.dump
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-64.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
Reading symbols from /usr/sbin/glusterfsd...Reading symbols from /usr/lib/debug/usr/sbin/glusterfsd.debug...done.

warning: core file may not match specified executable file.
[New LWP 1215]
[New LWP 1166]
[New LWP 1164]
[New LWP 1214]
[New LWP 1167]
[New LWP 1165]
[New LWP 1163]
[New LWP 1161]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/'.
Program terminated with signal 11, Segmentation fault.
#0  nfs3_access (req=req@entry=0x7f9ce80fc06c, fh=fh@entry=0x7f9cdbffeae0, accbits=<optimized out>) at nfs3.c:1680
1680	                nfs3_log_common_res (rpcsvc_request_xid (req),
Missing separate debuginfos, use: debuginfo-install glibc-2.17-78.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.12.2-14.el7.x86_64 libacl-2.2.51-12.el7.x86_64 libattr-2.4.46-12.el7.x86_64 libcom_err-1.42.9-7.el7.x86_64 libgcc-4.8.3-9.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 libuuid-2.23.2-22.el7_1.x86_64 openssl-libs-1.0.1e-42.el7_1.9.x86_64 pcre-8.32-14.el7.x86_64 sssd-client-1.12.2-58.el7_1.6.x86_64 xz-libs-5.1.2-9alpha.el7.x86_64 zlib-1.2.7-13.el7.x86_64
(gdb) bt
#0  nfs3_access (req=req@entry=0x7f9ce80fc06c, fh=fh@entry=0x7f9cdbffeae0, accbits=<optimized out>) at nfs3.c:1680
#1  0x00007f9ce3722024 in nfs3svc_access (req=0x7f9ce80fc06c) at nfs3.c:1710
#2  0x00007f9cf5ca5549 in rpcsvc_handle_rpc_call (svc=0x7f9ce401ee20, trans=trans@entry=0x7f9cd4003590, msg=msg@entry=0x7f9cd4001570) at rpcsvc.c:703
#3  0x00007f9cf5ca57ab in rpcsvc_notify (trans=0x7f9cd4003590, mydata=<optimized out>, event=<optimized out>, data=0x7f9cd4001570) at rpcsvc.c:797
#4  0x00007f9cf5ca7883 in rpc_transport_notify (this=this@entry=0x7f9cd4003590, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f9cd4001570) at rpc-transport.c:545
#5  0x00007f9ceab72506 in socket_event_poll_in (this=this@entry=0x7f9cd4003590) at socket.c:2236
#6  0x00007f9ceab753f4 in socket_event_handler (fd=fd@entry=18, idx=idx@entry=8, data=0x7f9cd4003590, poll_in=1, poll_out=0, poll_err=0) at socket.c:2349
#7  0x00007f9cf5f3e8ba in event_dispatch_epoll_handler (event=0x7f9cdbffee80, event_pool=0x7f9cf8102d10) at event-epoll.c:575
#8  event_dispatch_epoll_worker (data=0x7f9ce406b600) at event-epoll.c:678
#9  0x00007f9cf4d45df5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f9cf468c1ad in clone () from /lib64/libc.so.6
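
For reference, step 1 above can be scripted roughly as follows. This is a sketch, not the exact commands used: the hostnames and brick paths are taken from the `gluster vol info` output in the Additional info section, and the mount point `/var/lib/glance/images` is an assumption (the usual glance image store location, not stated in this report).

```
# Sketch of step 1: create the 2x2 distributed-replicate volume
# (brick paths as shown in `gluster vol info afr2x2` below)
gluster volume create afr2x2 replica 2 \
    rhs-client18.lab.eng.blr.redhat.com:/rhs/brick3/afr2x2 \
    rhs-client19.lab.eng.blr.redhat.com:/rhs/brick3/afr2x2 \
    rhs-client18.lab.eng.blr.redhat.com:/rhs/brick4/afr2x2 \
    rhs-client19.lab.eng.blr.redhat.com:/rhs/brick4/afr2x2
gluster volume start afr2x2

# On the OpenStack node: mount via gluster-nfs (NFSv3);
# /var/lib/glance/images is an assumed mount point
mount -t nfs -o vers=3 \
    rhs-client19.lab.eng.blr.redhat.com:/afr2x2 /var/lib/glance/images
```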

Actual results:
The NFS server crashes with SIGSEGV in nfs3_access() at nfs3.c:1680 (see backtrace above).
Expected results:
The NFS server should not crash.

Additional info:

[root@rhs-client19 images]# gluster vol status afr2x2
Status of volume: afr2x2
Gluster process                                               TCP Port  RDMA Port  Online  Pid
Brick rhs-client18.lab.eng.blr.redhat.com:/rhs/brick3/afr2x2  49156     0          Y       2106
Brick rhs-client19.lab.eng.blr.redhat.com:/rhs/brick3/afr2x2  49155     0          Y       24225
Brick rhs-client18.lab.eng.blr.redhat.com:/rhs/brick4/afr2x2  49157     0          Y       2124
Brick rhs-client19.lab.eng.blr.redhat.com:/rhs/brick4/afr2x2  49156     0          Y       24243
NFS Server on localhost                                       2049      0          Y       25016
Self-heal Daemon on localhost                                 N/A       N/A        Y       25024
NFS Server on rhs-client18.lab.eng.blr.redhat.com             2049      0          Y       2571
Self-heal Daemon on rhs-client18.lab.eng.blr.redhat.com       N/A       N/A        Y       2583

Task Status of Volume afr2x2
There are no active volume tasks
[root@rhs-client19 images]# gluster vol info afr2x2
Volume Name: afr2x2
Type: Distributed-Replicate
Volume ID: dbf7ab58-21a1-4951-b8ae-44e3aaa4c0ea
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Brick1: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick3/afr2x2
Brick2: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick3/afr2x2
Brick3: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick4/afr2x2
Brick4: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick4/afr2x2
Options Reconfigured:
performance.readdir-ahead: on

client IP: rhs-client9.lab.eng.blr.redhat.com
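
For anyone re-examining the attached core, the backtrace above can be regenerated without an interactive gdb session. A sketch (the core filename is the one from the gdb session above; the debuginfo packages are a subset of those gdb itself reported as missing):

```
# Install the debuginfo gdb reported as missing
debuginfo-install glusterfs glibc krb5-libs openssl-libs

# Dump a full backtrace of all threads to a file
gdb --batch -ex "thread apply all bt full" \
    /usr/sbin/glusterfs core.1161.1446714159.dump > bt-full.txt
```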
Comment 2 RajeshReddy 2015-11-05 05:24:37 EST
sosreports are available at /home/repo/sosreports/bug.1278339 on rhsqe-repo.lab.eng.blr.redhat.com
Comment 4 RajeshReddy 2015-12-08 06:07:23 EST
Tested with glusterfs-server-3.7.5-9; the reported issue no longer reproduces.
Comment 5 Jiffin 2016-01-07 02:34:56 EST
If the issue is no longer reproducible, can you please close this bug?
Comment 6 RajeshReddy 2016-01-07 04:47:33 EST
Tested with glusterfs-server-3.7.5-14; the reported issue no longer reproduces.
