Bug 1677119 - [Samba-CTDB] ctdb status is unhealthy after upgrading to glusterfs-3.12.2-42.el7rhgs.x86_64
Summary: [Samba-CTDB] ctdb status is unhealthy after upgrading to glusterfs-3.12.2-42.el7rhgs.x86_64
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 4
Assignee: Amar Tumballi
QA Contact: Vivek Das
URL:
Whiteboard:
Depends On: 1676904
Blocks:
 
Reported: 2019-02-14 05:43 UTC by Vivek Das
Modified: 2019-03-27 07:48 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.12.2-43
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-27 03:43:40 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:0658 (last updated 2019-03-27 03:44:49 UTC)

Description Vivek Das 2019-02-14 05:43:35 UTC
Description of problem:
After upgrading from the BU3 build to the latest BU4 gluster build, i.e. glusterfs-3.12.2-42.el7rhgs.x86_64, Samba is not coming up; CTDB remains in an unhealthy state.

Version-Release number of selected component (if applicable):
glusterfs-3.12.2-42.el7rhgs.x86_64
samba-4.8.5-104.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Set up BU3 with Samba and CTDB.
2. Upgrade to the BU4 builds (glusterfs-3.12.2-42.el7rhgs.x86_64, samba-4.8.5-104.el7rhgs.x86_64).
3. Run ctdb status (see the command sketch below).
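
A minimal sketch of the verification commands referenced in the steps above; node names and repository setup are omitted, and the commands are to be run on each CTDB node:

# After the upgrade, confirm the installed builds and check CTDB health
rpm -qa | grep -E 'glusterfs|samba|ctdb'
ctdb status    # expected: all nodes OK; actual here: all nodes UNHEALTHY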

Actual results:
CTDB status is unhealthy for all nodes

Expected results:
ctdb status should be OK

Additional info: gdb backtrace of the glusterfs FUSE client process; note that send_fuse_err is entered with finh=0x0 (a NULL fuse request header):

#0  send_fuse_err (this=this@entry=0x55efe4d28130, finh=finh@entry=0x0, error=error@entry=0)
    at fuse-bridge.c:796
#1  0x00007f50feb4e682 in fuse_err_cbk (frame=0x7f50e8010ee8, cookie=<optimized out>, 
    this=0x55efe4d28130, op_ret=0, op_errno=117, xdata=<optimized out>) at fuse-bridge.c:1761
#2  0x00007f50f3dead7d in io_stats_flush_cbk (frame=0x7f50e8011d28, cookie=<optimized out>, 
    this=<optimized out>, op_ret=0, op_errno=117, xdata=0x0) at io-stats.c:2294
#3  0x00007f5107817b0a in default_flush_cbk (frame=0x7f50e800d288, cookie=<optimized out>, 
    this=<optimized out>, op_ret=0, op_errno=117, xdata=0x0) at defaults.c:1046
#4  0x00007f50f88c968a in ra_flush_cbk (frame=0x7f50e800b458, cookie=<optimized out>, 
    this=<optimized out>, op_ret=0, op_errno=117, xdata=0x0) at read-ahead.c:565
#5  0x00007f5107817b0a in default_flush_cbk (frame=0x7f50e8011e88, cookie=<optimized out>, 
    this=<optimized out>, op_ret=0, op_errno=117, xdata=0x0) at defaults.c:1046
#6  0x00007f50f8d5c32f in dht_flush_cbk (frame=0x7f50e8011808, cookie=<optimized out>, 
    this=<optimized out>, op_ret=0, op_errno=117, xdata=0x0) at dht-inode-read.c:770
#7  0x00007f50f8fe6032 in afr_flush_cbk (frame=0x7f50e800dba8, cookie=<optimized out>, 
    this=<optimized out>, op_ret=<optimized out>, op_errno=<optimized out>, xdata=<optimized out>)
    at afr-common.c:3508
#8  0x00007f50f92264fb in client3_3_flush_cbk (req=<optimized out>, iov=<optimized out>, 
    count=<optimized out>, myframe=0x7f50e800e118) at client-rpc-fops.c:899
#9  0x00007f510753aa00 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7f50f405d870, 
    pollin=pollin@entry=0x7f50ec0094c0) at rpc-clnt.c:778
#10 0x00007f510753ad6b in rpc_clnt_notify (trans=<optimized out>, mydata=0x7f50f405d8a0, 
    event=<optimized out>, data=0x7f50ec0094c0) at rpc-clnt.c:971
#11 0x00007f5107536ae3 in rpc_transport_notify (this=this@entry=0x7f50f405dbc0, 
    event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7f50ec0094c0)
    at rpc-transport.c:557
#12 0x00007f50fc128586 in socket_event_poll_in (this=this@entry=0x7f50f405dbc0, 
    notify_handled=<optimized out>) at socket.c:2322
#13 0x00007f50fc12abca in socket_event_handler (fd=15, idx=3, gen=1, data=0x7f50f405dbc0, 
    poll_in=<optimized out>, poll_out=<optimized out>, poll_err=0, event_thread_died=0 '\000')
    at socket.c:2482
#14 0x00007f51077f3870 in event_dispatch_epoll_handler (event=0x7f50fa467e70, 
    event_pool=0x55efe4d20150) at event-epoll.c:643
#15 event_dispatch_epoll_worker (data=0x55efe4d7e040) at event-epoll.c:759
#16 0x00007f51065d0dd5 in start_thread () from /lib64/libpthread.so.0
#17 0x00007f5105e98ead in clone () from /lib64/libc.so.6
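
For reference, a minimal sketch of how a backtrace like the one above can be captured; the single-process assumption and the pidof target are mine, not from the report:

# Attach gdb to the fuse client and dump backtraces for all threads
# (assumes exactly one glusterfs process and glusterfs-debuginfo installed)
gdb -p $(pidof glusterfs) -batch -ex 'thread apply all bt'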

Comment 2 Amar Tumballi 2019-02-14 06:13:12 UTC
Please upgrade to the .43 build (glusterfs-3.12.2-43); we had a blocker issue with the .42 builds.
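
A minimal sketch of the upgrade and re-check, assuming the configured repositories already serve the -43 build; the CTDB restart step is an assumption, not from the comment:

# On each node: pull the fixed build, then re-check CTDB health
yum update -y 'glusterfs*'
rpm -q glusterfs          # should report glusterfs-3.12.2-43.el7rhgs or later
systemctl restart ctdb    # assumption: restart so CTDB picks up the fixed client
ctdb status               # all nodes should report OK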

Comment 5 Amar Tumballi 2019-03-22 04:39:31 UTC
This happened as a transient issue between builds. No need to document anything here.

Comment 7 errata-xmlrpc 2019-03-27 03:43:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0658

