Bug 1390838 - write-behind: flush stuck by former failed write
Summary: write-behind: flush stuck by former failed write
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: write-behind
Version: 3.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1372211 1390840 1390843
Blocks: 1390837
 
Reported: 2016-11-02 04:31 UTC by Raghavendra G
Modified: 2016-11-29 09:37 UTC
CC: 4 users

Fixed In Version: glusterfs-3.8.6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1372211
Environment:
Last Closed: 2016-11-29 09:37:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Raghavendra G 2016-11-02 04:31:46 UTC
+++ This bug was initially created as a clone of Bug #1372211 +++

Description of problem:
While running the LTP test suite, if a WRITE op fails, the running process gets stuck. The kernel stack of the stuck process is:
[root@dynode1 ~]# cat /proc/16449/stack 
[<ffffffffa0436241>] wait_answer_interruptible+0x91/0xe0 [fuse]
[<ffffffffa0436653>] __fuse_request_send+0x253/0x2c0 [fuse]
[<ffffffffa04366d2>] fuse_request_send+0x12/0x20 [fuse]
[<ffffffffa043f77f>] fuse_flush+0xff/0x150 [fuse]
[<ffffffff811dc8a4>] filp_close+0x34/0x80
[<ffffffff811fce68>] put_files_struct+0x88/0xe0
[<ffffffff811fcf67>] exit_files+0x47/0x50
[<ffffffff810815d6>] do_exit+0x2b6/0xa60
[<ffffffff81081dff>] do_group_exit+0x3f/0xa0
[<ffffffff81081e74>] SyS_exit_group+0x14/0x20
[<ffffffff81646889>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff

glusterdump logs:
[xlator.performance.write-behind.wb_inode]
path=/ltp-F9eG0ZSOME/rw-buffered-16436
inode=0x7fdbe8039b9c
window_conf=1048576
window_current=249856
transit-size=0
dontsync=0

[.WRITE]
request-ptr=0x7fdbe8020200
refcount=1
wound=no
generation-number=4
req->op_ret=-1
req->op_errno=116
sync-attempts=3
sync-in-progress=no
size=131072
offset=1220608
lied=-1
append=0
fulfilled=0
go=0

[.WRITE]
request-ptr=0x7fdbe8068c30
refcount=1
wound=no
generation-number=5
req->op_ret=-1
req->op_errno=116
sync-attempts=2
sync-in-progress=no
size=118784
offset=1351680
lied=-1
append=0
fulfilled=0
go=0

[.FLUSH]
request-ptr=0x7fdbe8021cd0
refcount=1
wound=no
generation-number=6
req->op_ret=0
req->op_errno=0
sync-attempts=0

gdb details about the above 3 requests:
(gdb) print *((wb_request_t *)0x7fdbe8021cd0)
$2 = {all = {next = 0x7fdbe803a608, prev = 0x7fdbe8068c30}, todo = {next = 0x7fdbe803a618, prev = 0x7fdbe8068c40}, lie = {next = 0x7fdbe8021cf0, 
    prev = 0x7fdbe8021cf0}, winds = {next = 0x7fdbe8021d00, prev = 0x7fdbe8021d00}, unwinds = {next = 0x7fdbe8021d10, prev = 0x7fdbe8021d10}, wip = {
    next = 0x7fdbe8021d20, prev = 0x7fdbe8021d20}, stub = 0x7fdbe80224dc, write_size = 0, orig_size = 0, total_size = 0, op_ret = 0, op_errno = 0, 
  refcount = 1, wb_inode = 0x7fdbe803a5f0, fop = GF_FOP_FLUSH, lk_owner = {len = 8, data = "W\322T\f\271\367y$", '\000' <repeats 1015 times>}, 
  iobref = 0x0, gen = 6, fd = 0x7fdbe800f0dc, wind_count = 0, ordering = {size = 0, off = 0, append = 0, tempted = 0, lied = 0, fulfilled = 0, 
    go = 0}}
(gdb) print *((wb_request_t *)0x7fdbe8020200)
$3 = {all = {next = 0x7fdbe8068c30, prev = 0x7fdbe803a608}, todo = {next = 0x7fdbe8068c40, prev = 0x7fdbe803a618}, lie = {next = 0x7fdbe8068c50, 
    prev = 0x7fdbe803a628}, winds = {next = 0x7fdbe8020230, prev = 0x7fdbe8020230}, unwinds = {next = 0x7fdbe8020240, prev = 0x7fdbe8020240}, wip = {
    next = 0x7fdbe8020250, prev = 0x7fdbe8020250}, stub = 0x7fdbe8062c3c, write_size = 131072, orig_size = 4096, total_size = 0, op_ret = -1, 
  op_errno = 116, refcount = 1, wb_inode = 0x7fdbe803a5f0, fop = GF_FOP_WRITE, lk_owner = {len = 8, data = '\000' <repeats 1023 times>}, 
  iobref = 0x7fdbe80311a0, gen = 4, fd = 0x7fdbe805c89c, wind_count = 3, ordering = {size = 131072, off = 1220608, append = 0, tempted = -1, 
    lied = -1, fulfilled = 0, go = 0}}
(gdb) print *((wb_request_t *)0x7fdbe8068c30)
$4 = {all = {next = 0x7fdbe8021cd0, prev = 0x7fdbe8020200}, todo = {next = 0x7fdbe8021ce0, prev = 0x7fdbe8020210}, lie = {next = 0x7fdbe803a628, 
    prev = 0x7fdbe8020220}, winds = {next = 0x7fdbe8068c60, prev = 0x7fdbe8068c60}, unwinds = {next = 0x7fdbe8068c70, prev = 0x7fdbe8068c70}, wip = {
    next = 0x7fdbe8068c80, prev = 0x7fdbe8068c80}, stub = 0x7fdbe806746c, write_size = 118784, orig_size = 4096, total_size = 0, op_ret = -1, 
  op_errno = 116, refcount = 1, wb_inode = 0x7fdbe803a5f0, fop = GF_FOP_WRITE, lk_owner = {len = 8, data = '\000' <repeats 1023 times>}, 
  iobref = 0x7fdbe8052b10, gen = 5, fd = 0x7fdbe805c89c, wind_count = 2, ordering = {size = 118784, off = 1351680, append = 0, tempted = -1, 
    lied = -1, fulfilled = 0, go = 0}}
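
For reference, below is a minimal user-space sketch of the scenario that triggers this hang (two descriptors open on the same file, buffered writes on one of them failing in the background, and the other descriptor being closed). The mount path, file name and sizes are illustrative, not taken from the LTP test; on an affected volume the close() of the second fd blocks in the FUSE FLUSH, matching the kernel stack above.

/* Hypothetical reproducer sketch, NOT the actual LTP test case.
 * Assumes a FUSE-mounted GlusterFS volume at /mnt/gluster whose
 * background syncs of the buffered writes fail (op_errno 116 above). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[131072];
    memset(buf, 'a', sizeof(buf));

    /* fd1 and fd2 refer to the same file through separate opens. */
    int fd1 = open("/mnt/gluster/testfile", O_CREAT | O_WRONLY, 0644);
    int fd2 = open("/mnt/gluster/testfile", O_WRONLY);
    if (fd1 < 0 || fd2 < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Buffered ("lied") writes on fd1: write-behind acknowledges them
     * immediately and syncs them to the brick in the background. When
     * that background sync fails, the requests stay on the 'todo' queue. */
    if (write(fd1, buf, sizeof(buf)) < 0)
        perror("write");

    /* Closing fd2 makes the kernel send a FUSE FLUSH for fd2. Without the
     * fix, that FLUSH waits behind the failed writes issued on fd1, so
     * this close() never returns and the process hangs (also at exit). */
    if (close(fd2) < 0)
        perror("close fd2");

    close(fd1);
    return EXIT_SUCCESS;
}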

--- Additional comment from Worker Ant on 2016-09-06 02:29:21 EDT ---

REVIEW: http://review.gluster.org/15380 (performance/write-behind: fix flush stuck by former failed writes) posted (#2) for review on master by Oleksandr Natalenko (oleksandr)

--- Additional comment from Worker Ant on 2016-11-01 06:06:10 EDT ---

REVIEW: http://review.gluster.org/15380 (performance/write-behind: fix flush stuck by former failed writes) posted (#3) for review on master by Ryan Ding (ryan.ding)

Comment 1 Worker Ant 2016-11-02 04:34:53 UTC
REVIEW: http://review.gluster.org/15762 (performance/write-behind: fix flush stuck by former failed writes) posted (#1) for review on release-3.8 by Raghavendra G (rgowdapp)

Comment 2 Worker Ant 2016-11-03 09:28:17 UTC
COMMIT: http://review.gluster.org/15762 committed in release-3.8 by Raghavendra G (rgowdapp) 
------
commit 7c4f26a7a966729da3f3a2d30a590dee23cae3f4
Author: Ryan Ding <ryan.ding>
Date:   Thu Sep 1 15:40:35 2016 +0800

    performance/write-behind: fix flush stuck by former failed writes
    
    the issue happens in this case:
    assume a file is opened with fd1 and fd2.
    1. some WRITE ops to fd1 got an error; they were added back to the
       'todo' queue because of those errors.
    2. fd2 is closed, so a FLUSH op is sent to write-behind.
    3. the FLUSH cannot be unwound because it is not a legal waiter for
       those failed writes (as __wb_request_waiting_on() determines), and
       those failed WRITEs also cannot be completed while fd1 is still
       open. fd2 is stuck in the close syscall.

    to resolve this issue, we change the way we determine whether 2
    requests 'conflict': a flush/fsync does not conflict with writes that
    do not belong to it (i.e. that were issued on a different fd), so
    __wb_pick_winds() can wind the FLUSH op.

    below is some information from when the stuck issue happened:
    glusterdump logs:
    [xlator.performance.write-behind.wb_inode]
    path=/ltp-F9eG0ZSOME/rw-buffered-16436
    inode=0x7fdbe8039b9c
    window_conf=1048576
    window_current=249856
    transit-size=0
    dontsync=0
    
    [.WRITE]
    request-ptr=0x7fdbe8020200
    refcount=1
    wound=no
    generation-number=4
    req->op_ret=-1
    req->op_errno=116
    sync-attempts=3
    sync-in-progress=no
    size=131072
    offset=1220608
    lied=-1
    append=0
    fulfilled=0
    go=0
    
    [.WRITE]
    request-ptr=0x7fdbe8068c30
    refcount=1
    wound=no
    generation-number=5
    req->op_ret=-1
    req->op_errno=116
    sync-attempts=2
    sync-in-progress=no
    size=118784
    offset=1351680
    lied=-1
    append=0
    fulfilled=0
    go=0
    
    [.FLUSH]
    request-ptr=0x7fdbe8021cd0
    refcount=1
    wound=no
    generation-number=6
    req->op_ret=0
    req->op_errno=0
    sync-attempts=0
    
    gdb details about the above 3 requests:
    (gdb) print *((wb_request_t *)0x7fdbe8021cd0)
    $2 = {all = {next = 0x7fdbe803a608, prev = 0x7fdbe8068c30}, todo = {next = 0x7fdbe803a618, prev = 0x7fdbe8068c40}, lie = {next = 0x7fdbe8021cf0,
        prev = 0x7fdbe8021cf0}, winds = {next = 0x7fdbe8021d00, prev = 0x7fdbe8021d00}, unwinds = {next = 0x7fdbe8021d10, prev = 0x7fdbe8021d10}, wip = {
        next = 0x7fdbe8021d20, prev = 0x7fdbe8021d20}, stub = 0x7fdbe80224dc, write_size = 0, orig_size = 0, total_size = 0, op_ret = 0, op_errno = 0,
      refcount = 1, wb_inode = 0x7fdbe803a5f0, fop = GF_FOP_FLUSH, lk_owner = {len = 8, data = "W\322T\f\271\367y$", '\000' <repeats 1015 times>},
      iobref = 0x0, gen = 6, fd = 0x7fdbe800f0dc, wind_count = 0, ordering = {size = 0, off = 0, append = 0, tempted = 0, lied = 0, fulfilled = 0,
        go = 0}}
    (gdb) print *((wb_request_t *)0x7fdbe8020200)
    $3 = {all = {next = 0x7fdbe8068c30, prev = 0x7fdbe803a608}, todo = {next = 0x7fdbe8068c40, prev = 0x7fdbe803a618}, lie = {next = 0x7fdbe8068c50,
        prev = 0x7fdbe803a628}, winds = {next = 0x7fdbe8020230, prev = 0x7fdbe8020230}, unwinds = {next = 0x7fdbe8020240, prev = 0x7fdbe8020240}, wip = {
        next = 0x7fdbe8020250, prev = 0x7fdbe8020250}, stub = 0x7fdbe8062c3c, write_size = 131072, orig_size = 4096, total_size = 0, op_ret = -1,
      op_errno = 116, refcount = 1, wb_inode = 0x7fdbe803a5f0, fop = GF_FOP_WRITE, lk_owner = {len = 8, data = '\000' <repeats 1023 times>},
      iobref = 0x7fdbe80311a0, gen = 4, fd = 0x7fdbe805c89c, wind_count = 3, ordering = {size = 131072, off = 1220608, append = 0, tempted = -1,
        lied = -1, fulfilled = 0, go = 0}}
    (gdb) print *((wb_request_t *)0x7fdbe8068c30)
    $4 = {all = {next = 0x7fdbe8021cd0, prev = 0x7fdbe8020200}, todo = {next = 0x7fdbe8021ce0, prev = 0x7fdbe8020210}, lie = {next = 0x7fdbe803a628,
        prev = 0x7fdbe8020220}, winds = {next = 0x7fdbe8068c60, prev = 0x7fdbe8068c60}, unwinds = {next = 0x7fdbe8068c70, prev = 0x7fdbe8068c70}, wip = {
        next = 0x7fdbe8068c80, prev = 0x7fdbe8068c80}, stub = 0x7fdbe806746c, write_size = 118784, orig_size = 4096, total_size = 0, op_ret = -1,
      op_errno = 116, refcount = 1, wb_inode = 0x7fdbe803a5f0, fop = GF_FOP_WRITE, lk_owner = {len = 8, data = '\000' <repeats 1023 times>},
      iobref = 0x7fdbe8052b10, gen = 5, fd = 0x7fdbe805c89c, wind_count = 2, ordering = {size = 118784, off = 1351680, append = 0, tempted = -1,
        lied = -1, fulfilled = 0, go = 0}}
    
    you can see they are all on the 'todo' queue, and the FLUSH op's fd is
    not the same as the WRITE ops' fd.
    
    > Change-Id: Id687f9cd3b9f281e1a97c83f1ce981ede272b8ab
    > BUG: 1372211
    > Signed-off-by: Ryan Ding <ryan.ding>
    
    Change-Id: Id687f9cd3b9f281e1a97c83f1ce981ede272b8ab
    BUG: 1390838
    Signed-off-by: Ryan Ding <ryan.ding>
    Reviewed-on: http://review.gluster.org/15762
    Tested-by: Raghavendra G <rgowdapp>
    Reviewed-by: Raghavendra G <rgowdapp>
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
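
The commit above changes how two requests are judged to 'conflict' so that __wb_pick_winds() can wind the FLUSH. The following standalone C model illustrates the idea; the type names, field names and function names are modeled on the wb_request_t fields visible in the gdb output and are invented for illustration, not the actual write-behind.c code.

/* Standalone model of the relaxed conflict rule; illustrative only. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef enum { FOP_WRITE, FOP_FLUSH, FOP_FSYNC } fop_t;

typedef struct {
    fop_t  fop;
    void  *fd;      /* which open fd the request was issued on */
    size_t off;     /* write offset (WRITE only) */
    size_t size;    /* write size (WRITE only) */
} request_t;

/* Two WRITEs conflict when their byte ranges overlap. */
static bool ranges_overlap(const request_t *a, const request_t *b)
{
    return a->off < b->off + b->size && b->off < a->off + a->size;
}

/* Old behaviour: any later request queued behind a pending write waits
 * for it, so a FLUSH stuck behind failed writes can never be wound.
 * New behaviour (the fix): a FLUSH/FSYNC only conflicts with pending
 * writes that were issued on the same fd. */
static bool requests_conflict(const request_t *pending_write,
                              const request_t *req)
{
    if (req->fop == FOP_WRITE)
        return ranges_overlap(pending_write, req);

    /* FLUSH/FSYNC: only wait for writes belonging to its own fd. */
    return pending_write->fd == req->fd;
}

int main(void)
{
    int fd1, fd2; /* stand-ins for the two open file descriptors */
    request_t failed_write = { FOP_WRITE, &fd1, 1220608, 131072 };
    request_t flush_fd2    = { FOP_FLUSH, &fd2, 0, 0 };

    /* With the fix, the FLUSH on fd2 does not conflict with the failed
     * write on fd1, so write-behind can wind it instead of queueing it. */
    printf("conflict: %s\n",
           requests_conflict(&failed_write, &flush_fd2) ? "yes" : "no");
    return 0;
}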

Comment 3 Niels de Vos 2016-11-29 09:37:17 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.6, please open a new bug report.

glusterfs-3.8.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2016-November/000217.html
[2] https://www.gluster.org/pipermail/gluster-users/

