Bug 1122794 - Uninterruptible processes when accessing a file
Summary: Uninterruptible processes when accessing a file
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-24 06:26 UTC by liuyong
Modified: 2015-10-07 12:10 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-10-07 12:10:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
attach glusterfs client statedump info. (129.08 KB, application/x-zip-compressed)
2014-07-24 06:26 UTC, liuyong

Description liuyong 2014-07-24 06:26:48 UTC
Created attachment 920447 [details]
attach glusterfs client statedump info.

Description of problem:
I have a KVM virtual machine system disk stored on the gluster volume. Querying it with "qemu-img info /path/to/file" causes the caller to hang, and the process cannot be terminated even with kill -9.
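
A minimal way to confirm the symptom (a sketch, assuming the volume is FUSE-mounted at /var/lib/nova/instances as in the df output under Additional info; the image path is taken from the process listing there):

# Run the query against a disk image on the gluster FUSE mount:
qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk &
# When the bug hits, the process sits in uninterruptible sleep ("D" state):
ps -o pid,stat,wchan:32,args -C qemu-img
kill -9 <pid>            # has no effect on a D-state process
cat /proc/<pid>/stack    # kernel stack ends in fuse_request_send, as shown below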

Version-Release number of selected component (if applicable):

glusterfs 3.4.0 built on Aug  6 2013 11:17:07
fuse-libs-2.8.3-4.el6.x86_64
glusterfs-fuse-3.4.0-8.el6.x86_64

[root@ss01 ~]# gluster volume info
 
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 2ddc1de4-e4c3-4be5-ae07-d12d98c60f5a
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 172.16.21.24:/export/brick1
Brick2: 172.16.21.25:/export/brick1
Brick3: 172.16.21.26:/export/brick1
Brick4: 172.16.21.27:/export/brick1
Brick5: 172.16.21.28:/export/brick1
Brick6: 172.16.21.29:/export/brick1
Options Reconfigured:
performance.io-cache: off
server.statedump-path: /var/log/glusterfs/dumps/
performance.io-thread-count: 64
[root@ss01 ~]# 

How reproducible:
I don't know the exact steps to reproduce it.
It reproduces when the volume is used in a virtualized environment with OpenStack.

Steps to Reproduce:
1.
2.
3.

Actual results:
qemu-img info hangs forever, and the used space reported for the volume on the client is inconsistent (see the df -h output below, where the same volume shows 520G used on one mount and 522G on the other).

Expected results:


Additional info:
[root@nn06 ~]# ps aux | grep qemu-img
root      6861  0.0  0.0  17860   860 ?        D    11:27   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     18942  0.0  0.0  17860   864 ?        S    Jul21   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     32195  0.0  0.0  17860   860 ?        S    09:20   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     46746  0.0  0.0  17860   860 ?        D    10:07   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     47326  0.0  0.0  17860   860 ?        D    10:09   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     51980  0.0  0.0  17860   860 ?        D    10:24   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     52990  0.0  0.0  17860   860 ?        D    10:27   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     60021  0.0  0.0  17860   860 ?        D    10:50   0:00 qemu-img info /var/lib/nova/instances/private_dev_sdz/instance-0000048a/disk
root     60448  0.0  0.0 103252   880 pts/20   S+   14:24   0:00 grep qemu-img
You have mail in /var/spool/mail/root
[root@nn06 ~]# cat /proc/60021/stack 
[<ffffffffa02d3e55>] fuse_request_send+0xe5/0x290 [fuse]
[<ffffffffa02d92b6>] fuse_flush+0x106/0x140 [fuse]
[<ffffffff8117847c>] filp_close+0x3c/0x90
[<ffffffff81178575>] sys_close+0xa5/0x100
[<ffffffff8100b308>] tracesys+0xd9/0xde
[<ffffffffffffffff>] 0xffffffffffffffff
[root@nn06 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              99G  9.4G   85G  11% /
tmpfs                 127G     0  127G   0% /dev/shm
/dev/sda1             194M   32M  153M  18% /boot
/dev/sda4             2.4T  223G  2.0T  10% /var/lib/nova/instances_local
172.16.21.29:/gv0      32T  520G   30T   2% /var/lib/nova/instances
172.16.21.29:/gv0      32T  522G   30T   2% /var/lib/cinder/volumes/aa94baff1dde36d5f9325a53bf564188
[root@nn06 ~]#
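
As a diagnostic aid (a sketch, not part of the original report): the glusterfs FUSE client writes a statedump when it receives SIGUSR1, by default under /var/run/gluster on the client. Something along these lines should produce the kind of statedump attached to this bug (the mount path is assumed from the df output above):

# Find the glusterfs client process serving the hung mount:
ps aux | grep '[g]lusterfs' | grep /var/lib/nova/instances
# Ask it to dump its state (call frames, inode/fd tables, mem pools):
kill -USR1 <glusterfs-client-pid>
# The dump files appear as glusterdump.<pid>.dump.<timestamp>:
ls -l /var/run/gluster/

For the brick side, "gluster volume statedump gv0" should write dumps to the configured server.statedump-path (/var/log/glusterfs/dumps/ in the volume info above).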

Comment 1 Niels de Vos 2015-05-17 22:01:03 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 2 Kaleb KEITHLEY 2015-10-07 12:10:06 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.

