| Summary: | ls of fuse mount hangs on initial stat of directory | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Mike Robbert <mrobbert> |
| Component: | rdma | Assignee: | Raghavendra G <raghavendra> |
| Status: | CLOSED DUPLICATE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | low | | |
| Version: | 3.1.0 | CC: | gluster-bugs, jacob, lakshmipathi |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | fuse |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Mike Robbert
2010-10-25 14:11:09 UTC
I created a new volume using the 3.1.0 GA RPMs, installed one client, and mounted the volume with the native fuse client. When I tried to ls the mount point, the command hung and had to be killed with kill -9. I then ran ls under strace and found that it was hanging on the initial stat of the directory:
open("/proc/mounts", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b376c0ef000
read(3, "rootfs / rootfs rw 0 0\n/dev/root"..., 4096) = 996
read(3, "", 4096) = 0
close(3) = 0
munmap(0x2b376c0ef000, 4096) = 0
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=56434112, ...}) = 0
mmap(NULL, 56434112, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b376c103000
close(3) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=53, ws_col=102, ws_xpixel=0, ws_ypixel=0}) = 0
stat("/mnt/gluster-test",
I looked at the logs on the servers and found these log messages:
[2010-10-25 10:52:14.330504] E [rdma.c:4370:rdma_event_handler] rpc-transport/rdma: rdma.management: pollin received on tcp socket (peer: 172.16.8.80:982) after handshake is complete
[2010-10-25 10:52:17.342965] E [rdma.c:4370:rdma_event_handler] rpc-transport/rdma: rdma.management: pollin received on tcp socket (peer: 172.16.8.80:978) after handshake is complete
[2010-10-25 10:52:20.354721] E [rdma.c:4370:rdma_event_handler] rpc-transport/rdma: rdma.management: pollin received on tcp socket (peer: 172.16.8.80:974) after handshake is complete
This message is being logged every 3 seconds on all servers.
Thanks,
Mike Robbert
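To confirm how often these errors recur on a server, one could watch the management daemon's log; the log path below is the usual GlusterFS 3.x default and is an assumption here, not taken from the report:

server# tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | grep rdma_event_handler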
Comment 2

(In reply to comment #1)
> Unfortunately it returns an empty directory when there is data on the bricks. Do you want yet another bug filed for this new problem?

Did you create data on the mount point?

client# cd /mnt/gluster-test
client# mkdir temp_dir
client# touch temp_dir/{1..1000}.txt
client# ls temp_dir

ls should show you 1000 empty files, and these files should be distributed across your server directories. If you don't find any files with ls, please open a new bug and attach the client and server log files.

Comment 3

(In reply to comment #2)
Nevermind, please feel free to close this ticket. It appears that I jumped the gun on this and did not fully diagnose before opening. It turns out my mount was silently failing, so I was doing an ls on an empty mount point. The reason the mount was failing was that I had forgotten to START the volume after creating it.

Thanks and sorry,
Mike
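Given the root cause noted in comment #3 (the volume was created but never started), a sketch of the full expected workflow; the volume name, brick paths, and rdma transport are illustrative assumptions:

server# gluster volume create test-volume transport rdma server1:/export/brick1 server2:/export/brick1
server# gluster volume start test-volume       # the step that was missed here
server# gluster volume info test-volume        # Status should now read "Started"
client# mount -t glusterfs server1:/test-volume /mnt/gluster-test
client# ls /mnt/gluster-test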