Bug 1415178
| Field | Value |
| --- | --- |
| Summary | systemic testing: seeing lot of ping time outs which would lead to splitbrains |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | rpc |
| Version | rhgs-3.2 |
| Target Release | RHGS 3.3.0 |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | unspecified |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Nag Pavan Chilakam <nchilaka> |
| Assignee | Raghavendra G <rgowdapp> |
| QA Contact | Nag Pavan Chilakam <nchilaka> |
| CC | amukherj, asrivast, kdhananj, mchangir, moagrawa, nchilaka, olim, rabhat, rcyriac, rgowdapp, rhinduja, rhs-bugs, sanandpa, ssaha |
| Fixed In Version | glusterfs-3.8.4-26 |
| Clone(s) | 1421938 |
| Bug Depends On | 1421938, 1427387, 1427390 |
| Bug Blocks | 1417147 |
| Type | Bug |
| Last Closed | 2017-09-21 04:30:55 UTC |
Description (Nag Pavan Chilakam, 2017-01-20 13:31:47 UTC)
I have some queries:

* What was the disk utilization during testing?
* Can you please get profile info of the bricks? I need to see what operations are being performed and their frequency.
* Is it possible to get an strace of the brick process while the tests are running? You can use the following command line:

strace -ff -T -p <pid-of-brick> -o <path-where-you-want-the-strace-output-saved>

The following is a possible hypothesis:

<hypothesis>
The server is not able to read the ping request. Kritika, Pranith and I were discussing an issue where they had hit ping-timer expiry on replicated setups when disk usage was high. The reason, as Pranith pointed out, was:

1. posix has some fops (like posix_xattrop, posix_fxattrop) which do syscalls after holding a lock on the inode (inode->lock).
2. During high disk usage scenarios, syscall latencies were high (sometimes >= the ping-timeout value).
3. Before being handed over to a new thread at the io-threads xlator, a fop gets executed in one of the threads that read incoming messages from the socket. This execution path includes translators like protocol/server, index, quota-enforcer and marker, and these translators might access the inode context, which involves locking the inode (inode->lock). Due to this locking, the syscall latency gets transferred to the poller thread. Since the poller thread is waiting on inode->lock, it cannot read ping requests from the network in time, resulting in ping-timer expiry.
</hypothesis>

Created attachment 1243534 [details]
profile output

Created attachment 1243893 [details]
profile log while strace being taken
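For reference, a minimal sketch of how the profile and strace data requested above can be collected; the volume name, brick PID and output paths are placeholders, not values taken from this bug:

```
# Start profiling on the volume, run the workload, then dump and stop the stats.
gluster volume profile <volname> start
gluster volume profile <volname> info > /tmp/brick-profile.txt
gluster volume profile <volname> stop

# Attach strace to a brick process while the tests are running,
# using the command line suggested in the comment above.
mkdir -p /tmp/brick-strace
strace -ff -T -p <pid-of-brick> -o /tmp/brick-strace/brick
```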
I went through the strace output and the volume profile. Although the volume profile shows large maximum latencies for fops like xattrop/fxattrop, the syscall latencies shown by strace are not high enough to cause a ping timeout. As for the large xattrop/fxattrop latency: these fops are serialized, so when there are too many (f)xattrops on the same file, the last fop can show a latency that is the cumulative latency of all the (f)xattrops queued before it. So that is not really an issue. Nag also informed me that disk utilization was not high when the ping timeout happened, so I doubt that high disk utilization caused this ping timeout. At the moment I don't have an RCA, but we can experiment with two things:

1. Increase the poller threads to 8 on the bricks (see the tuning sketch at the end of this report).
2. Use fix [1].

Note that both of the above steps need to be done before starting the tests.

[1] https://review.gluster.org/15046

Hi,
Can you please confirm the reproducer steps? As per the doc, the steps seem to be:

1) Run the script to create a deep directory structure (5 levels) from two different clients.
2) Run the untar/tar operation on linux.tar 50 times in a directory.

Regards,
Mohit Agrawal

Hi,
I tried to reproduce the issue following the steps mentioned in the previous comment, but did not succeed. I checked the setup (server 10.70.35.20): the VM is accessible, but it seems all resources (CPU, memory) are consumed, and I am not even able to run any Linux command on the VM. On the client side, the messages below keep appearing in the logs:

[2017-02-02 05:26:23.661141] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-systemic-client-4: changing port to 49153 (from 0)
[2017-02-02 05:26:23.667104] E [socket.c:2309:socket_connect_finish] 0-systemic-client-4: connection to 10.70.35.20:49153 failed (Connection refused)

A ping timeout can happen if the brick process is not able to respond within the time period (42 seconds), and here the brick cannot respond because sufficient resources are not available on the VM.

Regards,
Mohit Agrawal

I had shared the setup, hence removing needinfo.

I have seen this on 3.8.4-14 as well, where parallel lookups from 2 clients are being issued. Note that I ran a small script which echoes the date every second on all 4 servers to check for any VM pause, but I didn't find any issue there: the echo worked perfectly fine for 2 days, while the clients did see disconnects. Logs are available at http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/nchilaka/3.2_logs/systemic_testing_logs/regression_cycle/final-reg-round/client-facing-brick-disconnects/ (refer to clients rhs-client12 and rhs-client26).

Upstream patch: https://review.gluster.org/16785

I have tested on 3.3, i.e. 3.8.4-37, for about one week and didn't see any ping timeouts. I ran I/O from multiple clients (similar to what was run when this bz was raised, but not exactly the same). Hence moving to verified. If I hit it again, I may have to reopen.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
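Tuning sketch referenced in the experiments above: a hedged example of raising the brick-side poller thread count. The volume name is a placeholder, and server.event-threads is the usual volume option for the server-side epoll threads in this glusterfs line; the report itself does not name the exact option.

```
# Raise the server-side epoll ("poller") thread count on the bricks to 8,
# as suggested in experiment 1 above. <volname> is a placeholder.
gluster volume set <volname> server.event-threads 8

# Optionally confirm the ping-timeout currently in effect (42 seconds by default).
gluster volume get <volname> network.ping-timeout
```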