Bug 1421719
Summary: | volume stop generates error log | | |
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj> |
Component: | glusterd | Assignee: | Atin Mukherjee <amukherj> |
Status: | CLOSED WORKSFORME | QA Contact: | |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | mainline | CC: | bugs, jdarcy |
Target Milestone: | --- | Keywords: | Triaged |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | brick-multiplexing-testing | ||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-08-09 07:58:28 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Description
Atin Mukherjee
2017-02-13 14:11:10 UTC
How reproducible: 1/1

Steps to Reproduce:
1. Create a volume.
2. Stop the volume.

Actual results: Error messages are seen in the glusterd log.

Expected results: No error messages should be seen.

Is this really *constant*? When I ran this on one of my own machines, I got exactly two of these messages, one per brick. How many bricks was your test using?

This is probably because the RPC connection was closed while the GLUSTERD_BRICK_TERMINATE requests themselves were still outstanding (see glusterfs_handle_terminate). This isn't seen in the non-multiplexing case, because there we just send a SIGKILL instead of an RPC, but that's not an option here. Workarounds are likely to require significant rearranging of a control flow that has already proven quite fragile, which would be more disruptive than some log messages, so I recommend against spending time on this until true release blockers have been fixed.

I'm no longer seeing this in the latest upstream mainline.