Bug 1698919
| Field | Value |
|---|---|
| Summary: | Brick is not able to detach successfully in brick_mux environment |
| Product: | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter: | Mohit Agrawal <moagrawa> |
| Component: | core |
| Assignee: | Mohit Agrawal <moagrawa> |
| Status: | CLOSED ERRATA |
| QA Contact: | Upasana <ubansal> |
| Severity: | urgent |
| Docs Contact: | |
| Priority: | urgent |
| Version: | rhgs-3.5 |
| CC: | nchilaka, rhinduja, rhs-bugs, sheggodu, storage-qa-internal |
| Target Milestone: | --- |
| Target Release: | RHGS 3.5.0 |
| Hardware: | x86_64 |
| OS: | Linux |
| Whiteboard: | |
| Fixed In Version: | glusterfs-6.0-2 |
| Doc Type: | If docs needed, set a value |
| Doc Text: | |
| Story Points: | --- |
| Clone Of: | |
| : | 1699023 1699025 1699714 (view as bug list) |
| Environment: | |
| Last Closed: | 2019-10-30 12:20:51 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Embargoed: | |
| Bug Depends On: | |
| Bug Blocks: | 1696807, 1699023, 1699025, 1699714 |
Upstream patch: https://review.gluster.org/#/c/glusterfs/+/22549/

Downstream patch to resolve the same: https://code.engineering.redhat.com/gerrit/#/c/167826/

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249
Description of problem:
A brick is not detached successfully while brick_mux is enabled.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Set up a 3-node cluster environment.
2. Enable brick_mux.
3. Run the loop below to create and start 50 volumes:

   ```
   for i in {1..50}; do
       gluster v create testvol$i replica 3 <node1>:/home/testvol/b$i <node2>:/home/testvol/b$i <node3>:/home/testvol/b$i force
       gluster v start testvol$i
   done
   ```

4. Run the loop below to stop the volumes:

   ```
   for i in {2..50}; do
       gluster v stop testvol$i --mode=script
       sleep 1
   done
   ```

5. After the loop finishes, check the bricks still open in the running process:

   ```
   ls -lrth /proc/`pgrep glusterfsd`/fd | grep b | grep -v .glusterfs
   ```

   The command shows that multiple bricks are still part of the running process.

Actual results:
Bricks are not detached successfully.

Expected results:
Bricks should be detached successfully.

Additional info:
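The fd check in step 5 can be factored into a small helper so it is easy to rerun after each volume stop. A minimal sketch, assuming the brick paths from the reproduction steps (`/home/testvol/b<N>`); the `brick_fds` function name and the commented usage line are illustrative, not part of the original report:

```shell
#!/bin/sh
# brick_fds: filter an `ls -l /proc/<pid>/fd` listing down to brick
# directory fds under the testvol brick paths, excluding internal
# .glusterfs handles -- this mirrors the grep pipeline from step 5.
brick_fds() {
    grep '/testvol/b' | grep -v '\.glusterfs'
}

# Example usage against the running brick process (requires glusterfsd):
#   pid=$(pgrep -o glusterfsd) && ls -l "/proc/$pid/fd" | brick_fds | wc -l
# After stopping a volume, its brick path should disappear from the output;
# in this bug, stopped bricks keep showing up.
```

A nonzero count after all volumes except testvol1 are stopped indicates bricks that were not detached from the multiplexed process.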