Bug 1763030
| Summary: | Set volume option when one of the nodes is powered off; after powering on the node, brick processes are offline (with brick-mux enabled) | |||
|---|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Bala Konda Reddy M <bmekala> | |
| Component: | glusterd | Assignee: | Srijan Sivakumar <ssivakum> | |
| Status: | CLOSED ERRATA | QA Contact: | milind <mwaykole> | |
| Severity: | high | Docs Contact: | ||
| Priority: | unspecified | |||
| Version: | rhgs-3.5 | CC: | moagrawa, pasik, pprakash, puebele, rhs-bugs, rkothiya, sheggodu, storage-qa-internal | |
| Target Milestone: | --- | |||
| Target Release: | RHGS 3.5.z Batch Update 3 | |||
| Hardware: | x86_64 | |||
| OS: | Linux | |||
| Whiteboard: | ||||
| Fixed In Version: | glusterfs-6.0-38 | Doc Type: | No Doc Update | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1773856 (view as bug list) | Environment: | ||
| Last Closed: | 2020-12-17 04:50:17 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | 1773856, 1808964, 1808966 | |||
| Bug Blocks: | ||||
|
Description
Bala Konda Reddy M
2019-10-18 05:40:29 UTC
After these steps I don't see any brick going offline.

Steps:
1. Have a three-node cluster; create 25 replicate (1x3) volumes and start them.
2. Mount any 5 of the volumes.
3. Power off one of the VMs from the hypervisor.
4. Perform a volume set operation on the 5 mounted volumes (or other volumes), e.g. `performance.readdir-ahead on`.
5. The volume option set will be successful.
6. Power on the node that was powered off.

--------------------

Additional info:

```
[root ~]# gluster v get all all
Option                              Value
------                              -----
cluster.server-quorum-ratio         51
cluster.enable-shared-storage       disable
cluster.op-version                  70000
cluster.max-op-version              70000
cluster.brick-multiplex             on
cluster.max-bricks-per-process      250
glusterd.vol_count_per_thread       100
cluster.daemon-log-level            INFO
```

-----------

```
glusterfs-6.0-45.el8rhgs.x86_64
glusterfs-fuse-6.0-45.el8rhgs.x86_64
glusterfs-api-6.0-45.el8rhgs.x86_64
glusterfs-selinux-1.0-1.el8rhgs.noarch
glusterfs-client-xlators-6.0-45.el8rhgs.x86_64
glusterfs-server-6.0-45.el8rhgs.x86_64
glusterfs-cli-6.0-45.el8rhgs.x86_64
glusterfs-libs-6.0-45.el8rhgs.x86_64
```

Hence marking this bug as verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603
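The reproduction steps above can be sketched as gluster CLI commands. This is a minimal, hedged sketch: the node names (`node1`..`node3`), brick paths under `/bricks`, and volume names (`vol1`..`vol25`) are illustrative assumptions, not taken from the report, and the power-off step happens out of band at the hypervisor.

```shell
#!/bin/sh
# Sketch of the reproduction steps (names are illustrative assumptions).
# Assumes a three-node trusted storage pool: node1, node2, node3.

# The report shows brick multiplexing enabled cluster-wide.
gluster volume set all cluster.brick-multiplex on

# Step 1: create and start 25 replicate (1x3) volumes.
for i in $(seq 1 25); do
    gluster volume create "vol$i" replica 3 \
        node1:/bricks/vol$i node2:/bricks/vol$i node3:/bricks/vol$i
    gluster volume start "vol$i"
done

# Step 2: mount 5 of the volumes on a client.
for i in $(seq 1 5); do
    mkdir -p "/mnt/vol$i"
    mount -t glusterfs node1:/vol$i "/mnt/vol$i"
done

# Step 3: power off one node from the hypervisor (out of band, not a CLI step).

# Steps 4-5: set a volume option on the mounted volumes while the node is
# down; per the report this succeeds.
for i in $(seq 1 5); do
    gluster volume set "vol$i" performance.readdir-ahead on
done

# Step 6: power the node back on, then check whether any brick process
# stayed offline (the original symptom, with brick-mux enabled).
gluster volume status
```

On the fixed build (glusterfs-6.0-38 and later), `gluster volume status` after step 6 should show all brick processes online; the bug was that bricks on the rebooted node stayed offline when options had been set while it was down.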