Bug 1854165
| Summary: | gluster does not release posix lock when multiple glusterfs clients do flock -xo on the same file in parallel | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Cal Calhoun <ccalhoun> |
| Component: | locks | Assignee: | Xavi Hernandez <jahernan> |
| Status: | CLOSED ERRATA | QA Contact: | milind <mwaykole> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhgs-3.5 | CC: | bkunal, jahernan, mwaykole, nchilaka, nravinas, pprakash, puebele, rhs-bugs, rkothiya, sheggodu, smulay |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.5.z Batch Update 3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-6.0-40 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-12-17 04:51:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Cal Calhoun
2020-07-06 15:47:17 UTC
Passing on the bug to Susant Palai who deals with the upstream bug.

Steps:

1. Create volumes of all types.
2. Mount the volume on two different client nodes.
3. Prepare the same script to do flock on the two clients:

```bash
#!/bin/bash

flock_func(){
    file=/bricks/brick0/test.log
    touch $file
    (
        # take an exclusive lock on fd 200, write, then hold the lock briefly
        flock -xo 200
        echo "client1 do something" > $file
        sleep 1
    ) 200>$file
}

i=1
while [ "1" = "1" ]
do
    flock_func
    ((i=i+1))
    echo $i
    if [[ $i == 200 ]]; then
        break
    fi
done
```

4. Wait until 300 iterations have completed.

------------------

Additional info:

```
[node.example.com]# rpm -qa | grep -i glusterfs
glusterfs-6.0-45.el8rhgs.x86_64
glusterfs-fuse-6.0-45.el8rhgs.x86_64
glusterfs-api-6.0-45.el8rhgs.x86_64
glusterfs-selinux-1.0-1.el8rhgs.noarch
glusterfs-client-xlators-6.0-45.el8rhgs.x86_64
glusterfs-server-6.0-45.el8rhgs.x86_64
glusterfs-cli-6.0-45.el8rhgs.x86_64
glusterfs-libs-6.0-45.el8rhgs.x86_64
```

As I don't see any issue while running the script up to 300 iterations, marking this bug as verified.

*** Bug 1880271 has been marked as a duplicate of this bug. ***

*** Bug 1852740 has been marked as a duplicate of this bug. ***

*** Bug 1851315 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603
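For anyone reproducing this, below is a minimal sketch of how one might check on a brick node whether posix locks are left behind after both clients finish the loop, using a brick statedump. The volume name (`testvol`) and the default statedump directory (`/var/run/gluster`) are assumptions and may differ on a given setup:

```bash
#!/bin/bash
# Sketch: after both clients have finished the flock loop, trigger a
# statedump on a server node and look for posix lock entries that are
# still marked ACTIVE. VOLNAME and DUMPDIR are assumptions.

VOLNAME=testvol            # hypothetical volume name
DUMPDIR=/var/run/gluster   # default statedump location; may be overridden
                           # by the server.statedump-path option

# Ask the brick processes of the volume to write their state to dump files.
gluster volume statedump "$VOLNAME"

# Give the brick processes a moment to write the dumps.
sleep 2

# The locks xlator section of a statedump lists posixlk entries; any entry
# still ACTIVE after the clients have exited points at an unreleased lock.
grep 'posixlk' "$DUMPDIR"/*.dump.* | grep 'ACTIVE' \
    || echo "no active posix locks found in statedumps"
```

If stale entries do show up, they would appear as ACTIVE posixlk records belonging to clients that no longer hold the file; on builds containing the fix (glusterfs-6.0-40 and later) the locks should be released once the flock loop ends.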