Description of problem:
While testing a private build for BZ https://bugzilla.redhat.com/show_bug.cgi?id=1562766, hit an issue where extraction of files present on an NFS share was failing, with the below messages in ganesha.log:

07/04/2018 04:43:10 : epoch 10430000 : dhcp37-121.lab.eng.blr.redhat.com : ganesha.nfsd-1789[work-65] glusterfs_lock_op2 :FSAL :CRIT :The requested lock length is out of range: lock_args.l_len(-1), request_lock_length(18446744073709551615)
07/04/2018 04:43:10 : epoch 10430000 : dhcp37-121.lab.eng.blr.redhat.com : ganesha.nfsd-1789[work-65] state_lock :STATE :MAJ :Unable to lock FSAL, error=STATE_BAD_RANGE

As per comment #9 on BZ https://bugzilla.redhat.com/show_bug.cgi?id=1562766#c9, opening this BZ to track this issue.

Version-Release number of selected component (if applicable):
Private build: nfs-ganesha-gluster-2.5.5-4.el7rhgs.x86_64.rpm

How reproducible:
2/2

Steps to Reproduce:
1. Create a 4-node ganesha cluster.
2. Create a 4x3 Distributed-Replicate volume and export the volume via ganesha.
3. Mount the volume on a Windows client.
4. Mount the same volume on a Linux client and change the permissions of the NFS share with chmod 777.
5. Copy the diskfill utility zip file onto the Windows NFS mount point.
6. Try extracting the contents of the zip file.

Actual results:
Extraction of the files failed on the Windows client.

Expected results:
Files should be extracted on the NFS share without any failures.

Additional info:
Did it actually fail to unzip, or did you just see the errors?

Running on FSAL_VFS, I see the errors, but it still seemed to unzip OK.
It looks like the Windows client sends a lock length of UINT64_MAX. This results in a negative lock length inside the FSAL when converting to a struct flock, where l_len is an off_t (int64_t).

I have a patch set with three patches to address the max file size and lock length issues:

https://review.gerrithub.io/409091 Limit max file size to INT64_MAX
https://review.gerrithub.io/409092 NLM: Adjust lock length if lock end would be > max file size
https://review.gerrithub.io/409093 NFSv4: If lock offset plus length > max file size return NFS4ERR_BAD_RANGE

When Kaleb is back from PTO, he can backport these and provide a test build to verify that this resolves all issues. Considering that the Windows client has not really been tested much, we may want to run tests on it upstream and make sure everything works there before we validate against downstream.
(In reply to Frank Filz from comment #2)
> Did it actually fail to unzip, or just you saw the errors?
>
> Running on FSAL_VFS, I see the errors, but it still seemed to unzip ok.

For me it actually failed. I was unable to proceed further due to this failure.
(In reply to Manisha Saini from comment #4)
> (In reply to Frank Filz from comment #2)
> > Did it actually fail to unzip, or just you saw the errors?
> >
> > Running on FSAL_VFS, I see the errors, but it still seemed to unzip ok.
>
> For me it actually failed. I was unable to proceed further due to this failure.

Interesting, I wonder what the difference is. I just right-clicked and selected Extract All, but you might be using a different tool, or there might be a difference between Windows 10 and the version of Windows you are using.

In any case, the patches I have directly address this situation and should keep the FSAL from reporting ERR_FSAL_BAD_RANGE.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:2610