Bug 1155584

Summary: quota reports connection refused error for NFS mount points without RPC quota service
Product: Red Hat Enterprise Linux 7
Reporter: M.T <tmaria>
Component: quota
Assignee: Petr Pisar <ppisar>
Status: CLOSED ERRATA
QA Contact: Jan Ščotka <jscotka>
Severity: low
Docs Contact: Milan Navratil <mnavrati>
Priority: unspecified
Version: 7.0
CC: jorton, jscotka, mnavrati, ovasik
Target Milestone: rc
Keywords: FutureFeature, Patch
Target Release: ---
Hardware: x86_64
OS: Unspecified
URL: https://sourceforge.net/p/linuxquota/patches/41/
Whiteboard:
Fixed In Version: quota-4.01-12.el7
Doc Type: Release Note
Doc Text:
*quota* now supports suppressing warnings about NFS mount points with an unavailable *quota* RPC service. Previously, if a user listed disk quotas with the *quota* tool and the local system had mounted a network file system from an NFS server that did not provide the *quota* RPC service, the *quota* tool returned the "error while getting quota from server" error message. Now, the *quota* tools can distinguish between an unreachable NFS server and a reachable NFS server without the *quota* RPC service, and no error is reported in the latter case.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-11-04 00:18:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1191019
Attachments:
- Proposed fix (flags: none)
- Upstream's fix (flags: none)

Description M.T 2014-10-22 12:39:05 UTC
Description of problem:

We have set up a new Red Hat 7 server, which acts as an NFS client and mounts several NFS directories from Red Hat 6 NFS servers.
User quotas have been enabled, but not on all NFS servers.
Executing quota -u <username> from the NFS client, we get the user's quota output, but for the filesystems where quota is off we get the error
"quota: error while getting quota from server1.xx.yy.zz:/sys-data/newfs for user1 (id 1099): Connection refused"

Version-Release number of selected component (if applicable):


How reproducible:
Set up NFS servers and export several filesystems, with quota enabled on a few but not all of them.
Mount the filesystems on a Red Hat 7 NFS client.

Steps to Reproduce:
1. Execute quota -u <username>

Actual results:

quota: error while getting quota from server1.xx.yy.zz:/sys-data/newfs for user1 (id 1099): Connection refused
quota: error while getting quota from server2.xx.yy.zz:/sys-data/WebData for user1 (id 1099): Connection refused
Disk quotas for user user1 (uid 1099):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
server3.xx.yy.zz:/home/users
                      0       0       0               0       0       0
server3.xx.yy.zz:/mail
                3291744  10240000 15360000           26266       0       0



Expected results:
Filesystems without quotas enabled should either not be displayed at all, or be displayed with 0 in the quota limit columns.

Additional info:

Comment 2 Petr Pisar 2014-10-22 14:58:27 UTC
That's because quotas are enforced on the server. There is no "noquota" mount option for nfs mount types, so the client has no way to know that the server does not support quotas. It has to query the server.

The "Connection refused" means there was an error when querying the server. The quota tools need to report errors.

So the question is if the client can distinguish an error from non-existing quota RPC service.

Comment 3 M.T 2014-10-23 10:23:20 UTC
What if the NFS client performed checks on the mounted filesystem to find out whether the aquota.user and aquota.group files exist? If they are there, it means that quotas have been enforced; otherwise, not.

Comment 4 Petr Pisar 2014-10-23 10:58:01 UTC
This is not a universal check. There are file systems which implement quotas without quota files.

Comment 5 Petr Pisar 2014-10-23 11:02:50 UTC
I see two possible solutions:

(1) Add a noquota mount option to nfs type mounts.

(2) Recognize the special state where the server's RPC portmapper is reachable, but the rquota service is not registered there. I don't know if there is such a capability in the rpc(3) API. But that would still require a round trip to the server and back.

A simple workaround is to run the rpc.rquotad daemon on all NFS servers.
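For illustration, option (2) can be sketched at the protocol level: ask the server's portmapper (rpcbind, RPC program 100000, port 111) for the port of the rquota program, and treat "rpcbind answered, but the program is not registered" differently from a genuine connection failure. This is a minimal Python sketch of that idea, not the upstream patch itself; the program numbers (100000 for rpcbind, 100011 for rquota) and the PMAPPROC_GETPORT message layout follow RFC 1833/RFC 5531.

```python
"""Sketch: distinguish "rquota not registered" from a real connection error
by querying the server's portmapper for the rquota program's port."""
import socket
import struct

PMAP_PROG, PMAP_VERS, PMAP_GETPORT = 100000, 2, 3   # rpcbind/portmapper
RQUOTAPROG, RQUOTAVERS = 100011, 1                  # rquota program number
IPPROTO_UDP = 17

def build_getport_call(xid, prog=RQUOTAPROG, vers=RQUOTAVERS, proto=IPPROTO_UDP):
    """Build an ONC RPC v2 CALL message for PMAPPROC_GETPORT."""
    return struct.pack(
        ">IIIIII" "II" "II" "IIII",
        xid, 0,                 # xid, msg_type = CALL
        2,                      # RPC protocol version 2
        PMAP_PROG, PMAP_VERS, PMAP_GETPORT,
        0, 0,                   # credentials: AUTH_NULL, length 0
        0, 0,                   # verifier:    AUTH_NULL, length 0
        prog, vers, proto, 0)   # GETPORT args (port field is ignored)

def rquota_port(server, timeout=2.0):
    """Return the rquota port, 0 if rpcbind reports "not registered",
    or raise OSError (e.g. ConnectionRefusedError) on a real failure."""
    xid = 0x2A2A2A2A
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect((server, 111))            # rpcbind listens on port 111
        s.send(build_getport_call(xid))
        reply = s.recv(1024)
    # Accepted reply: xid, REPLY(1), MSG_ACCEPTED(0), verifier, SUCCESS(0), port
    rxid, mtype, rstat, _vf, _vl, astat, port = struct.unpack(">7I", reply[:28])
    if rxid != xid or mtype != 1 or rstat != 0 or astat != 0:
        raise OSError("unexpected rpcbind reply")
    return port   # 0 means rquota is not registered -> suppress the warning
```

A returned port of 0 from a responsive rpcbind corresponds to RPC_PROGNOTREGISTERED in the C rpc(3) API, which is roughly the state the fix needs to recognize; a ConnectionRefusedError or timeout remains a reportable error.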

Comment 6 Petr Pisar 2014-10-23 12:44:36 UTC
I proposed a patch to the quota tools' upstream. It works but I'm not sure whether it's a perfect solution.

Comment 7 Petr Pisar 2014-10-23 12:45:07 UTC
Created attachment 949864 [details]
Proposed fix

Comment 8 Petr Pisar 2014-11-26 09:42:26 UTC
Created attachment 961550 [details]
Upstream's fix

Upstream accepted the patch with some small adjustments.

Comment 11 Petr Pisar 2016-03-04 08:29:27 UTC
How to test:

(1) Mount an NFS file system (the file system can have quotas disabled).
(2) Make sure the quota RPC service (rpc-rquotad.service on RHEL-7) is not
    running on the server.
(3) Run the "quota" command to list the current user's quotas.
Before:
   A warning message is displayed on standard error output and the exit code
   is 0:
$ quota
quota: error while getting quota from 127.0.0.1:/mnt/1 for test (id 1000): Connection refused
After:
  No warning is printed.

To prevent regressions, please make sure that if the quota RPC service is running and the exported file system has quotas enabled, the quota(1) tool will report quotas for the user.

You can also check that if the quota RPC service is running and the exported file system has quotas enabled, but access to the RPC service is disabled (e.g. by a rule in /etc/hosts.deny), then the quota(1) tool will still report this error state with the warning message quoted above.
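For the hosts.deny check, a rule along these lines should work, assuming rpc.rquotad was built with tcp_wrappers support and registers under the daemon name "rquotad" (the daemon name is an assumption; check your build if the rule has no effect):

```
# /etc/hosts.deny -- deny all clients access to the rquota RPC service
rquotad : ALL
```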

Comment 15 errata-xmlrpc 2016-11-04 00:18:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2195.html