Description of problem:
If a service is specified with a given NFS export (/export) and an authorized client mounts a subdirectory (/export/dir1), then the /var/lib/nfs/rmtab entry:

  client:/export/dir1:0x00000001

does not get synchronized across the cluster, causing ESTALE in the event of fail-over or manual relocation.

Version-Release: 1.0.16-7

How reproducible: Always

Steps to Reproduce:
1. Create a clumanager service with an NFS export "/export", available to world (*) with any mount options.
2. Create a directory "/export/dir1".
3. Mount the NFS export "/export/dir1" on any authorized client.
4. Power-cycle the active cluster server.

Actual results:
The client which has /export/dir1 mounted receives ESTALE, even if it also has /export mounted (for which it does NOT receive ESTALE).

Expected results:
Seamless transition.
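For illustration, the rmtab entry quoted above follows the pattern host:path:hex-refcount. The small parser below is a hypothetical sketch (parse_rmtab_line is not part of clumanager or nfs-utils) showing how such an entry breaks down:

```python
# Hypothetical helper illustrating the /var/lib/nfs/rmtab line format quoted
# above: "client:/export/dir1:0x00000001" (host, export path, hex refcount).
# This is an illustrative sketch, not code from clumanager or nfs-utils.

def parse_rmtab_line(line):
    """Split an rmtab entry into (host, path, refcount)."""
    host, rest = line.strip().split(":", 1)   # hostname never contains ':'
    path, count = rest.rsplit(":", 1)         # refcount is the last field
    return host, path, int(count, 16)

entry = "client:/export/dir1:0x00000001"
host, path, refcount = parse_rmtab_line(entry)
print(host, path, refcount)  # client /export/dir1 1
```

It is this per-subdirectory entry (as opposed to the entry for /export itself) that fails to reach the other cluster members.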
Have a fix for this and for the memory consumption issue. Ironing out bugs.
Testing mostly done. The build has these features:
- Performance is an order of magnitude faster (was testing merges/reads/syncs on your 13K-line file plus my 4K-line file).
- Memory consumption (while using just the 13K-line file) is about 10% of what it was.
- Subdirectories sync properly.
Fix in pool. Test away!
Wrong status field; hours updated.
We have been running with clumanager-1.0.19-1suny.i386.rpm in place for about 60 hours now, 4 NFS servers, 2000 clients, active/active with multiple exports per service, and this seems to work fine at this point. Service start/stop/relocate is fast, and no ESTALE on server failure or manual service relocate. Thanks!
An errata has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2002-314.html