From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.10) Gecko/20050719 Red Hat/1.0.6-1.4.1 Firefox/1.0.6

Description of problem:
Dynamic allocation of minor numbers is the default for lvm2. These device numbers are not reliable for deriving an fsid when exporting such a device, since they may change when the server is rebooted. Using the default settings for both lvm2 and exportfs can mix up mounts, and running client applications may write to the wrong file system after a server reboot.

Version-Release number of selected component (if applicable):
nfs-utils-1.0.6-46

How reproducible:
Always

Steps to Reproduce:
1. See bug 166750

Actual Results:
Clients mount the wrong filesystems after a server reboot.

Expected Results:
A warning should be written to the messages file when an lvm2 volume with dynamic device numbers is exported with the default fsid.

Additional info:
If the warning is too complicated to implement, the man pages could at least be clearer about what settings are required when exporting lvm2 volumes.
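As a sketch of the configuration this report is about (the paths and fsid values below are hypothetical, not from the report), an /etc/exports entry can assign a fixed fsid so the exported filehandle no longer depends on the volume's device number:

```
# /etc/exports -- LVM-backed exports with fixed fsids (illustrative values)
/srv/vol1  *(rw,sync,fsid=1)
/srv/vol2  *(rw,sync,fsid=2)
```

Each exported filesystem must get its own unique small integer for fsid.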
I just had a disaster today due to this issue: over 300 clients got stale NFS mounts after a server using LVM2 was rebooted. New volumes had been added since the last reboot, and all the volumes' device numbers changed.

On emailing the LVM2 list I was told one should use the fsid option in /etc/exports. I just tried this and it does not work. I have kernel-2.6.9-22 and nfs-utils-1.0.6-65. I had seven exported LVM volumes on the system. I used 'stat' to get the device id for each, then added fsid=##### to /etc/exports for each volume to match that id, and rebooted the system. After the reboot I could see that two of the volumes had changed their underlying device id. These two became STALE on each client despite the fsid setting pinning the id to what it was before; the others still worked fine. This led me to believe that the fsid option was being totally ignored. However, I created a test volume, used the fsid option on it, and could make a mount of it go stale by changing the fsid and reloading nfs on the server. So the option is not simply ignored, but something is obviously wrong with its execution.

So for RHEL4 the only way to fix this is to use the -My option of lvchange to make the major and minor numbers persistent. Actually, that is partly broken too, in that any major number given is ignored, which is a non-problem on the 2.6 kernel. But setting just the minor will work as long as you have fewer than 256 volumes. Unfortunately lvchange cannot be done live.
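The device-number check described above can be sketched as follows. Here /tmp stands in for an exported LVM mount point, and the lvchange invocation is shown only as a comment (it needs root and an inactive volume) with a hypothetical volume name:

```shell
#!/bin/sh
# Print the hex device number of the filesystem containing a path; this is
# the number a default fsid is derived from. /tmp is a stand-in path here.
dev=$(stat -c '%D' /tmp)
echo "device=$dev"

# To pin the minor number so it survives reboots (not run here; the volume
# name vg0/projects and minor 42 are illustrative):
#   lvchange --persistent y --minor 42 vg0/projects
```

Comparing this device number before and after a reboot shows whether the volume's minor number moved.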
I think you may be misunderstanding what the fsid= export option is supposed to do. It's not meant to be changed "live" either. Adding fsid= to an export changes the fsid with which it's exported, which will generally mean that filehandles cached on the clients suddenly go stale. Adding that export option should be done between unmounting the filesystem on the clients and mounting it back. After that, the fsid should be persistent.

exportfs has no way to know whether a volume it's exporting has a persistent device number or not, so I don't think we can implement the warning you're requesting.

Note that in more recent kernels (post 2.6.20) exportfs uses the UUID of the filesystem if it's available, which is independent of the device number. So this should hopefully be a non-issue in RHEL6 and beyond.

I'm going to close this as NOTABUG.
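The safe order of operations described above can be sketched as follows; the hostname, paths, and fsid value are illustrative, not from this report, and the commands are meant to be run on the respective machines, not verbatim here:

```
# On each client: unmount the export first, before the fsid changes.
umount /mnt/projects

# On the server: pin the fsid in /etc/exports, then re-export, e.g.:
#   /etc/exports:  /srv/projects  *(rw,sync,fsid=7)
exportfs -ra

# On each client: mount again; filehandles are now derived from fsid=7
# and survive server reboots regardless of device-number changes.
mount server:/srv/projects /mnt/projects
```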