Bug 479587 - disturbing hacks to start services on demand

| Product: | Fedora | Reporter: | Bill Nottingham <notting> |
|---|---|---|---|
| Component: | nfs-utils | Assignee: | Steve Dickson <steved> |
| Status: | CLOSED WONTFIX | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | medium | Priority: | low |
| Version: | 11 | CC: | apiemont, k.georgiou, michael.monreal, rvokal, steved |
| Hardware: | All | OS: | Linux |
| Doc Type: | Bug Fix | Last Closed: | 2010-06-28 11:05:34 UTC |
Description
Bill Nottingham
2009-01-11 19:23:46 UTC
Created attachment 328674 [details]
Changes required
Steve Dickson

A couple of things:

1) statd needs to start to do lock recovery when an NFS server goes down. It's not clear whether that could be figured out from a startup script, but in the end lock recovery has to happen ASAP during boot.

2) I guess I'm not totally against the idea of starting daemons on an as-needed basis... but unfortunately the "evil hacks" you are proposing are very distro-specific, which means upstream probably would not look too favourably on them. If you could give some context as to why these changes are needed, it might be easier to sell the idea...

3) Note that rpcsvcgssd is only started when the NFS server is started, so I think only idmapd and gssd would need to be started at mount time.

Bill Nottingham

Oh, I agree they're ugly; it's meant as a starting point. The issue is that as soon as you install nfs-utils, you have nfslock, rpcgssd, and rpcidmapd all trying to start at boot, no matter whether you're mounting any NFS filesystems, and no matter whether they're NFSv4 or not. I'm interested in trying to solve this better so that they only get started when they're actually needed. That was the idea behind starting them all when the module is loaded / when a mount is attempted.

With respect to your specific comments:

1) This is still only needed if you had mounted a filesystem from the server that went down, correct? How can we better catch when this is needed?

2) For rpcsvc*, rpcidmapd, etc. - are these services registered with rpcbind? When rpcbind gets a request, should it attempt to start the service? (Yes, this is a gross hack that makes rpcbind Yet Another Service Daemon, like upstart, dbus system activation, etc.)

Steve Dickson

> This is still only needed if you had mounted a filesystem from the server
> that went down, correct?

On the client side, yes. But on the server side, the server has to tell the client to recover its locks. This means the server's statd has to "talk" with the client's statd.

> How can we better catch when this is needed?

On the server side there is state left around that could be looked at via a boot script. On the client side, it's stateless, so there is not much to look at...

> For rpcsvc*, rpcidmapd, etc. - are these services registered with rpcbind?

No... way back when, I misnamed them... there is no communication between them and rpcbind. I wish I could go back and rename them... They need to exist to receive "upcalls" from the kernel. So I guess they could be started when the nfs module is loaded... assuming that would be early enough.

<thinking-out-loud>
Is there any precedent for starting daemons when modules are loaded? I wonder what kind of race issues would need to be addressed... It probably would make the startup scripts a bit more difficult to debug. What happens when a daemon fails to start? Should the module not be loaded, since, conceivably, that's the only way a daemon could be started? What about restarts, status and such... I would assume they would work as they do today....
</thinking-out-loud>

Again, I do think this is a path worth investigating, but I must admit that, at this point, I am a bit skeptical...
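As a concrete illustration of the server-side check Steve describes above, here is a minimal boot-script sketch. It is an editor's sketch, not the patch from attachment 328674, and it assumes the statd state directory is /var/lib/nfs/statd/sm, which is the common location on Fedora but varies across nfs-utils versions.

```sh
# Sketch: start statd (the nfslock service) at boot only when there is
# NSM state left over from before the reboot, i.e. monitored peers that
# need to be told to recover their locks.
# Assumption: state lives in /var/lib/nfs/statd/sm (older layouts used
# /var/lib/nfs/sm); adjust for the nfs-utils version in use.
if [ -n "$(ls -A /var/lib/nfs/statd/sm 2>/dev/null)" ]; then
    /sbin/service nfslock start
fi
```

As Steve notes, this only helps on the server side; a client's statd keeps no equivalent state to inspect.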
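The rpcbind question above can also be checked directly. The commands below are standard rpcinfo usage, nothing bug-specific: statd registers with rpcbind as program 100024 ("status"), while rpc.idmapd and rpc.gssd never appear there, matching Steve's explanation that they exist only to service kernel upcalls.

```sh
# List everything registered with rpcbind/portmap on this host.
rpcinfo -p localhost

# statd shows up as "status" (program 100024); idmapd and gssd do not
# register at all, so rpcbind-triggered activation could never reach them.
rpcinfo -p localhost | awk '$5 == "status"'
```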
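For the module-load idea, the mechanism Fedora had at hand was a modprobe "install" rule. A minimal sketch follows; the file name is hypothetical, and this is an illustration of the approach being debated rather than the change proposed in the attachment.

```sh
# /etc/modprobe.d/nfs-ondemand.conf  (hypothetical file name)
#
# An "install" rule replaces the normal module load with a command of
# our own, so the upcall daemons are started the moment the nfs module
# is loaded rather than unconditionally at boot.
# --ignore-install keeps the rule from recursing into itself.
install nfs /sbin/modprobe --ignore-install nfs \
    && /sbin/service rpcidmapd start \
    && /sbin/service rpcgssd start
```

This also makes Steve's concerns concrete: if a daemon fails to start here, the module is already loaded, and debugging moves out of the init scripts and into modprobe configuration.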
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle. Changing version to '11'. More information and the reason for this action is here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

This message is a reminder that Fedora 11 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 11. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '11'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 11's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 11 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Fedora 11 changed to end-of-life (EOL) status on 2010-06-25. Fedora 11 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. Thank you for reporting this bug and we are sorry it could not be fixed.