Bug 1233069
Summary: Direct map does not expire if map is initially empty
Product: Red Hat Enterprise Linux 7
Component: autofs
Version: 7.1
Reporter: Ian Kent <ikent>
Assignee: Ian Kent <ikent>
QA Contact: xiaoli feng <xifeng>
CC: eguan, ikent, madgupta
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Fixed In Version: autofs-5.0.7-52.el7
Doc Type: Bug Fix
Doc Text:
Cause: When autofs is started with an empty direct map, the expire run frequency isn't known, so no expire alarm is set. If the map is updated so that it is no longer empty and an autofs map re-load is done, the expire alarm still isn't added.
Consequence: In this case direct mounts do not expire.
Fix: If the expire run frequency wasn't previously known, add the expire alarm during the autofs map re-load.
Result: Direct mounts expire as expected.
Clone Of: 1233057
Last Closed: 2015-11-19 13:01:03 UTC
Type: Bug
Bug Depends On: 1233057, 1335358
Description
Ian Kent
2015-06-18 07:37:39 UTC
Created attachment 1042096 [details]
Patch - fix direct map expire not set for initial empty map
XuWang (comment #2)

Hi, Ian,

Thanks for discovering and reporting this bug. I think I understand the root cause and the purpose of your patch.

Based on more code reading, I think maybe it's better for us to put the supplementary "alarm_add" as follows:

--- ../autofs-5.0.7/daemon/state.c	2015-01-23 17:10:12.740443598 +0800
+++ daemon/state.c	2015-07-01 15:48:26.235034830 +0800
@@ -488,6 +486,7 @@
 		status = lookup_ghost(ap, ap->path);
 	} else {
 		struct mapent *me;
+		int append_alarm = !ap->exp_runfreq;

 		mnts = tree_make_mnt_tree(_PROC_MOUNTS, "/");
 		pthread_cleanup_push(tree_mnts_cleanup, mnts);
@@ -516,6 +515,13 @@
 			map->stale = 0;
 			map = map->next;
 		}
+
+		/* If the direct mount map was empty at startup no expire alarm will
+		 * have been added. So add it here if there are now map entries.
+		 */
+		if (append_alarm && ap->exp_runfreq)
+			alarm_add(ap, ap->exp_runfreq + rand() % ap->exp_runfreq);
+
 		pthread_cleanup_pop(1);
 		pthread_cleanup_pop(1);
 		pthread_cleanup_pop(1);

Because in this way, we use the final ap->exp_runfreq as the timeout, and this behaviour of alarm_add is consistent with the handle_mounts function. I am not sure, just proposing it; you decide.

Another question: what is the supposed expire behaviour if the map files (direct and indirect) become empty and reload is called?

Thanks.

XuWang (comment #3)

This problem is introduced by the following patch, so it can only be reproduced since autofs-5.0.7+:

commit f3dedc7bf56a8b226ee9bb7bade8d58954c0a7de
Author: Leonardo Chiquitto <leonardo.lists>
Date:   Tue Jan 15 09:26:11 2013 +0800

    autofs-5.0.7 - don't schedule new alarms after readmap

Ian Kent

(In reply to XuWang from comment #2)
> Hi, Ian,
>
> Thanks for discovering and reporting this bug, and I think I understand the
> root cause and the purpose of your patch.
>
> Based on more code reading, I think maybe it's better for us to put the
> supplement of "alarm_add" as following:
>
> --- ../autofs-5.0.7/daemon/state.c	2015-01-23 17:10:12.740443598 +0800
> +++ daemon/state.c	2015-07-01 15:48:26.235034830 +0800
> @@ -488,6 +486,7 @@
>  		status = lookup_ghost(ap, ap->path);
>  	} else {
>  		struct mapent *me;
> +		int append_alarm = !ap->exp_runfreq;
>
>  		mnts = tree_make_mnt_tree(_PROC_MOUNTS, "/");
>  		pthread_cleanup_push(tree_mnts_cleanup, mnts);
> @@ -516,6 +515,13 @@
>  			map->stale = 0;
>  			map = map->next;
>  		}
> +
> +		/* If the direct mount map was empty at startup no expire alarm will
> +		 * have been added. So add it here if there are now map entries.
> +		 */
> +		if (append_alarm && ap->exp_runfreq)
> +			alarm_add(ap, ap->exp_runfreq + rand() % ap->exp_runfreq);
> +
>  		pthread_cleanup_pop(1);
>  		pthread_cleanup_pop(1);
>  		pthread_cleanup_pop(1);
>
> Because in this way, we use the final ap->exp_runfreq as the timeout. And
> this behaviour of alarm_add is consistent with the handle_mounts function.
> I am not sure, just proposed it, you decide.

I like this patch better than mine, I'll use it. It has lower overhead: my change would do an additional comparison for every direct mount map entry, whereas this does a single comparison at the end.

> Another question, what's the supposed behaviour for expire if the map
> files (direct and indirect) become empty and reload is called?

The alarm will continue to trigger. That was the expected behaviour before commit f3dedc7b. An improvement could be made to remove the alarm when the map becomes empty, but that isn't a high priority since maps usually aren't empty and the alarm overhead is small.
Ian Kent

(In reply to XuWang from comment #3)
> This problem is introduced by the following patch, so it can only be
> reproduced since autofs-5.0.7+
>
> commit f3dedc7bf56a8b226ee9bb7bade8d58954c0a7de
> Author: Leonardo Chiquitto <leonardo.lists>
> Date:   Tue Jan 15 09:26:11 2013 +0800
>
>     autofs-5.0.7 - don't schedule new alarms after readmap

Yes, it looks like that at first sight, but alarm_add(ap, 0) won't add an alarm (have a look at lib/alarm.c:alarm_add()). This patch isn't in RHEL-6 autofs but the problem is still present because of it. In fact that's where I first saw it.

Ian Kent

Created attachment 1045341 [details]
Patch - fix direct map expire not set for initial empty map (updated)
Ran test case /autofs/sanity/reload for autofs-5.0.7-53.el7; looks good.

autofs previous version:

:: [ BEGIN ] :: Running 'tests/expire.sh'
posix on
Redirecting to /bin/systemctl stop autofs.service
Redirecting to /bin/systemctl restart autofs.service
:: [ BEGIN ] :: Running 'check_indirect_reload'
Redirecting to /bin/systemctl reload autofs.service
:: [ PASS ] :: Command 'check_indirect_reload' (Expected 0, got 0)
:: [ BEGIN ] :: Running 'check_direct_reload'
Redirecting to /bin/systemctl reload autofs.service
:: [ PASS ] :: Command 'check_direct_reload' (Expected 0, got 0)
:: [ BEGIN ] :: Running 'check_already_umount /indirect/plus'
:: [ PASS ] :: Command 'check_already_umount /indirect/plus' (Expected 0, got 0)
:: [ BEGIN ] :: Running 'check_already_umount /direct/plus'
:: [ FAIL ] :: Command 'check_already_umount /direct/plus' (Expected 0, got 1)
Redirecting to /bin/systemctl stop autofs.service
:: [ PASS ] :: Command 'tests/expire.sh' (Expected 0, got 0)

autofs-5.0.7-53.el7:

[04:22:07 root@ ~~]# tests/expire.sh
posix on
Redirecting to /bin/systemctl stop autofs.service
Redirecting to /bin/systemctl restart autofs.service
:: [ BEGIN ] :: Running 'check_indirect_reload'
Redirecting to /bin/systemctl reload autofs.service
:: [ PASS ] :: Command 'check_indirect_reload' (Expected 0, got 0)
:: [ BEGIN ] :: Running 'check_direct_reload'
Redirecting to /bin/systemctl reload autofs.service
:: [ PASS ] :: Command 'check_direct_reload' (Expected 0, got 0)
:: [ BEGIN ] :: Running 'check_already_umount /indirect/plus'
:: [ PASS ] :: Command 'check_already_umount /indirect/plus' (Expected 0, got 0)
:: [ BEGIN ] :: Running 'check_already_umount /direct/plus'
:: [ PASS ] :: Command 'check_already_umount /direct/plus' (Expected 0, got 0)
Redirecting to /bin/systemctl stop autofs.service
:: [ PASS ] :: Running 'tests/expire.sh' (Expected 0, got 0)
-------------------------------------------------------------------------------

Also ran regression, connectathon, stress,
bugzillas, and sanity test cases for autofs-5.0.7-53.el7, covering x86_64, s390x, ppc64, and ppc64le; no new issues, so I will change this bug's status to verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2417.html