| Summary: | 'service clvmd stop' can take multiple minutes when first attempted | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Peter Rajnoha <prajnoha> |
| Status: | CLOSED WORKSFORME | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 6.2 | CC: | agk, coughlan, ddumas, dwysocha, heinzm, jbrassow, mcsontos, prajnoha, prockai, slevine, thornber, zkabelac |
| Target Milestone: | rc | Keywords: | Regression |
| Target Release: | --- | Flags: | cmarthal: needinfo+ |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-12-11 08:08:36 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | | |
| Bug Blocks: | 756082, 840699 | | |
| Attachments: | vgdisplay log from taft-03 (attachment 567779) | | |
**Description** (Corey Marthaler, 2011-10-07 22:25:38 UTC)
[root@grant-01 ~]# time service clvmd stop
+ DAEMON=clvmd
+ exec_prefix=
+ sbindir=/sbin
+ lvm_vgchange=/sbin/vgchange
+ lvm_vgdisplay=/sbin/vgdisplay
+ lvm_vgscan=/sbin/vgscan
+ lvm_lvdisplay=/sbin/lvdisplay
+ CLVMDOPTS=-T30
+ '[' -f /etc/sysconfig/cluster ']'
+ '[' -f /etc/sysconfig/clvmd ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ CLVMD_STOP_TIMEOUT=10
+ LOCK_FILE=/var/lock/subsys/clvmd
+ '[' 0 '!=' 0 ']'
+ case "$1" in
+ stop
+ rh_status_q
+ rh_status
+ '[' -z '' ']'
++ clustered_vgs
++ /sbin/vgdisplay
++ awk 'BEGIN {RS="VG Name"} {if (/Clustered/) print $1;}'
# HERE'S WHERE IT HANGS FOR THE MINUTE
+ LVM_VGS=
+ '[' -n '' ']'
++ pidofproc clvmd
++ local RC pid pid_file=
++ '[' 1 = 0 ']'
++ '[' clvmd = -p ']'
++ fail_code=3
++ __pids_var_run clvmd ''
++ local base=clvmd
++ local pid_file=/var/run/clvmd.pid
++ pid=
++ '[' -f /var/run/clvmd.pid ']'
++ local line p
++ '[' '!' -r /var/run/clvmd.pid ']'
++ :
++ read line
++ '[' -z 2200 ']'
++ for p in '$line'
++ '[' -z '' -a -d /proc/2200 ']'
++ pid=' 2200'
++ :
++ read line
++ '[' -z '' ']'
++ break
++ '[' -n ' 2200' ']'
++ return 0
++ RC=0
++ '[' -n ' 2200' ']'
++ echo 2200
++ return 0
+ action 'Signaling clvmd to exit' kill -TERM 2200
+ local STRING rc
+ STRING='Signaling clvmd to exit'
+ echo -n 'Signaling clvmd to exit '
Signaling clvmd to exit + shift
+ kill -TERM 2200
+ success 'Signaling clvmd to exit'
+ '[' color '!=' verbose -a -z '' ']'
+ echo_success
+ '[' color = color ']'
+ echo -en '\033[60G'
+ echo -n '['
[+ '[' color = color ']'
+ echo -en '\033[0;32m'
+ echo -n ' OK '
OK + '[' color = color ']'
+ echo -en '\033[0;39m'
+ echo -n ']'
]+ echo -ne '\r'
+ return 0
+ return 0
+ rc=0
+ echo
+ return 0
+ usleep 500000
+ rh_status_q
+ rh_status
+ rh_status_q
+ rh_status
+ echo -n 'clvmd terminated'
clvmd terminated+ success
+ '[' color '!=' verbose -a -z '' ']'
+ echo_success
+ '[' color = color ']'
+ echo -en '\033[60G'
+ echo -n '['
[+ '[' color = color ']'
+ echo -en '\033[0;32m'
+ echo -n ' OK '
OK + '[' color = color ']'
+ echo -en '\033[0;39m'
+ echo -n ']'
]+ echo -ne '\r'
+ return 0
+ return 0
+ echo
+ rm -f /var/lock/subsys/clvmd
+ return 0
+ rtrn=0
+ exit 0
real 1m3.164s
user 0m0.048s
sys 0m0.165s
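For context, the `clustered_vgs` step where the script stalls is just `/sbin/vgdisplay` piped through `awk`, splitting the output into one record per volume group and printing the names of the clustered ones. A minimal standalone sketch of that filter (with canned sample output and illustrative VG names standing in for a live `vgdisplay`, which needs a running LVM/cluster setup) behaves like this:

```shell
# Sample vgdisplay-style output standing in for a live /sbin/vgdisplay run
# (VG names here are illustrative, not from the reported cluster).
sample='  --- Volume group ---
  VG Name               local_vg
  Format                lvm2
  --- Volume group ---
  VG Name               cluster_vg
  Clustered             yes
  Format                lvm2'

# RS="VG Name" makes awk treat each volume-group section as one record,
# so $1 of a record containing "Clustered" is that VG's name.
clustered=$(printf '%s\n' "$sample" |
    awk 'BEGIN {RS="VG Name"} {if (/Clustered/) print $1;}')
echo "$clustered"
```

Note that the hang reported here occurs while `/sbin/vgdisplay` itself is producing that output (it has to take locks), not in the `awk` filter.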
(In reply to comment #1)
> ++ /sbin/vgdisplay
> ++ awk 'BEGIN {RS="VG Name"} {if (/Clustered/) print $1;}'
>
> # HERE'S WHERE IT HANGS FOR THE MINUTE

Some locking issue, I guess(?) So for starters, can you add a -vvvv log of the vgdisplay?

Corey, can you give it a try with vgdisplay -vvvv so we have a log?

Just a note that I have been able to reproduce this issue with the latest rpms. I'm still attempting to gather more verbose information on this, however.

2.6.32-236.el6.x86_64
lvm2-2.02.94-0.61.el6                        BUILT: Thu Mar 1 07:03:29 CST 2012
lvm2-libs-2.02.94-0.61.el6                   BUILT: Thu Mar 1 07:03:29 CST 2012
lvm2-cluster-2.02.94-0.61.el6                BUILT: Thu Mar 1 07:03:29 CST 2012
udev-147-2.40.el6                            BUILT: Fri Sep 23 07:51:13 CDT 2011
device-mapper-1.02.73-0.61.el6               BUILT: Thu Mar 1 07:03:29 CST 2012
device-mapper-libs-1.02.73-0.61.el6          BUILT: Thu Mar 1 07:03:29 CST 2012
device-mapper-event-1.02.73-0.61.el6         BUILT: Thu Mar 1 07:03:29 CST 2012
device-mapper-event-libs-1.02.73-0.61.el6    BUILT: Thu Mar 1 07:03:29 CST 2012
cmirror-2.02.94-0.61.el6                     BUILT: Thu Mar 1 07:03:29 CST 2012

Created attachment 567779 [details]
vgdisplay log from taft-03
The log from a node taking a really long time looks just like one that takes less than a second.
(In reply to comment #8)
> The log from a node taking a really long time looks just like one that takes
> less than a second.

And on which line in the log does it wait for so long? (there should really be timestamps...)

So maybe strace or lvm debug logging to syslog would be of more help then...

(In reply to comment #10)
> So maybe strace or lvm debug logging to syslog would be of more help then...

(strace with timestamps!)

Is this issue still reproducible with the latest 6.4 build, lvm2-2.02.98-3.el6? If yes, would it be possible to grab some debug output, possibly with timestamps? (comment #10 and comment #11, the strace with timestamps for the vgdisplay from comment #1).

Closing; please reopen if you hit this problem again.
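The usual way to get the timestamped trace requested here is `strace -tt -T -f /sbin/vgdisplay -vvvv`, which records wall-clock time and per-syscall duration. As a lighter-weight sketch, a small wrapper can also timestamp each line of a command's output so the stalling line becomes visible; the `run_with_timestamps` helper and the `date` format below are illustrative, not part of the clvmd init script:

```shell
# Prefix every output line of a command with the current wall-clock time,
# so a long stall between two lines is easy to spot. In the real case the
# wrapped command would be: /sbin/vgdisplay -vvvv
run_with_timestamps() {
    "$@" 2>&1 | while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%H:%M:%S')" "$line"
    done
}

# Demo with echo standing in for vgdisplay:
stamped=$(run_with_timestamps echo 'Clustered             yes')
echo "$stamped"
```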