Bug 254225 - clustered_log service stuck in update after mirror leg failure + node failure
Status: CLOSED WONTFIX
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: cmirror-kernel
Version: 4
Platform: All Linux
Priority: high  Severity: high
Assigned To: Jonathan Earl Brassow
Cluster QE
Reported: 2007-08-24 16:35 EDT by Corey Marthaler
Modified: 2010-10-28 11:05 EDT (History)
1 user

Doc Type: Bug Fix
Last Closed: 2010-10-28 11:05:53 EDT


Attachments
messages and stack traces from link-02 (126.24 KB, text/plain)
  2007-08-24 16:40 EDT, Corey Marthaler
messages and stack traces from link-07 (107.85 KB, text/plain)
  2007-08-24 16:40 EDT, Corey Marthaler
debug info from link-02 (5.48 KB, text/plain)
  2007-11-08 14:30 EST, Corey Marthaler
debug info from link-07 (5.13 KB, text/plain)
  2007-11-08 14:32 EST, Corey Marthaler
debug info from link-08 (1.09 KB, text/plain)
  2007-11-08 14:33 EST, Corey Marthaler
debug info from grant-01 (1.09 KB, text/plain)
  2007-11-08 14:33 EST, Corey Marthaler
debug info from grant-02 (5.87 KB, text/plain)
  2007-11-08 14:34 EST, Corey Marthaler
debug info from grant-03 (5.61 KB, text/plain)
  2007-11-08 14:34 EST, Corey Marthaler
backtraces from link-02 (79.15 KB, text/plain)
  2007-11-08 15:05 EST, Corey Marthaler
backtraces from link-07 (82.90 KB, text/plain)
  2007-11-08 15:06 EST, Corey Marthaler
backtraces from grant-02 (104.37 KB, text/plain)
  2007-11-08 15:07 EST, Corey Marthaler
backtraces from grant-03 (105.78 KB, text/plain)
  2007-11-08 15:07 EST, Corey Marthaler
log and kern dump from taft-02 (151.23 KB, text/plain)
  2008-07-01 14:47 EDT, Corey Marthaler
log and kern dump from taft-03 (254.85 KB, text/plain)
  2008-07-01 14:48 EDT, Corey Marthaler
log and kern dump from taft-04 (151.20 KB, text/plain)
  2008-07-01 14:49 EDT, Corey Marthaler

Description Corey Marthaler 2007-08-24 16:35:20 EDT
Description of problem:
I was running helter_skelter on the x86_64 link-0[278] cluster. I'll try to
reproduce this to add more debugging info, but here's what I've found so far:

Scenario: Kill primary leg of synced 2 leg mirror

****** Mirror hash info for this scenario ******
* name:      fail_primary_synced_2_legs
* sync:      1
* disklog:   1
* failpv:    /dev/sdf1
* legs:      2
* pvs:       /dev/sdf1 /dev/sdb1 /dev/sde1
************************************************

Creating mirror on link-02...
qarsh root@link-02 lvcreate -m 1 -n fail_primary_synced_2_legs -L 800M
helter_skelter /dev/sdf1:0-500 /dev/sdb1:0-500 /dev/sde1:0-50
Creating gfs on top of mirror on link-02...
Creating mnt point /mnt/fail_primary_synced_2_legs on link-02...
Mounting gfs on link-02...
Creating mnt point /mnt/fail_primary_synced_2_legs on link-07...
Mounting gfs on link-07...
Creating mnt point /mnt/fail_primary_synced_2_legs on link-08...
Mounting gfs on link-08...

Waiting for mirror to sync
Verifying that the mirror is fully syncd, currently at
 ...18.00% ...28.00% ...38.50% ...48.50% ...58.50% ...68.50% ...78.50% ...88.50%
...99.00% ...100.00%

Disabling device sdf on link-02
Disabling device sdf on link-07
Disabling device sdf on link-08

Attempting I/O to cause mirror down conversion on link-02
10+0 records in
10+0 records out
[HERE IS WHERE I KILLED link-08]


[root@link-02 ~]# cman_tool services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           3   2 run       U-1,10,2
[1 3]

DLM Lock Space:  "clvmd"                           159  70 run       -
[3 1]

DLM Lock Space:  "clustered_log"                   622 271 update   
SU-10,201,012,18,1
[3]

DLM Lock Space:  "gfs"                             625 272 run       -
[3 1]

GFS Mount Group: "gfs"                             628 273 recover 2 -
[3 1]

User:            "usrm::manager"                     4   3 recover 0 -
[1 3]
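The symptom above is the "clustered_log" lock space wedged in the "update" state while everything else is in "run" or recovering. A minimal sketch of how such stuck groups could be spotted mechanically; it assumes the RHEL4-era `cman_tool services` column layout shown above, and the function name is illustrative:

```python
import re

def stuck_services(cman_tool_output):
    """Return (name, state) pairs for service groups whose state is not 'run'.

    Assumes lines of the form shown above:
    <type>: "<name>"  <GID> <LID> <state> ...
    Member-list lines like "[1 3]" carry no quotes and are skipped.
    """
    stuck = []
    for line in cman_tool_output.splitlines():
        m = re.search(r'"([^"]+)"\s+(\d+)\s+(\d+)\s+(\S+)', line)
        if m and m.group(4) != "run":
            stuck.append((m.group(1), m.group(4)))
    return stuck

sample = '''
DLM Lock Space:  "clvmd"                           159  70 run       -
DLM Lock Space:  "clustered_log"                   622 271 update
GFS Mount Group: "gfs"                             628 273 recover 2 -
'''
print(stuck_services(sample))  # [('clustered_log', 'update'), ('gfs', 'recover')]
```

Against the full transcript above this would flag "clustered_log" (update), the "gfs" mount group (recover 2), and "usrm::manager" (recover 0).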


# there is an error target on link-02
[root@link-02 ~]# dmsetup status
helter_skelter-fail_primary_synced_2_legs_mimage_1: 0 1638400 linear
helter_skelter-fail_primary_synced_2_legs_mimage_0: 0 1638400 error
helter_skelter-fail_primary_synced_2_legs_mlog: 0 8192 linear
helter_skelter-fail_primary_synced_2_legs:

# but not on link-07
[root@link-07 ~]# dmsetup status
helter_skelter-fail_primary_synced_2_legs_mimage_1: 0 1638400 linear
helter_skelter-fail_primary_synced_2_legs_mimage_0: 0 1638400 linear
helter_skelter-fail_primary_synced_2_legs_mlog: 0 8192 linear
helter_skelter-fail_primary_synced_2_legs:

[root@link-02 ~]# dmsetup ls --tree
helter_skelter-fail_primary_synced_2_legs_mimage_1 (253:4)
 └─ (8:17)
helter_skelter-fail_primary_synced_2_legs_mimage_0 (253:3)
helter_skelter-fail_primary_synced_2_legs_mlog (253:2)
 └─ (8:65)
helter_skelter-fail_primary_synced_2_legs (253:5)

[root@link-07 ~]# dmsetup ls --tree
helter_skelter-fail_primary_synced_2_legs_mimage_1 (253:4)
 └─ (8:17)
helter_skelter-fail_primary_synced_2_legs_mimage_0 (253:3)
 └─ (8:81)
helter_skelter-fail_primary_synced_2_legs_mlog (253:2)
 └─ (8:65)
helter_skelter-fail_primary_synced_2_legs (253:5)

[root@link-02 ~]# dmsetup info
Name:              helter_skelter-fail_primary_synced_2_legs_mimage_1
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 4
Number of targets: 1
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSULZHqzS3R50MUltUg4QcwB6gzK5f5RwhZ

Name:              helter_skelter-fail_primary_synced_2_legs_mimage_0
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 3
Number of targets: 1
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSUJSp7Cm6wWPONTmUiA2TIeDloy40ZDKnk

Name:              helter_skelter-fail_primary_synced_2_legs_mlog
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 2
Number of targets: 1
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSUNgTMmwYGPWxoaJhCyU0nL0FoS8FyNBhR

Name:              helter_skelter-fail_primary_synced_2_legs
State:             SUSPENDED
Tables present:    None
Open count:        1
Event number:      9
Major, minor:      253, 5
Number of targets: 0
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSUEWu7xI6qPX09cyiZ8ej5QcTXOLgXBWx9


[root@link-07 ~]# dmsetup info
Name:              helter_skelter-fail_primary_synced_2_legs_mimage_1
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 4
Number of targets: 1
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSULZHqzS3R50MUltUg4QcwB6gzK5f5RwhZ

Name:              helter_skelter-fail_primary_synced_2_legs_mimage_0
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 3
Number of targets: 1
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSUJSp7Cm6wWPONTmUiA2TIeDloy40ZDKnk

Name:              helter_skelter-fail_primary_synced_2_legs_mlog
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 2
Number of targets: 1
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSUNgTMmwYGPWxoaJhCyU0nL0FoS8FyNBhR

Name:              helter_skelter-fail_primary_synced_2_legs
State:             SUSPENDED
Tables present:    None
Open count:        2
Event number:      0
Major, minor:      253, 5
Number of targets: 0
UUID: LVM-R2WDAdphRpK0ce1ceoFfe9E0jazorJSUEWu7xI6qPX09cyiZ8ej5QcTXOLgXBWx9
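The two `dmsetup info` dumps are identical except for the suspended mirror device itself: link-02 shows open count 1 and event number 9, link-07 open count 2 and event number 0. A minimal sketch of parsing these blocks into dicts so the per-node differences can be diffed programmatically; it assumes the `Field: value` block layout shown above, and the function name is illustrative:

```python
def parse_dmsetup_info(text):
    """Parse `dmsetup info` output into {device_name: {field: value}}.

    Blocks start at a 'Name:' line; each following 'Field: value' line
    is attached to the current device.
    """
    devices, current = {}, None
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Name":
            current = devices[value] = {}
        elif current is not None:
            current[key] = value
    return devices

sample = """\
Name:              helter_skelter-fail_primary_synced_2_legs
State:             SUSPENDED
Tables present:    None
Open count:        2
Event number:      0
Major, minor:      253, 5
Number of targets: 0
"""
info = parse_dmsetup_info(sample)
print(info["helter_skelter-fail_primary_synced_2_legs"]["Open count"])  # prints: 2
```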



Version-Release number of selected component (if applicable):
2.6.9-56.ELsmp
cmirror-1.0.1-1
cmirror-kernel-2.6.9-33.2
lvm2-cluster-2.02.27-1.el4
Comment 1 Corey Marthaler 2007-08-24 16:40:07 EDT
Created attachment 172454 [details]
messages and stack traces from link-02
Comment 2 Corey Marthaler 2007-08-24 16:40:58 EDT
Created attachment 172456 [details]
messages and stack traces from link-07
Comment 3 Corey Marthaler 2007-08-24 16:44:39 EDT
[root@link-02 ~]# cat /proc/cluster/sm_debug
 2
01000271 recover state 2
0100009f recover state 3
01000271 cb recover state 2
01000271 recover state 3
0100009f recover state 4
01000271 recover state 4
0100009f recover state 5
01000271 recover state 5
02000274 recover state 0
02000274 recover state 1


[root@link-07 ~]# cat /proc/cluster/sm_debug
0100009f cb recover state 2
01000271 recover state 4
0100009f recover state 3
01000271 recover state 4
0100009f recover state 5
01000271 recover state 5
02000274 recover state 0
02000274 recover state 1
02000274 cb recover state 2
02000274 recover state 3

[root@link-02 ~]# cat /proc/cluster/dlm_locks
DLM lockspace 'clvmd'

Resource 000001002ff4b088 (parent 0000000000000000). Name (len=16)
"V_helter_skelter"
Master Copy
Granted Queue
00020255 PW 20496
Conversion Queue
Waiting Queue

Resource 0000010037a1ad58 (parent 0000000000000000). Name (len=64)
"R2WDAdphRpK0ce1ceoFfe9E0jazorJSUEWu7xI6qPX09cyiZ8ej5QcTXOLgXBWx9"
Local Copy, Master is node 1
Granted Queue
0005008f PW 25941 Master:     000201fe
Conversion Queue
Waiting Queue

Resource 000001002f979508 (parent 0000000000000000). Name (len=12) "V_VolGroup00"
Master Copy
Granted Queue
000203e9 PR 24848
Conversion Queue
Waiting Queue

Resource 0000010037a1ae88 (parent 0000000000000000). Name (len=64)
"R2WDAdphRpK0ce1ceoFfe9E0jazorJSUJSp7Cm6wWPONTmUiA2TIeDloy40ZDKnk"
Master Copy
Granted Queue
0002031a CR 20496
Conversion Queue
Waiting Queue


[root@link-07 ~]# cat /proc/cluster/dlm_locks
DLM lockspace 'clvmd'

Resource 000001003724de88 (parent 0000000000000000). Name (len=64)
"R2WDAdphRpK0ce1ceoFfe9E0jazorJSUEWu7xI6qPX09cyiZ8ej5QcTXOLgXBWx9"
Master Copy
Granted Queue
000201fe PW 25941 Remote:   3 0005008f
00030230 CR 27850
Conversion Queue
Waiting Queue


Comment 4 Jonathan Earl Brassow 2007-09-28 11:25:01 EDT
Please run again with the latest build (>= 9/28/07).

I believe this is fixed with the latest build, but I haven't tried it
directly yet, so it needs confirmation. I'm choosing MODIFIED rather than
NEEDINFO.

assigned -> modified
Comment 5 Corey Marthaler 2007-11-08 14:14:14 EST
Reproduced this bug without the mirror leg failure, just the machine recovery.
Had 6 nodes in a cluster (link-02, link-07, link-08, grant-01, grant-02,
grant-03) and killed two of them (link-08 and grant-01). Will post more info
shortly...
Comment 6 Corey Marthaler 2007-11-08 14:30:48 EST
Created attachment 251951 [details]
debug info from link-02
Comment 7 Corey Marthaler 2007-11-08 14:32:39 EST
Created attachment 251961 [details]
debug info from link-07
Comment 8 Corey Marthaler 2007-11-08 14:33:13 EST
Created attachment 251971 [details]
debug info from link-08
Comment 9 Corey Marthaler 2007-11-08 14:33:54 EST
Created attachment 251981 [details]
debug info from grant-01
Comment 10 Corey Marthaler 2007-11-08 14:34:19 EST
Created attachment 251991 [details]
debug info from grant-02
Comment 11 Corey Marthaler 2007-11-08 14:34:46 EST
Created attachment 252001 [details]
debug info from grant-03
Comment 12 Corey Marthaler 2007-11-08 15:05:35 EST
Created attachment 252031 [details]
backtraces from link-02
Comment 13 Corey Marthaler 2007-11-08 15:06:01 EST
Created attachment 252041 [details]
backtraces from link-07
Comment 14 Corey Marthaler 2007-11-08 15:07:03 EST
Created attachment 252051 [details]
backtraces from grant-02
Comment 15 Corey Marthaler 2007-11-08 15:07:30 EST
Created attachment 252061 [details]
backtraces from grant-03
Comment 16 Jonathan Earl Brassow 2008-04-01 11:36:31 EDT
same type of issue as:
239614, 362691, 437446*, 435341, 435491, 217895 ?
Comment 17 Corey Marthaler 2008-07-01 13:59:52 EDT
I appear to have hit this bug, or a very similar one. Again, this was
while failing a cluster node just after killing a device.

Scenario: Kill primary leg of synced 2 leg mirror(s)

****** Mirror hash info for this scenario ******
* name:      syncd_primary_2legs
* sync:      1
* mirrors:   2
* disklog:   1
* failpv:    /dev/sde1
* legs:      2
* pvs:       /dev/sde1 /dev/sdh1 /dev/sdg1
************************************************

Creating mirror(s) on taft-02...
taft-02: lvcreate -m 1 -n syncd_primary_2legs_1 -L 800M helter_skelter
/dev/sde1:0-1000 /dev/sdh1:0-1000 /dev/sdg1:0-150
taft-02: lvcreate -m 1 -n syncd_primary_2legs_2 -L 800M helter_skelter
/dev/sde1:0-1000 /dev/sdh1:0-1000 /dev/sdg1:0-150

Waiting until all mirrors become fully syncd...
        0/2 mirror(s) are fully synced: ( 1=11.50% 2=0.00% )
        0/2 mirror(s) are fully synced: ( 1=32.50% 2=19.50% )
        0/2 mirror(s) are fully synced: ( 1=50.00% 2=37.00% )
        0/2 mirror(s) are fully synced: ( 1=69.00% 2=54.50% )
        0/2 mirror(s) are fully synced: ( 1=88.50% 2=73.00% )
        1/2 mirror(s) are fully synced: ( 1=100.00% 2=91.50% )
        2/2 mirror(s) are fully synced: ( 1=100.00% 2=100.00% )

Creating gfs on top of mirror(s) on taft-02...
Mounting mirrored gfs filesystems on taft-01...
Mounting mirrored gfs filesystems on taft-02...
Mounting mirrored gfs filesystems on taft-03...
Mounting mirrored gfs filesystems on taft-04...

Writing verification files (checkit) to mirror(s) on...
        ---- taft-01 ----
        ---- taft-02 ----
        ---- taft-03 ----
        ---- taft-04 ----

Sleeping 12 seconds to get some outstanding I/O locks before the failure

Disabling device sde on taft-01
Disabling device sde on taft-02
Disabling device sde on taft-03
Disabling device sde on taft-04

Attempting I/O to cause mirror down conversion(s) on taft-02
10+0 records in
10+0 records out
### KILLED TAFT-01 ###


[root@taft-02 ~]# cman_tool nodes
Node  Votes Exp Sts  Name
   1    1    4   X   taft-01
   2    1    4   M   taft-02
   3    1    4   M   taft-03
   4    1    4   M   taft-04
[root@taft-02 ~]# cman_tool services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           4   2 run       -
[2 4 3]

DLM Lock Space:  "clvmd"                             8   3 run       -
[2 4 3]

DLM Lock Space:  "clustered_log"                    16   8 run       S-10,200,0
[2 3 4]

DLM Lock Space:  "gfs1"                             17   9 run       -
[2 3 4]

DLM Lock Space:  "gfs2"                             19  11 run       -
[2 3 4]

GFS Mount Group: "gfs1"                             18  10 run       -
[2 3 4]

GFS Mount Group: "gfs2"                             20  12 recover 4 -
[2 3 4]


[root@taft-03 ~]# cman_tool nodes
Node  Votes Exp Sts  Name
   1    1    4   X   taft-01
   2    1    4   M   taft-02
   3    1    4   M   taft-03
   4    1    4   M   taft-04
[root@taft-03 ~]# cman_tool services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           4   2 run       -
[4 2 3]

DLM Lock Space:  "clvmd"                             8   3 run       -
[4 2 3]

DLM Lock Space:  "clustered_log"                    16   5 run       S-10,200,0
[2 3 4]

DLM Lock Space:  "gfs1"                             17   6 run       -
[2 3 4]

DLM Lock Space:  "gfs2"                             19   8 run       -
[2 3 4]

GFS Mount Group: "gfs1"                             18   7 run       -
[2 3 4]

GFS Mount Group: "gfs2"                             20   9 recover 4 -
[2 3 4]


[root@taft-04 ~]# cman_tool nodes
Node  Votes Exp Sts  Name
   1    1    4   X   taft-01
   2    1    4   M   taft-02
   3    1    4   M   taft-03
   4    1    4   M   taft-04
[root@taft-04 ~]# cman_tool services
Service          Name                              GID LID State     Code
Fence Domain:    "default"                           4   2 run       -
[4 2 3]

DLM Lock Space:  "clvmd"                             8   3 run       -
[4 2 3]

DLM Lock Space:  "clustered_log"                    16   8 run       S-10,200,0
[2 3 4]

DLM Lock Space:  "gfs1"                             17   9 run       -
[2 3 4]

DLM Lock Space:  "gfs2"                             19  11 run       -
[2 3 4]

GFS Mount Group: "gfs1"                             18  10 run       -
[2 3 4]

GFS Mount Group: "gfs2"                             20  12 recover 2 -
[2 3 4]


This was on:
2.6.9-71.ELsmp

lvm2-2.02.37-3.el4    BUILT: Thu Jun 12 10:09:19 CDT 2008
lvm2-cluster-2.02.37-3.el4    BUILT: Thu Jun 12 10:22:07 CDT 2008
device-mapper-1.02.25-2.el4    BUILT: Mon Jun  9 09:28:41 CDT 2008
cmirror-1.0.1-1    BUILT: Tue Jan 30 17:28:02 CST 2007
cmirror-kernel-2.6.9-41.4    BUILT: Tue Jun  3 13:54:29 CDT 2008
Comment 18 Corey Marthaler 2008-07-01 14:12:37 EDT
[root@taft-02 tmp]# dmsetup status
helter_skelter-syncd_primary_2legs_2:
helter_skelter-syncd_primary_2legs_1: 0 1638400 linear
helter_skelter-syncd_primary_2legs_2_mlog: 0 8192 linear
VolGroup00-LogVol01: 0 20447232 linear
helter_skelter-syncd_primary_2legs_2_mimage_1: 0 1638400 linear
VolGroup00-LogVol00: 0 122355712 linear
helter_skelter-syncd_primary_2legs_2_mimage_0: 0 1638400 linear

[root@taft-02 tmp]# dmsetup ls --tree
helter_skelter-syncd_primary_2legs_2 (253:9)
helter_skelter-syncd_primary_2legs_1 (253:5)
 └─ (8:113)
helter_skelter-syncd_primary_2legs_2_mlog (253:6)
 └─ (8:97)
VolGroup00-LogVol01 (253:1)
 └─ (8:2)
helter_skelter-syncd_primary_2legs_2_mimage_1 (253:8)
 └─ (8:113)
VolGroup00-LogVol00 (253:0)
 └─ (8:2)
helter_skelter-syncd_primary_2legs_2_mimage_0 (253:7)
 └─ (8:65)

[root@taft-02 tmp]# dmsetup info
Name:              helter_skelter-syncd_primary_2legs_2
State:             SUSPENDED
Read Ahead:        256
Tables present:    None
Open count:        1
Event number:      107
Major, minor:      253, 9
Number of targets: 0
UUID: LVM-3ch9Xihjw0QZaZmo6GdgncXiQK1GGfXDc7QA2QF1KRx8idm0lAMswKYaF7s0SRbI

Name:              helter_skelter-syncd_primary_2legs_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      71
Major, minor:      253, 5
Number of targets: 1
UUID: LVM-3ch9Xihjw0QZaZmo6GdgncXiQK1GGfXD2EcXAI1lQPl367Rx2MSRtqs31sY4GEbC

Name:              helter_skelter-syncd_primary_2legs_2_mlog
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 6
Number of targets: 1
UUID: LVM-3ch9Xihjw0QZaZmo6GdgncXiQK1GGfXDFHuWUQA2h2mBe8QyrXSDLqBj65rpnvNP

Name:              VolGroup00-LogVol01
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: LVM-0aFTiqoLYX7dWJU63sScCNgaO7boq16XlvpcVPHdnYWO8lwcHAKZEeJjxI49e75R

Name:              helter_skelter-syncd_primary_2legs_2_mimage_1
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 8
Number of targets: 1
UUID: LVM-3ch9Xihjw0QZaZmo6GdgncXiQK1GGfXDqoHbm58YX0yqJ1sCqWInV0LS7XzWgQTW

Name:              VolGroup00-LogVol00
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: LVM-0aFTiqoLYX7dWJU63sScCNgaO7boq16XDS8P1Q22JxhHkAfgPaQUhfFwbeuN3QFA

Name:              helter_skelter-syncd_primary_2legs_2_mimage_0
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 7
Number of targets: 1
UUID: LVM-3ch9Xihjw0QZaZmo6GdgncXiQK1GGfXDJGDwHL0gPM2JTrKkv2LdHRDyJQuKDAjC

Comment 19 Corey Marthaler 2008-07-01 14:13:46 EDT
It's possible that this is related to bz 453478; however, in this case LVM
is deadlocked due to the stuck recovery.
Comment 20 Corey Marthaler 2008-07-01 14:16:06 EDT
[root@taft-02 tmp]# cat /proc/cluster/sm_debug
02000014 recover state 2
02000012 recover state 1
02000012 cb recover state 2
02000014 recover state 2
02000012 recover state 3
02000014 cb recover state 2
02000014 recover state 3
02000012 recover state 4
02000014 recover state 4
02000012 recover state 5


[root@taft-03 ~]# cat /proc/cluster/sm_debug
02000014 recover state 2
02000012 recover state 1
02000014 cb recover state 2
02000014 recover state 3
02000012 recover state 2
02000012 cb recover state 2
02000014 recover state 4
02000012 recover state 3
02000014 recover state 4
02000012 recover state 5

[root@taft-04 ~]# cat /proc/cluster/sm_debug
 0
02000012 recover state 0
02000014 recover state 1
02000012 recover state 4
02000014 recover state 2
02000012 recover state 1
02000012 cb recover state 2
02000014 recover state 2
02000012 recover state 3
02000014 recover state 2
02000012 recover state 5



Comment 21 Corey Marthaler 2008-07-01 14:18:04 EDT
[root@taft-02 tmp]# cat /proc/cluster/dlm_stats
DLM stats (HZ=1000)

Lock operations:        679
Unlock operations:      315
Convert operations:      46
Completion ASTs:       1038
Blocking ASTs:           18

Lockqueue        num  waittime   ave
WAIT_RSB         498       501     1
WAIT_CONV         24        11     0
WAIT_GRANT       139       166     1
WAIT_UNLOCK       75        16     0
Total            736       694     0

[root@taft-03 ~]# cat /proc/cluster/dlm_stats
DLM stats (HZ=1000)

Lock operations:        566
Unlock operations:      199
Convert operations:      17
Completion ASTs:        782
Blocking ASTs:           14

Lockqueue        num  waittime   ave
WAIT_RSB         417       443     1
WAIT_CONV         10        41     4
WAIT_GRANT        99       160     1
WAIT_UNLOCK       45        10     0
Total            571       654     1


[root@taft-04 ~]# cat /proc/cluster/dlm_stats
DLM stats (HZ=1000)

Lock operations:        618
Unlock operations:      255
Convert operations:      31
Completion ASTs:        903
Blocking ASTs:           13

Lockqueue        num  waittime   ave
WAIT_RSB         471       319     0
WAIT_CONV          5         1     0
WAIT_GRANT       156       192     1
WAIT_UNLOCK      103        66     0
Total            735       578     0

[root@taft-02 tmp]# cat /proc/cluster/dlm_debug
s
gfs2 purge locks of departed nodes
gfs2 purged 1 locks
gfs2 update remastered resources
gfs1 mark waiting requests
gfs2 updated 25 resources
gfs1 mark 20241 lq 3 nodeid 1
gfs2 rebuild locks
gfs1 marked 1 requests
gfs1 purge locks of departed nodes
gfs2 rebuilt 0 locks
gfs2 recover event 47 done
gfs1 purged 1 locks
gfs1 update remastered resources
clvmd move flags 0,0,1 ids 31,47,47
clvmd process held requests
clvmd processed 0 requests
clvmd resend marked requests
clvmd resent 0 requests
clvmd recover event 47 finished
gfs1 updated 25 resources
gfs1 rebuild locks
gfs1 rebuilt 0 locks
gfs1 recover event 47 done
gfs2 move flags 0,0,1 ids 45,47,47
gfs2 process held requests
gfs2 processed 0 requests
gfs2 resend marked requests
gfs2 resent 0 requests
gfs2 recover event 47 finished
gfs1 move flags 0,0,1 ids 43,47,47
gfs1 process held requests
gfs1 processed 0 requests
gfs1 resend marked requests
gfs1 resend 20241 lq 3 flg 1200008 node 0/0 "       8
gfs1 resent 1 requests
gfs1 recover event 47 finished


[root@taft-03 ~]# cat /proc/cluster/dlm_debug
waiting requests
clvmd marked 0 requests
clvmd purge locks of departed nodes
clvmd purged 2 locks
clvmd update remastered resources
clvmd updated 0 resources
clvmd rebuild locks
clvmd rebuilt 0 locks
clvmd recover event 20 done
clvmd move flags 0,0,1 ids 9,20,20
clvmd process held requests
clvmd processed 0 requests
clvmd resend marked requests
clvmd resent 0 requests
clvmd recover event 20 finished
gfs2 rebuilt 26 locks
gfs2 recover event 20 done
gfs1 mark waiting requests
gfs1 marked 0 requests
gfs1 purge locks of departed nodes
gfs1 purged 1 locks
gfs1 update remastered resources
gfs2 move flags 0,0,1 ids 18,20,20
gfs2 process held requests
gfs2 processed 0 requests
gfs2 resend marked requests
gfs2 resent 0 requests
gfs2 recover event 20 finished
gfs1 updated 24 resources
gfs1 rebuild locks
gfs1 rebuilt 25 locks
gfs1 recover event 20 done
gfs1 move flags 0,0,1 ids 16,20,20
gfs1 process held requests
gfs1 processed 0 requests
gfs1 resend marked requests
gfs1 resent 0 requests
gfs1 recover event 20 finished


[root@taft-04 ~]# cat /proc/cluster/dlm_debug
rebuilt 0 locks
clvmd recover event 37 done
clvmd move flags 0,0,1 ids 31,37,37
clvmd process held requests
clvmd processed 0 requests
clvmd resend marked requests
clvmd resent 0 requests
clvmd recover event 37 finished
gfs2 mark waiting requests
gfs2 marked 0 requests
gfs2 purge locks of departed nodes
gfs2 purged 1 locks
gfs2 update remastered resources
gfs2 updated 26 resources
gfs2 rebuild locks
gfs1 mark waiting requests
gfs1 marked 0 requests
gfs1 purge locks of departed nodes
gfs1 purged 1 locks
gfs1 update remastered resources
gfs2 rebuilt 26 locks
gfs2 recover event 37 done
gfs2 move flags 0,0,1 ids 35,37,37
gfs2 process held requests
gfs2 processed 0 requests
gfs2 resend marked requests
gfs2 resent 0 requests
gfs2 recover event 37 finished
gfs1 updated 25 resources
gfs1 rebuild locks
gfs1 rebuilt 25 locks
gfs1 recover event 37 done
gfs1 move flags 0,0,1 ids 33,37,37
gfs1 process held requests
gfs1 processed 0 requests
gfs1 resend marked requests
gfs1 resent 0 requests
gfs1 recover event 37 finished

Comment 22 Corey Marthaler 2008-07-01 14:47:57 EDT
Created attachment 310702 [details]
log and kern dump from taft-02
Comment 23 Corey Marthaler 2008-07-01 14:48:30 EDT
Created attachment 310703 [details]
log and kern dump from taft-03
Comment 24 Corey Marthaler 2008-07-01 14:49:27 EDT
Created attachment 310704 [details]
log and kern dump from taft-04
Comment 29 RHEL Product and Program Management 2010-10-28 11:05:53 EDT
Development Management has reviewed and declined this request.  You may appeal
this decision by reopening this request.
