Bug 454052 - plock rate limit fails to reenable locking
Summary: plock rate limit fails to reenable locking
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: cman
Version: 10
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: David Teigland
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2008-07-04 05:48 UTC by Adrian A. Sender
Modified: 2009-11-18 15:51 UTC
CC List: 11 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2009-11-18 15:51:14 UTC
Type: ---
Embargoed:


Attachments
ping_pong.c - use this to test all future releases of cman (3.34 KB, text/plain)
2008-07-04 05:48 UTC, Adrian A. Sender

Description Adrian A. Sender 2008-07-04 05:48:56 UTC
Please find attached the ping_pong.c test. This has been run against a 2-node
cman gfs2 cluster using DRBD. When run on a single node we only achieve 70
locks per second; this should be around the 100,000 mark. When run on both
nodes the cluster dies.

gcc -o ping_ping ping_ping.c 

./ping_ping 
ping_pong [options] <file> <num_locks>
           -r    do reads
           -w    do writes
           -m    use mmap


Core-01: ./ping_ping -rw /gfs2-00/test_ping_ping 2
Core-02: ./ping_ping -rw /gfs2-00/test_ping_ping 2

ps aux | grep ping_pong

root      3758  0.0  0.0   3880   408 pts/1    D+   23:26

Please use this test against all future releases of CMAN and ensure it
passes and does not crash the cluster.


If there is a temporary network interruption on the device that cman
communicates over, it will die.  The network was interrupted for less than 30 seconds.

Jul  3 00:59:34 core-02 kernel: r8169: eth0: link down
Jul  3 00:59:34 core-02 NetworkManager: <info>  (eth0): carrier
now OFF (device state 1)
Jul  3 00:59:36 core-02 kernel: r8169: eth0: link up
Jul  3 00:59:36 core-02 NetworkManager: <info>  (eth0): carrier
now ON (device state 1)
Jul  3 00:59:41 core-02 kernel: eth3: link down
Jul  3 00:59:41 core-02 NetworkManager: <info>  (eth3): carrier
now OFF (device state 1)
Jul  3 00:59:42 core-02 kernel: eth3: link up, 100Mbps,
full-duplex, lpa 0xC1E1
Jul  3 00:59:42 core-02 NetworkManager: <info>  (eth3): carrier
now ON (device state 1)
Jul  3 00:59:48 core-02 kernel: dlm: closing connection to node
1
Jul  3 00:59:48 core-02 fenced[2632]: core-01 not a cluster
member after 0 sec post_fail_delay
Jul  3 00:59:48 core-02 fenced[2632]: fencing node "core-01"
Jul  3 00:59:48 core-02 fenced[2632]: fence "core-01" failed
Jul  3 00:59:53 core-02 fenced[2632]: fencing node "core-01"
==========

Some additional logs

Jul  3 20:32:36 core-02 ccsd[26667]: Unable to connect
to cluster infrastructure after 30 seconds.
Jul  3 20:33:04 core-02 ccsd[26667]: Initial status::
Quorate
Jul  3 20:33:05 core-02 groupd[26792]: found
uncontrolled kernel object gfs2-00 in /sys/kernel/dlm
Jul  3 20:33:05 core-02 groupd[26792]: found
uncontrolled kernel object gfs2-01 in /sys/kernel/dlm
Jul  3 20:33:05 core-02 groupd[26792]: found
uncontrolled kernel object hardcore:gfs2-00 in /sys/fs/gfs2
Jul  3 20:33:05 core-02 groupd[26792]: found
uncontrolled kernel object hardcore:gfs2-01 in /sys/fs/gfs2
Jul  3 20:33:05 core-02 groupd[26792]: local node must
be reset to clear 4 uncontrolled instances of gfs and/or dlm
Jul  3 20:33:05 core-02 openais[26780]: cman killed by
node 2 because we were killed by cman_tool or other application
Jul  3 20:33:05 core-02 fence_node[26793]: Fence of
"core-02" was unsuccessful

When tested with CTDB there are locking issues and the cluster becomes unresponsive.

root      3235  0.0  0.0   8416   488 ?        D    22:49
0:00 ctdbd --reclock=/gfs2-00/ctdb/ctdb_recovery_lock

<sendro> INFO: task gfs2_quotad:3105 blocked for more than 120 seconds.
<sendro> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
<sendro> gfs2_quotad   D ffff81006f0c3d40     0  3105      2
<sendro>  ffff81006f0c3cd0 0000000000000046 0000000000000000
ffffffff8112f2cd
<sendro>  0000000000000000 ffffffff814b4e00 ffffffff814b4e00
ffffffff883a373e
<sendro>  ffff81006f0c3c40 ffff81006f0c0000 ffffffff813b9610
ffff81006f0c0328
<sendro> Call Trace:
<sendro>  [<ffffffff8112f2cd>] ? __up_read+0x7a/0x85
<sendro>  [<ffffffff883a373e>] ? :dlm:dlm_put_lockspace+0x18/0x25
<sendro>  [<ffffffff883c2738>] :gfs2:just_schedule+0x9/0xd
<sendro>  [<ffffffff8128d76d>] __wait_on_bit+0x47/0x79
<sendro>  [<ffffffff883c272f>] ? :gfs2:just_schedule+0x0/0xd
<sendro>  [<ffffffff883c272f>] ? :gfs2:just_schedule+0x0/0xd
<sendro>  [<ffffffff8128d809>] out_of_line_wait_on_bit+0x6a/0x77
<sendro>  [<ffffffff81046b43>] ? wake_bit_function+0x0/0x2a
<sendro>  [<ffffffff883c70ef>] ? :gfs2:gfs2_lm_lock+0x23/0x25
<sendro>  [<ffffffff883c272a>] :gfs2:wait_on_holder+0x42/0x47
<sendro>  [<ffffffff883c3b95>] :gfs2:glock_wait_internal+0x105/0x261
<sendro>  [<ffffffff883c3ec4>] :gfs2:gfs2_glock_nq+0x1d3/0x1fc
<sendro>  [<ffffffff883d7630>] :gfs2:gfs2_statfs_sync+0x47/0x21d
<sendro>  [<ffffffff8103d027>] ? del_timer_sync+0x14/0x21
<sendro>  [<ffffffff883d7628>] ? :gfs2:gfs2_statfs_sync+0x3f/0x21d
<sendro>  [<ffffffff8128d5b9>] ? schedule_timeout+0x88/0xb4
<sendro>  [<ffffffff883bc278>] :gfs2:gfs2_quotad+0x5c/0x156
<sendro>  [<ffffffff883bc21c>] ? :gfs2:gfs2_quotad+0x0/0x156
<sendro>  [<ffffffff810467eb>] kthread+0x49/0x76
<sendro>  [<ffffffff8100ccf8>] child_rip+0xa/0x12
<sendro>  [<ffffffff810467a2>] ? kthread+0x0/0x76
<sendro>  [<ffffffff8100ccee>] ? child_rip+0x0/0x12
<sendro> -------------

Comment 1 Adrian A. Sender 2008-07-04 05:48:57 UTC
Created attachment 310999 [details]
ping_pong.c - use this to test all future releases of cman

Comment 2 Steve Whitehouse 2008-07-04 13:42:52 UTC
It's not obvious to me right away what has caused the original issue. The
message from quotad is just a consequence of locking being broken, I think.

Comment 3 Christine Caulfield 2008-07-07 08:25:11 UTC
> If there is a temporary network interruption on the device that cman
> communicates over it will die.

This is correct. How long cman will tolerate a network outage depends on the
openais tuning settings. If you are expecting outages of 9 seconds or longer
then you should tune the <totem token = "xxx"/> setting in cluster.conf.

Because cman depends on the network for heartbeat and communications, if you
remove that communication channel the other nodes will kill it if communications
to it time out.
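
As a concrete illustration of the tuning Christine describes (a sketch only:
the element and attribute are the ones she names above, while the 30000 ms
value is just an example chosen to ride out the ~30 second interruption from
the description, not a recommendation from this bug; the default token
timeout, visible in the comment 29 log below, is 10000 ms):

  <cluster name="hardcore" config_version="2">
    <!-- totem token timeout in milliseconds -->
    <totem token="30000"/>
    ...
  </cluster>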

Comment 4 David Teigland 2008-07-07 15:36:50 UTC
ping_pong works great on gfs, you should be able to find some information
about that in the samba-technical archives.

DRBD is going to cause you a bunch of problems, I expect.  


Comment 5 Steve Whitehouse 2008-07-31 13:13:13 UTC
Adrian, please can you verify whether you hit the same problems without DRBD?
Then we can look at whatever might still be an issue.


Comment 6 Adrian A. Sender 2008-08-01 07:31:10 UTC
I mounted the underlying drbd device and tested ping_pong. I am still getting
94 locks a second, which was the same as with the drbd device mounted.

Without a SAN / drbd I cannot test gfs2 in a clustered filesystem environment;
however, I mounted both underlying drbd devices on both nodes anyway and ran
the ping_pong test on both nodes. At this stage both ping_pong tests go into
D state.

[root@core-01 ~]# ps aux | grep ping
root      3321  0.0  0.0   3880   408 pts/0    D+   17:21   0:00 ./ping_ping -rw
/gfs2-00/test_ping_ping 2

[root@core-02 ~]# ps aux | grep ping
root      3755  0.0  0.0   3884   488 pts/0    D+   17:17   0:00 ./ping_ping -w
/gfs2-00/test_ping_ping 2

This is the same result as when using DRBD; however cman does not die.

Node  Sts   Inc   Joined               Name
   1   M  363596   2008-08-01 17:10:19  core-01
   2   M  363600   2008-08-01 17:10:20  core-02


On one node
-----------
/dev/sdb1            488316832 346811112 141505720  72% /gfs2-00

[root@core-02 ~]# ./ping_ping -w /gfs2-00/test_ping_ping 2
      94 locks/sec

Regards,

Adrian Sender

Comment 7 Steve Whitehouse 2008-08-07 10:27:10 UTC
Dave, any ideas what the problem is here? Adrian, what versions of the gfs2-utils and cman packages are you using? Maybe they're too old to have the plock performance patch in.

Comment 8 David Teigland 2008-08-07 15:07:48 UTC
The whole setup looks suspect to me.  I still think you need to
set up a proper cluster with proper shared storage (no DRBD),
and released/validated versions of the cluster code (RHEL5.2+ or
cluster-2.03.06+), and gfs1.  Until then, there are too many
variables to suggest anything specific.  After you have that
cleared up, then make sure you have the following in your
cluster.conf for maximum plock performance:
  <gfs_controld plock_ownership="1" plock_rate_limit="0"/>
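
For placement, a minimal sketch: the element sits directly under <cluster/>
in /etc/cluster/cluster.conf, alongside <cman/> and <clusternodes/> (the
surrounding structure here mirrors the config Adrian posts in comment 28;
only the gfs_controld line comes from this comment):

  <cluster name="hardcore" config_version="2">
    <gfs_controld plock_ownership="1" plock_rate_limit="0"/>
    <cman two_node="1" expected_votes="1"/>
    <!-- clusternodes and fencing as in the existing config -->
  </cluster>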

Comment 9 Adrian A. Sender 2008-08-08 02:27:42 UTC
I have applied all latest updates in Fedora 9.

[root@core-01 ~]# ./ping_ping -rw /gfs2-00/test_ping_ping 2
data increment = 1
      93 locks/sec
      95 locks/sec

[root@core-02 ~]# ./ping_ping -w /gfs2-00/test_ping_ping 2


Cman does not die this time; however, both processes still go into D state.

[root@core-01 gfs2-00]# ps aux | grep ping
root     12867  0.0  0.0   3880   520 pts/0    D+   12:18   0:00 ./ping_ping -rw /gfs2-00/test_ping_ping 2

cman-2.03.05-1.fc9.x86_64
gfs2-utils-2.03.05-1.fc9.x86_64
drbd-8.2.6-3.x86_64
drbd-km-2.6.25.11_97.fc9.x86_64-8.2.6-3.x86_64

How can I do additional debugging to help? Although I partially agree with what you say, Dave (use shared storage / a SAN), I still think this issue should not be dismissed.

Comment 10 Fabio Massimo Di Nitto 2008-08-08 04:54:57 UTC
(In reply to comment #9)
> I have applied all latest updates in Fedora 9.
> 
> [root@core-01 ~]# ./ping_ping -rw /gfs2-00/test_ping_ping 2
> data increment = 1
>       93 locks/sec
>       95 locks/sec
> 
> [root@core-02 ~]# ./ping_ping -w /gfs2-00/test_ping_ping 2
> 
> 
> Cman does not die this time; however both processes go into dstate still.
> 
> [root@core-01 gfs2-00]# ps aux | grep ping
> root     12867  0.0  0.0   3880   520 pts/0    D+   12:18   0:00 ./ping_ping
> -rw /gfs2-00/test_ping_ping 2
> 
> cman-2.03.05-1.fc9.x86_64
> gfs2-utils-2.03.05-1.fc9.x86_64
> drbd-8.2.6-3.x86_64
> drbd-km-2.6.25.11_97.fc9.x86_64-8.2.6-3.x86_64
> 
> How can I do additional debugging to help? Although I partially agree with what
> you say Dave (use shared storage SAN); I still think this issue should not be
> dismissed.

Hi,

what kind of network setup do you have?

Are you using the same network interface for both cman/dlm and drbd traffic?

Can you try to separate them using 2 different lans?

I wonder if the generated load could cause packet loss.

Comment 11 Adrian A. Sender 2008-08-08 05:23:36 UTC
Initially when testing I eliminated network connectivity / load issues. All devices are on independent gigabit links - I have tried alternating them too, with the same result.

Regards,

Adrian Sender.

Comment 12 Adrian A. Sender 2008-08-08 05:57:17 UTC
Strace
------

Although this may not be of significance, I have provided strace output of the ping_pong process while it was running on both nodes.


fcntl(3, F_SETLKW, {type=F_UNLCK, whence=SEEK_SET, start=0, len=1}) = 0
fcntl(3, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=1}) = 0
pread(3, "\253", 1, 1)                  = 1
pwrite(3, "\254", 1, 1)                 = 1
fcntl(3, F_SETLKW, {type=F_UNLCK, whence=SEEK_SET, start=1, len=1}) = 0
write(1, "      96 locks/sec\r", 19)    = 19
fcntl(3, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=1, len=1}) = 0
pread(3, "\255", 1, 0)                  = 1
pwrite(3, "\256", 1, 0)                 = 1
fcntl(3, F_SETLKW, {type=F_UNLCK, whence=SEEK_SET, start=0, len=1}) = 0
fcntl(3, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=1} <unfinished ...>
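
For reference, the loop the strace shows is the heart of the ping_pong
algorithm: each process holds a write lock on one byte, grabs the next byte
before releasing the previous one, and cycles around num_locks bytes so that
processes on different nodes hand the locks back and forth. A minimal C
sketch of that loop (a hypothetical reconstruction, not the attached
ping_pong.c, which also implements the -r/-w/-m options and the locks/sec
reporting):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

/* Apply a blocking byte-range lock (or unlock) on one byte of fd. */
static void set_lock(int fd, off_t off, short type)
{
	struct flock fl = {
		.l_type = type,          /* F_WRLCK to lock, F_UNLCK to release */
		.l_whence = SEEK_SET,
		.l_start = off,
		.l_len = 1,              /* one byte, as in the strace above */
	};
	if (fcntl(fd, F_SETLKW, &fl) != 0) {
		perror("fcntl(F_SETLKW)");
		exit(1);
	}
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <file> <num_locks>\n", argv[0]);
		return 1;
	}
	int num_locks = atoi(argv[2]);
	int fd = open(argv[1], O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }

	int i = 0;
	set_lock(fd, i, F_WRLCK);
	for (;;) {
		/* grab byte i+1 before releasing byte i */
		set_lock(fd, (i + 1) % num_locks, F_WRLCK);
		/* the real test preads/pwrites a data byte here and
		   maintains the locks/sec counter */
		set_lock(fd, i, F_UNLCK);
		i = (i + 1) % num_locks;
	}
}

This also shows why num_locks must exceed the number of competing processes
(as Dave points out in comment 26): with two processes and num_locks=2, each
can end up holding the byte the other is waiting for.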

Comment 13 Adrian A. Sender 2008-08-21 06:38:49 UTC
[root@core-01 ~]# ./ping_ping -w /gfs2-00/test_ping_ping 2

[root@core-02 ~]# ./ping_ping -w /gfs2-00/test_ping_ping 2


[root@core-02 ~]# ps -eo state,pid,wchan:20,cmd|egrep -v '[RST]'


D 27808 gdlm_plock           ./ping_ping -w /gfs2-00/test_ping_ping 2


./fs/gfs2/locking/dlm/plock.c:gdlm_plock()

kernel backtrace for the ping-pong process:

ping_ping     D 0000000000000007     0 18815      1
 ffff81003fcb3dc8 0000000000000082 0000000000000001 0000000000000000
 ffffffff8843a1e8 ffffffff814b4e00 ffffffff814b4e00 ffffffff8102b56c
 ffff81003fcb3d68 ffff81000e6f6000 ffff8100748a2000 ffff81000e6f6328
Call Trace:
 [<ffffffff8102b56c>] ? default_wake_function+0xd/0xf
 [<ffffffff8102b893>] ? __wake_up+0x43/0x50
 [<ffffffff88435602>] :lock_dlm:gdlm_plock+0x169/0x1f3
 [<ffffffff81046b87>] ? autoremove_wake_function+0x0/0x38
 [<ffffffff883c71af>] :gfs2:gfs2_lm_plock+0x20/0x22
 [<ffffffff883cd660>] :gfs2:gfs2_lock+0x9d/0xa6
 [<ffffffff810b3519>] fcntl_setlk+0x163/0x2c6
 [<ffffffff8106c6c3>] ? audit_syscall_entry+0x126/0x15a
 [<ffffffff8106c394>] ? audit_syscall_exit+0x331/0x353
 [<ffffffff810afed7>] sys_fcntl+0x28b/0x307
 [<ffffffff8100c052>] tracesys+0xd5/0xda

Comment 14 Steve Whitehouse 2008-09-30 14:35:23 UTC
Adrian, can you reproduce this without drbd? We know that causes problems, so we can't progress this unless you can eliminate that variable, I'm afraid. If the problem has gone away, then we can just close this bug.

Comment 15 Adrian A. Sender 2008-10-14 05:29:40 UTC
I will be testing this using a kvm virtual clustering technique. This has been proven on a SOFS cluster. 

When I am back from IN next month I will commence testing. Please do not close this bug report until I have confirmed whether or not the DRBD component is the issue.

http://git.samba.org/?p=tridge/autocluster.git;a=blob_plain;f=README;hb=master

Comment 16 Steve Whitehouse 2008-11-20 16:08:56 UTC
Adrian, any more details on this yet?

Comment 17 Adrian A. Sender 2008-11-21 01:47:06 UTC
Hi Steve,

I am waiting for Fedora 10 to be released so I can commence testing on this new platform. Will have an update in 1 week.

Comment 18 Adrian A. Sender 2008-12-09 03:24:32 UTC
I have not tried without DRBD yet, but with the latest testing packages the problem still exists. This time, even when run on one node, I did not achieve any locks on the gfs2 file system and the process went directly into D state.

2.6.27.5-117.fc10.x86_64
cman-2.99.12-2.fc10.x86_64

[root@core-01 ~]# ps -eo state,pid,wchan:20,cmd|egrep -v '[RST]'
D  3048 dlm_posix_unlock     ./ping_ping -rw /gfs2-00/locking_file 2

Comment 19 Adrian A. Sender 2008-12-09 10:29:56 UTC
[root@core-02 ~]# mount -t gfs2 /dev/sdb1 /gfs2-01 -v
/sbin/mount.gfs2: mount /dev/sdb1 /gfs2-01
/sbin/mount.gfs2: parse_opts: opts = "rw"
/sbin/mount.gfs2:   clear flag 1 for "rw", flags = 0
/sbin/mount.gfs2: parse_opts: flags = 0
/sbin/mount.gfs2: parse_opts: extra = ""
/sbin/mount.gfs2: parse_opts: hostdata = ""
/sbin/mount.gfs2: parse_opts: lockproto = ""
/sbin/mount.gfs2: parse_opts: locktable = ""
/sbin/mount.gfs2: lock_dlm_join: hostdata: "hostdata=jid=0:id=3557438219:first=1"
/sbin/mount.gfs2: mount(2) ok
/sbin/mount.gfs2: read_proc_mounts: device = "/dev/sdb1"
/sbin/mount.gfs2: read_proc_mounts: opts = "rw,hostdata=jid=0:id=3557438219:first=1"


Mounting the underlying physical device, the problem still happens; the process goes into D state in dlm_posix_unlock, which appears to be broken.

Comment 20 David Teigland 2008-12-09 17:24:50 UTC
Try running dlm_controld and gfs_controld in the foreground with
debugging enabled,

> dlm_controld -DP
> gfs_controld -DP

Then mount and run the plock test and let me know what you see.

Comment 21 Adrian A. Sender 2008-12-10 10:36:49 UTC
[root@core-02 ~]# dlm_controld -DP
1228940923 found /dev/misc/dlm-control minor 59
1228940923 found /dev/misc/dlm_plock minor 58
1228940923 clear_configfs_nodes rmdir "/sys/kernel/config/dlm/cluster/comms/2"
1228940923 cman: node 2 added
1228940923 set_configfs_node 2 192.168.0.3 local 1
1228940923 group_mode 3 compat 2
1228940923 setup_cpg 14
1228940923 set_protocol member_count 1 propose daemon 1.1.1 kernel 1.1.1
1228940923 run protocol from nodeid 2
1228940923 daemon run 1.1.1 max 1.1.1 kernel run 1.1.1 max 1.1.1
1228940923 plocks 18
1228940923 plock cpg message size: 104 bytes
1228940974 client connection 6 fd 19
1228941023 cman: node 1 added
1228941023 set_configfs_node 1 192.168.0.2 local 0
1228941102 uevent: add@/kernel/dlm/gfs2-00
1228941102 kernel: add@ gfs2-00
1228941102 uevent: online@/kernel/dlm/gfs2-00
1228941102 kernel: online@ gfs2-00
1228941102 gfs2-00 add_change cg 1 joined nodeid 2
1228941102 gfs2-00 add_change cg 1 we joined
1228941102 gfs2-00 add_change cg 1 counts member 1 joined 1 remove 0 failed 0
1228941102 gfs2-00 check_fencing done
1228941102 gfs2-00 check_quorum done
1228941102 gfs2-00 check_fs done
1228941102 gfs2-00 send_start cg 1 flags 1 counts 0 1 1 0 0
1228941102 gfs2-00 receive_start 2:1 len 76
1228941102 gfs2-00 match_change 2:1 matches cg 1
1228941102 gfs2-00 wait_messages cg 1 got all 1
1228941102 gfs2-00 start_kernel cg 1 member_count 1
1228941102 write "1069402490" to "/sys/kernel/dlm/gfs2-00/id"
1228941102 set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/gfs2-00/nodes/2"
1228941102 write "1" to "/sys/kernel/dlm/gfs2-00/control"
1228941102 write "0" to "/sys/kernel/dlm/gfs2-00/event_done"
1228941103 uevent: add@/kernel/dlm/gfs2-01
1228941103 kernel: add@ gfs2-01
1228941103 uevent: online@/kernel/dlm/gfs2-01
1228941103 kernel: online@ gfs2-01
1228941103 gfs2-01 add_change cg 1 joined nodeid 2
1228941103 gfs2-01 add_change cg 1 we joined
1228941103 gfs2-01 add_change cg 1 counts member 1 joined 1 remove 0 failed 0
1228941103 gfs2-01 check_fencing done
1228941103 gfs2-01 check_quorum done
1228941103 gfs2-01 check_fs done
1228941103 gfs2-01 send_start cg 1 flags 1 counts 0 1 1 0 0
1228941103 gfs2-01 receive_start 2:1 len 76
1228941103 gfs2-01 match_change 2:1 matches cg 1
1228941103 gfs2-01 wait_messages cg 1 got all 1
1228941103 gfs2-01 start_kernel cg 1 member_count 1
1228941103 write "648476731" to "/sys/kernel/dlm/gfs2-01/id"
1228941103 set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/gfs2-01/nodes/2"
1228941103 write "1" to "/sys/kernel/dlm/gfs2-01/control"
1228941103 write "0" to "/sys/kernel/dlm/gfs2-01/event_done"
1228941129 gfs2-01 read plock 30222 LK WR 0-0 2/4175/ffff8800711ed500 w 1
1228941129 gfs2-01 receive own 30222 from 2 owner 2
1228941129 gfs2-01 read plock 30222 LK WR 1-1 2/4175/ffff8800711ed500 w 1
1228941129 gfs2-01 read plock 30222 UN - 0-0 2/4175/ffff8800711ed500 w 0
1228941129 gfs2-01 read plock 30222 LK WR 0-0 2/4175/ffff8800711ed500 w 1
1228941129 gfs2-01 read plock 30222 UN - 1-1 2/4175/ffff8800711ed500 w 0
1228941129 gfs2-01 read plock 30222 LK WR 1-1 2/4175/ffff8800711ed500 w 1
1228941129 gfs2-01 read plock 30222 UN - 0-0 2/4175/ffff8800711ed500 w 0
[... the same four-line cycle of LK WR / UN operations on plock 30222 repeats, unchanged, for the remainder of the trace ...]
1228941129 gfs2-01 read plock 30222 LK WR 0-0 2/4175/ffff8800711ed500 w 1
- stops here



[root@core-02 ~]# gfs_controld -DP
1228940974 /cluster/gfs_controld/@plock_rate_limit is 0
1228940974 /cluster/gfs_controld/@plock_ownership is 1
1228940974 groupd 13
1228940974 group_mode 3 compat 2
1228940974 setup_cpg 14
1228940974 set_protocol member_count 1 propose daemon 1.1.1 kernel 1.1.1
1228940974 run protocol from nodeid 2
1228940974 daemon run 1.1.1 max 1.1.1 kernel run 1.1.1 max 1.1.1
1228941102 client connection 6 fd 17
1228941102 join: /gfs2-00 gfs2 lock_dlm hardcore:gfs2-00 rw /dev/drbd0
1228941102 gfs2-00 join: cluster name matches: hardcore
1228941102 gfs2-00 process_dlmcontrol register nodeid 0 result 0
1228941102 gfs2-00 add_change cg 1 joined nodeid 2
1228941102 gfs2-00 add_change cg 1 we joined
1228941102 gfs2-00 add_change cg 1 counts member 1 joined 1 remove 0 failed 0
1228941102 gfs2-00 wait_conditions skip for zero started_count
1228941102 gfs2-00 send_start cg 1 id_count 1 om 0 nm 1 oj 0 nj 0
1228941102 gfs2-00 receive_start 2:1 len 92
1228941102 gfs2-00 match_change 2:1 matches cg 1
1228941102 gfs2-00 wait_messages cg 1 got all 1
1228941102 gfs2-00 pick_first_recovery_master low 2 old 0
1228941102 gfs2-00 sync_state all_nodes_new first_recovery_needed master 2
1228941102 gfs2-00 create_old_nodes all new
1228941102 gfs2-00 create_new_nodes 2 ro 0 spect 0
1228941102 gfs2-00 create_failed_journals all new
1228941102 gfs2-00 create_new_journals 2 gets jid 0
1228941102 gfs2-00 apply_recovery first start_kernel
1228941102 gfs2-00 start_kernel cg 1 member_count 1
1228941102 gfs2-00 set /sys/fs/gfs2/hardcore:gfs2-00/lock_module/block to 0
1228941102 gfs2-00 set open /sys/fs/gfs2/hardcore:gfs2-00/lock_module/block error -1 2
1228941102 gfs2-00 client_reply_join_full ci 6 result 0 hostdata=jid=0:id=3440443978:first=1
1228941102 client_reply_join gfs2-00 ci 6 result 0
1228941102 uevent: add@/fs/gfs2/hardcore:gfs2-00
1228941102 kernel: add@ hardcore:gfs2-00
1228941102 uevent: add@/fs/gfs2/hardcore:gfs2-00/lock_module
1228941102 kernel: add@ hardcore:gfs2-00
1228941102 gfs2-00 ping_kernel_mount 0
1228941102 uevent: change@/fs/gfs2/hardcore:gfs2-00/lock_module
1228941102 kernel: change@ hardcore:gfs2-00
1228941102 gfs2-00 recovery_uevent jid 0 first recovery done 0
1228941103 uevent: change@/fs/gfs2/hardcore:gfs2-00/lock_module
1228941103 kernel: change@ hardcore:gfs2-00
1228941103 gfs2-00 recovery_uevent jid 1 first recovery done 0
1228941103 gfs2-00 recovery_uevent first_done
1228941103 uevent: change@/fs/gfs2/hardcore:gfs2-00/lock_module
1228941103 kernel: change@ hardcore:gfs2-00
1228941103 gfs2-00 recovery_uevent jid 1 first recovery done 1
1228941103 gfs2-00 receive_first_recovery_done from 2 master 2 mount_client_notified 1
1228941103 gfs2-00 wait_recoveries done
1228941103 mount_done: gfs2-00 result 0
1228941103 connection 6 read error -1
1228941103 gfs2-00 receive_mount_done from 2 result 0
1228941103 gfs2-00 wait_recoveries done
1228941103 client connection 6 fd 17
1228941103 join: /gfs2-01 gfs2 lock_dlm hardcore:gfs2-01 rw /dev/drbd1
1228941103 gfs2-01 join: cluster name matches: hardcore
1228941103 gfs2-01 process_dlmcontrol register nodeid 0 result 0
1228941103 gfs2-01 add_change cg 1 joined nodeid 2
1228941103 gfs2-01 add_change cg 1 we joined
1228941103 gfs2-01 add_change cg 1 counts member 1 joined 1 remove 0 failed 0
1228941103 gfs2-01 wait_conditions skip for zero started_count
1228941103 gfs2-01 send_start cg 1 id_count 1 om 0 nm 1 oj 0 nj 0
1228941103 gfs2-01 receive_start 2:1 len 92
1228941103 gfs2-01 match_change 2:1 matches cg 1
1228941103 gfs2-01 wait_messages cg 1 got all 1
1228941103 gfs2-01 pick_first_recovery_master low 2 old 0
1228941103 gfs2-01 sync_state all_nodes_new first_recovery_needed master 2
1228941103 gfs2-01 create_old_nodes all new
1228941103 gfs2-01 create_new_nodes 2 ro 0 spect 0
1228941103 gfs2-01 create_failed_journals all new
1228941103 gfs2-01 create_new_journals 2 gets jid 0
1228941103 gfs2-01 apply_recovery first start_kernel
1228941103 gfs2-01 start_kernel cg 1 member_count 1
1228941103 gfs2-01 set /sys/fs/gfs2/hardcore:gfs2-01/lock_module/block to 0
1228941103 gfs2-01 set open /sys/fs/gfs2/hardcore:gfs2-01/lock_module/block error -1 2
1228941103 gfs2-01 client_reply_join_full ci 6 result 0 hostdata=jid=0:id=3557438219:first=1
1228941103 client_reply_join gfs2-01 ci 6 result 0
1228941103 uevent: add@/fs/gfs2/hardcore:gfs2-01
1228941103 kernel: add@ hardcore:gfs2-01
1228941103 uevent: add@/fs/gfs2/hardcore:gfs2-01/lock_module
1228941103 kernel: add@ hardcore:gfs2-01
1228941103 gfs2-01 ping_kernel_mount 0
1228941103 uevent: change@/fs/gfs2/hardcore:gfs2-01/lock_module
1228941103 kernel: change@ hardcore:gfs2-01
1228941103 gfs2-01 recovery_uevent jid 0 first recovery done 0
1228941103 uevent: change@/fs/gfs2/hardcore:gfs2-01/lock_module
1228941103 kernel: change@ hardcore:gfs2-01
1228941103 gfs2-01 recovery_uevent jid 1 first recovery done 0
1228941103 gfs2-01 recovery_uevent first_done
1228941103 uevent: change@/fs/gfs2/hardcore:gfs2-01/lock_module
1228941103 kernel: change@ hardcore:gfs2-01
1228941103 gfs2-01 recovery_uevent jid 1 first recovery done 1
1228941103 gfs2-01 receive_first_recovery_done from 2 master 2 mount_client_notified 1
1228941103 gfs2-01 wait_recoveries done
1228941103 mount_done: gfs2-01 result 0
1228941103 connection 6 read error -1
1228941103 gfs2-01 receive_mount_done from 2 result 0
1228941103 gfs2-01 wait_recoveries done

Comment 22 David Teigland 2008-12-10 14:47:08 UTC
Thanks, a few things:

- There's a bug in the ownership mode that makes lock state on multiple
  nodes become out of sync; it should be fixed in the next package update.
  It's bug 474163 and is a trivial fix.

- Your config has <gfs_controld plock_ownership="1" plock_rate_limit="0"/>
  which is correct, except for something I've forgotten to do.  The plock
  code has become a part of dlm_controld, but still exists in gfs_controld
  also for backward compatibility.  The options for dlm_controld are
  <dlm plock_ownership="1" plock_rate_limit="0"/>.  What I've forgotten to
  do is make dlm_controld look for the plock options under both <dlm/>
  and <gfs_controld/>, and the same for gfs_controld.  In your case,
  the dlm_controld plock code isn't seeing your plock config options under
  <gfs_controld/>, so you should change it to <dlm plock.. /> (see the
  sketch after this list).

- The plocks appear to be working fine, and I'm not sure why it stops.
  I've been doing a fair amount of plock testing recently and haven't
  seen this problem, I'll try your test program on my own cluster.
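
To make the second bullet concrete, a small before/after sketch (the
attribute values are the ones already quoted above; nothing new is added):

  <!-- old: only gfs_controld reads the plock options here -->
  <gfs_controld plock_ownership="1" plock_rate_limit="0"/>

  <!-- new: dlm_controld, which now does the plock work, reads these -->
  <dlm plock_ownership="1" plock_rate_limit="0"/>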

Comment 23 David Teigland 2008-12-11 19:26:55 UTC
The problem is with the rate limiting, which fails to re-enable
plock processing after disabling it due to the rate limit.  If you
disable the rate limiting with <dlm plock_rate_limit="0"/> (see the
previous comment), then it should work.

Comment 24 David Teigland 2008-12-11 19:31:33 UTC
There have been a variety of problems in the history
of this bz. I'm redefining the bz to represent the one
specific problem we're currently hitting.

Comment 25 Adrian A. Sender 2008-12-15 09:04:03 UTC
With <dlm plock_ownership="1" plock_rate_limit="0"/> 

[root@core-01 ~]# ./ping_pong -rw /gfs2-01/test1 2
data increment = 1
   45952 locks/sec
- start locking on core-02, on core-01 locks/sec drops
data increment = 1
data increment = 9
data increment = 1
  2145 locks/sec

- on core-02 the process goes into D+ state

Comment 26 David Teigland 2008-12-17 17:51:48 UTC
See the info on using ping_pong at http://wiki.samba.org/index.php/Ping_pong.
The last value N needs to be at least 1 more than the number of nodes, so
in your case it needs to be at least 3.  Otherwise the two ping_pong
processes deadlock; run "group_tool dump plocks <fsname>" and the deadlock
will look something like this:

# group_tool dump plocks x
28 WR 1-1 nodeid 1 pid 21165 owner ffff880029f259c0
28 WR 0-0 nodeid 2 pid 21132 owner ffff88002484b6c0
28 WAITING WR 0-0 nodeid 1 pid 21165 owner ffff880029f259c0
28 WAITING WR 1-1 nodeid 2 pid 21132 owner ffff88002484b6c0

Comment 28 Adrian A. Sender 2009-04-13 07:31:12 UTC
Hi Dave, I was able to get my test lab plugged back in today so I confirmed the above.

[root@core-01 ~]# ./ping_ping -rw /gfs2-01/test 3
data increment = 1
 43117 locks/sec

[root@core-02 ~]# ./ping_pong -rw /gfs2-01/test1 3
data increment = 1
45230 locks/sec

[root@core-01 ~]# ./ping_ping -rw /gfs2-01/test 3
data increment = 1
data increment = 2
     2 locks/sec
[root@core-02 ~]# ./ping_pong -rw /gfs2-01/test 3
     2 locks/sec

[root@core-02 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster name="hardcore" config_version="2">  
  <dlm plock_ownership="1" plock_rate_limit="0"/>
   <cman two_node="1" expected_votes="1">
    </cman>
    <clusternodes>
      <clusternode name="core-01" votes="1" nodeid="1">
       <fence>
        <method name="single">
         <device name="human" ipaddr="192.168.0.2"/>
       </method>
      </fence>
     </clusternode>
     <clusternode name="core-02" votes="1" nodeid="2">
      <fence>
       <method name="single">
         <device name="human" ipaddr="192.168.0.3"/>
       </method>
      </fence>
    </clusternode>
   </clusternodes>
   <fence_devices>
   <fence_device name="human" agent="fence_manual"/> 
  </fence_devices>
 </cluster>

Comment 29 Adrian A. Sender 2009-04-13 08:30:34 UTC
I just updated to the latest version and am now experiencing a new problem.

[root@core-01 ~]# /etc/init.d/cman start
Starting cluster: 
   Loading modules... done
   Mounting configfs... done
   Setting network parameters... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]

[root@core-01 cluster]# tail -f /var/log/cluster/cman.log

[root@core-01 cluster]# cat /var/log/cluster/cman.log

[MAIN ] Corosync Executive Service RELEASE 'trunk'
[MAIN ] Copyright (C) 2002-2006 MontaVista Software, Inc and contributors.
[MAIN ] Copyright (C) 2006-2008 Red Hat, Inc.
[MAIN ] Corosync Executive Service: started and ready to provide service.
[MAIN ] Successfully read config from /etc/cluster/cluster.conf
[MAIN ] Successfully parsed cman config
[MAIN ] Successfully configured openais services to load
[TOTEM] Token Timeout (10000 ms) retransmit timeout (495 ms)
[TOTEM] token hold (386 ms) retransmits before loss (20 retrans)
[TOTEM] join (60 ms) send_join (0 ms) consensus (4800 ms) merge (200 ms)
[TOTEM] downcheck (1000 ms) fail to recv const (50 msgs)
[TOTEM] seqno unchanged const (30 rotations) Maximum network MTU 1500
[TOTEM] window size per rotation (50 messages) maximum messages per rotation (17 messages)
[TOTEM] send threads (0 threads)
[TOTEM] RRP token expired timeout (495 ms)
[TOTEM] RRP token problem counter (2000 ms)
[TOTEM] RRP threshold (10 problem count)
[TOTEM] RRP mode set to none.
[TOTEM] heartbeat_failures_allowed (0)
[TOTEM] max_network_delay (50 ms)
[TOTEM] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
[TOTEM] Receive multicast socket recv buffer size (288000 bytes).
[TOTEM] Transmit multicast socket send buffer size (262142 bytes).
[TOTEM] The network interface [192.168.0.2] is now up.
[TOTEM] Created or loaded sequence id 284.192.168.0.2 for this ring.
[TOTEM] entering GATHER state from 15.
[SERV ] Service initialized 'corosync CMAN membership service 2.90'
[SERV ] Service initialized 'openais cluster membership service B.01.01'
[SERV ] Service initialized 'openais event service B.01.01'
[SERV ] Service initialized 'openais checkpoint service B.01.01'
[SERV ] Service initialized 'openais availability management framework B.01.01'
[SERV ] Service initialized 'openais message service B.01.01'
[SERV ] Service initialized 'openais distributed locking service B.01.01'
[SERV ] Service initialized 'corosync extended virtual synchrony service'
[SERV ] Service initialized 'corosync configuration service'
[SERV ] Service initialized 'corosync cluster closed process group service v1.01'
[SERV ] Service initialized 'corosync cluster config database access v1.01'
[SYNC ] Not using a virtual synchrony filter.
[TOTEM] Creating commit token because I am the rep.
[TOTEM] Saving state aru 0 high seq received 0
[TOTEM] Storing new sequence id for ring 120
[TOTEM] entering COMMIT state.
[TOTEM] entering RECOVERY state.
[TOTEM] position [0] member 192.168.0.2:
[TOTEM] previous ring seq 284 rep 192.168.0.2
[TOTEM] aru 0 high delivered 0 received flag 1
[TOTEM] Did not need to originate any messages in recovery.
[TOTEM] Sending initial ORF token
[CLM  ] CLM CONFIGURATION CHANGE
[CLM  ] New Configuration:
[CLM  ] Members Left:
[CLM  ] Members Joined:
[CLM  ] CLM CONFIGURATION CHANGE
[CLM  ] New Configuration:
[CLM  ] 	r(0) ip(192.168.0.2) 
[CLM  ] Members Left:
[CLM  ] Members Joined:
[CLM  ] 	r(0) ip(192.168.0.2) 
[SYNC ] This node is within the primary component and will provide service.
[TOTEM] entering OPERATIONAL state.
[CLM  ] got nodejoin message 192.168.0.2
[confdb.c:0271] lib_init_fn: conn=0x1b7e620
[confdb.c:0327] object_find_destroy for conn=0x1b7e620, 46
[confdb.c:0327] object_find_destroy for conn=0x1b7e620, 47
[confdb.c:0277] exit_fn for conn=0x1b7e620
[confdb.c:0271] lib_init_fn: conn=0x1b7e900
[confdb.c:0327] object_find_destroy for conn=0x1b7e900, 48
[confdb.c:0327] object_find_destroy for conn=0x1b7e900, 49
[confdb.c:0327] object_find_destroy for conn=0x1b7e900, 50
[confdb.c:0327] object_find_destroy for conn=0x1b7e900, 51
[confdb.c:0277] exit_fn for conn=0x1b7e900
[... the log continues with dozens more identical confdb lib_init_fn / object_find_destroy / exit_fn sequences ...]
[confdb.c:0277] exit_fn for conn=0x1b7c2e0
[confdb.c:0271] lib_init_fn: conn=0x1b7d950
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 218
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 219
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 220
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 221
[confdb.c:0277] exit_fn for conn=0x1b7d950
[confdb.c:0271] lib_init_fn: conn=0x1b7d7d0
[confdb.c:0327] object_find_destroy for conn=0x1b7d7d0, 222
[confdb.c:0327] object_find_destroy for conn=0x1b7d7d0, 223
[confdb.c:0327] object_find_destroy for conn=0x1b7d7d0, 224
[confdb.c:0327] object_find_destroy for conn=0x1b7d7d0, 225
[confdb.c:0277] exit_fn for conn=0x1b7d7d0
[confdb.c:0271] lib_init_fn: conn=0x1b7d950
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 226
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 227
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 228
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 229
[confdb.c:0277] exit_fn for conn=0x1b7d950
[confdb.c:0271] lib_init_fn: conn=0x1b7d950
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 230
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 231
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 232
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 233
[confdb.c:0277] exit_fn for conn=0x1b7d950
[confdb.c:0271] lib_init_fn: conn=0x1b7d950
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 234
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 235
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 236
[confdb.c:0327] object_find_destroy for conn=0x1b7d950, 237
[confdb.c:0277] exit_fn for conn=0x1b7d950
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 238
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 239
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 242
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 243
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1b7d930
[confdb.c:0327] object_find_destroy for conn=0x1b7d930, 244
[confdb.c:0327] object_find_destroy for conn=0x1b7d930, 245
[confdb.c:0327] object_find_destroy for conn=0x1b7d930, 246
[confdb.c:0327] object_find_destroy for conn=0x1b7d930, 247
[confdb.c:0277] exit_fn for conn=0x1b7d930
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 248
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 249
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 250
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 251
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 252
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 253
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 254
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 255
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 256
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 257
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 258
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 259
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1e92cc0
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 260
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 261
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 262
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 263
[confdb.c:0277] exit_fn for conn=0x1e92cc0
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 264
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 265
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 266
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 267
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1e92cc0
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 268
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 269
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 270
[confdb.c:0327] object_find_destroy for conn=0x1e92cc0, 271
[confdb.c:0277] exit_fn for conn=0x1e92cc0
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 272
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 273
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 274
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 275
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 276
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 277
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 278
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 279
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 280
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 281
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 282
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 283
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1e931f0
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 284
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 285
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 286
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 287
[confdb.c:0277] exit_fn for conn=0x1e931f0
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 288
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 289
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1e931f0
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 290
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 291
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 292
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 293
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 294
[confdb.c:0277] exit_fn for conn=0x1e931f0
[confdb.c:0271] lib_init_fn: conn=0x1b7c410
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 295
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 296
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 297
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 298
[confdb.c:0327] object_find_destroy for conn=0x1b7c410, 299
[confdb.c:0277] exit_fn for conn=0x1b7c410
[confdb.c:0271] lib_init_fn: conn=0x1e931f0
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 300
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 301
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 302
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 303
[confdb.c:0327] object_find_destroy for conn=0x1e931f0, 304
[confdb.c:0277] exit_fn for conn=0x1e931f0
[TOTEM] entering GATHER state from 11.
[TOTEM] Creating commit token because I am the rep.
[TOTEM] Saving state aru 1a high seq received 1a
[TOTEM] Storing new sequence id for ring 128
[TOTEM] entering COMMIT state.
[TOTEM] entering RECOVERY state.
[TOTEM] position [0] member 192.168.0.2:
[TOTEM] previous ring seq 288 rep 192.168.0.2
[TOTEM] aru 1a high delivered 1a received flag 1
[TOTEM] position [1] member 192.168.0.3:
[TOTEM] previous ring seq 292 rep 192.168.0.3
[TOTEM] aru a high delivered a received flag 1
[TOTEM] Did not need to originate any messages in recovery.
[TOTEM] Sending initial ORF token
[CLM  ] CLM CONFIGURATION CHANGE
[CLM  ] New Configuration:
[CLM  ] 	r(0) ip(192.168.0.2) 
[CLM  ] Members Left:
[CLM  ] Members Joined:
[CLM  ] CLM CONFIGURATION CHANGE
[CLM  ] New Configuration:
[CLM  ] 	r(0) ip(192.168.0.2) 
[CLM  ] 	r(0) ip(192.168.0.3) 
[CLM  ] Members Left:
[CLM  ] Members Joined:
[CLM  ] 	r(0) ip(192.168.0.3) 
[SYNC ] This node is within the primary component and will provide service.
[TOTEM] entering OPERATIONAL state.
[CLM  ] got nodejoin message 192.168.0.2
[CLM  ] got nodejoin message 192.168.0.3
[CPG  ] got joinlist message from node 1
[confdb.c:0271] lib_init_fn: conn=0x1cb5120
[confdb.c:0327] object_find_destroy for conn=0x1cb5120, 305
[confdb.c:0327] object_find_destroy for conn=0x1cb5120, 306
[confdb.c:0277] exit_fn for conn=0x1cb5120
[... the same lib_init_fn / object_find_destroy / exit_fn cycle continues
 after the ring reforms, the destroy counter climbing from 307 through 388
 across connections 0x1cb4e80 and 0x1e94e60 ...]
[confdb.c:0327] object_find_destroy for conn=0x1e94e60, 387
[confdb.c:0327] object_find_destroy for conn=0x1e94e60, 388
[confdb.c:0277] exit_fn for conn=0x1e94e60

[root@core-01 ~]# /etc/init.d/cman status
ccsd is stopped
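
For anyone hitting the same behaviour: the plock rate limiter at the centre
of this bug is tunable from /etc/cluster/cluster.conf via the gfs_controld
plock_rate_limit option (see gfs_controld(8)), and setting it to 0 disables
rate limiting entirely, which can serve as a stopgap until the re-enable
logic is fixed. Below is a minimal sketch, not the reporter's actual
configuration: the cluster name, node names and config_version are
placeholders, and fencing sections are omitted.

  <?xml version="1.0"?>
  <!-- Sketch only: names and config_version are placeholders; fencing
       sections are omitted.  plock_rate_limit="0" disables the plock
       rate limiter (the documented default is 100 plocks/sec). -->
  <cluster name="example" config_version="1">
    <cman two_node="1" expected_votes="1"/>
    <gfs_controld plock_rate_limit="0"/>
    <clusternodes>
      <clusternode name="node-01" nodeid="1" votes="1"/>
      <clusternode name="node-02" nodeid="2" votes="1"/>
    </clusternodes>
  </cluster>

Whether a running gfs_controld picks the new value up on the fly is
version-dependent, so assume the daemon (or the whole cman stack) needs a
restart before the change takes effect.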

Comment 30 Bug Zapper 2009-11-18 07:45:38 UTC
This message is a reminder that Fedora 10 is nearing its end of life.
Approximately 30 days from now, Fedora will stop maintaining and issuing
updates for Fedora 10.  It is Fedora's policy to close all bug reports
from releases that are no longer maintained.  At that time this bug will
be closed as WONTFIX if it remains open with a Fedora 'version' of '10'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 10's end of life.

Bug Reporter: Thank you for reporting this issue, and we are sorry that 
we may not be able to fix it before Fedora 10 reaches end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora, please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

